CN117197902A - Intelligent prediction system and method for sow delivery

Intelligent prediction system and method for sow delivery

Info

Publication number
CN117197902A
CN117197902A (application CN202311465835.XA)
Authority
CN
China
Prior art keywords
sow
data
module
time
delivery
Prior art date
Legal status
Granted
Application number
CN202311465835.XA
Other languages
Chinese (zh)
Other versions
CN117197902B (en)
Inventor
肖德琴
刘克坚
黄一桂
闫志广
康俊琪
陈芳玲
周圣杰
陈淼彬
Current Assignee
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date
Filing date: 2023-11-07
Publication date: 2023-12-08
Application filed by South China Agricultural University
Priority to CN202311465835.XA
Publication of CN117197902A (2023-12-08)
Application granted
Publication of CN117197902B (2024-01-30)
Legal status: Active


Landscapes

  • Image Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an intelligent prediction system and method for sow delivery, relating to the field of pig farming. The invention addresses the shortcomings of the prior art: high equipment and technical requirements, adverse effects on sow health, prediction from posture information alone, a narrow detection range, inability to analyze other factors that may affect delivery time, and low stability and robustness.

Description

Intelligent prediction system and method for sow delivery
Technical Field
The invention relates to the field of pig industry, in particular to an intelligent prediction system and method for sow delivery.
Background
The pig industry occupies a very important position in the Chinese economy, and as the industry develops, achieving intensive, fine-grained and intelligent production management has become a key challenge. The sow is one of the main breeding animals in pig farming, and its reproductive capacity and productivity matter to the whole industry. Sows experience extreme stress during farrowing, which can threaten their own health and even lead to high piglet mortality. Accurately anticipating the delivery time is therefore of great significance for pig breeding: predicting when a sow will farrow helps breeders monitor and intervene in the delivery in time, avoiding dystocia and other complications and safeguarding the safety and health of both sow and piglets. For sows that require manual assistance or oxytocin during delivery, timely support and treatment ensure postpartum health, reduce piglet mortality, and improve production efficiency.
At present, researchers have designed a number of sow delivery time prediction methods based on modern technology, but existing methods have high equipment and technical requirements, affect sow health, predict from posture information alone, have a narrow detection range, cannot analyze other factors that may affect delivery time, and exhibit low stability and robustness.
Disclosure of Invention
Aiming at the above defects in the prior art, the intelligent prediction system and method for sow delivery provided by the invention solve the problems that the prior art has high equipment and technical requirements, affects sow health, predicts from posture information alone, has a narrow detection range, cannot analyze other factors that may affect delivery time, and has low stability and robustness.
In order to achieve the aim of the invention, the invention adopts the following technical scheme: the intelligent prediction system for sow delivery comprises an AI edge equipment module, an environmental factor acquisition module, a network transmission module, a video image data storage module, a local server, a cloud server and a terminal real-time display module, wherein the network transmission module comprises a switch and a sensor gateway, the switch is respectively connected with the AI edge equipment module, the video image data storage module and the local server through network cables, the sensor gateway is connected with the local server through network cables, the environmental factor acquisition module is in communication connection with the sensor gateway, the local server is in communication connection with the cloud server, and the cloud server is in communication connection with the terminal real-time display module;
the AI edge equipment module is used for acquiring and processing video image data of sows in the late gestation period in the sow limit fence environment; the environmental factor acquisition module is used for acquiring environmental factor data of the pig house; the network transmission module is used for data transmission among the AI edge equipment module, the environmental factor acquisition module, the video image data storage module and the local server; the video image data storage module is used for long-term storage of the video image data; the local server is used for overall management of the transmitted data and for data processing interaction; the cloud server is used for deploying the sow delivery time prediction model; and the terminal real-time display module is used for displaying the predicted sow delivery time and monitoring the physiological health condition of the sow in real time.
The beneficial effect of the above scheme is: the invention provides an intelligent prediction system for sow delivery that uses the AI edge equipment module, the environmental factor acquisition module, the network transmission module, the video image data storage module and the local server to acquire, transmit, store and process data. It solves the problems that the prior art has high equipment and technical requirements, affects sow health, predicts from posture information alone, has a narrow detection range, cannot analyze other factors that may affect delivery time, and has low stability and robustness.
Further, the AI edge equipment module comprises a plurality of AI edge equipment, the AI edge equipment is arranged on a ceiling right above each sow limit fence, the AI edge equipment comprises a camera and an AI calculation core unit, the camera is positioned below the AI calculation core unit and is movably connected with the AI calculation core unit, the camera comprises an RGB camera module, a thermal infrared imaging module and a depth camera module, and the RGB camera module, the thermal infrared imaging module and the depth camera module are positioned in the same plane;
the AI computing core unit adopts a Jetson Xavier NX submodule, the Jetson Xavier NX submodule is integrated in an industrial control machine shell and is provided with a power supply unit and a heat dissipation unit, and the RGB camera module, the thermal infrared imaging module and the depth camera module are all connected with the Jetson Xavier NX submodule through MIPI CSI interfaces.
The beneficial effects of the above further scheme are: the camera collects the RGB image data and thermal infrared data, and the AI computing core unit detects the sow posture. The Jetson Xavier NX module adopted by the AI computing core unit integrates a 6-core ARM CPU and a 384-core NVIDIA Volta GPU, can provide up to 21 TOPS of AI computing capacity, and supports the deployment of various neural network algorithms; the MIPI CSI interface is a point-to-point serial interface that supports high-speed image data transmission.
Further, the environmental factor acquisition module comprises environmental sensors, each comprising a temperature and humidity sensor, a carbon dioxide concentration sensor and an ammonia concentration sensor, with one environmental sensor arranged for every three sow limit fences.
The beneficial effects of the above further scheme are: the technical scheme takes into account the influence of environmental factors such as temperature, humidity, carbon dioxide and ammonia on sow delivery time, improving prediction accuracy and robustness.
In addition, the invention adopts the following technical scheme: an intelligent prediction method for sow delivery comprises the following steps:
s1: the method comprises the steps of continuously acquiring RGB video data, thermal infrared data and environmental factor data of sows in the late gestation period under a sow limit fence by using an AI edge equipment module and an environmental factor acquisition module;
s2: video image preprocessing is carried out on the collected RGB video data, the preprocessed RGB video data is input into a sow gesture detection algorithm which is deployed in an AI computing core unit and is based on key point detection, and the sow gesture is detected;
s3: when the posture of the sow is detected to be lying on the side, recording a detection result of the frame, extracting RGB video data of the sow when the sow is lying on the side according to the detection result, and extracting thermal infrared data of the sow at a corresponding time point when the sow is lying on the side from the acquired thermal infrared data;
S4: performing video framing and image enhancement preprocessing on RGB video data of the sow in lateral lying and thermal infrared data of the sow in lateral lying;
s5: inputting the preprocessed RGB video data of the sow during lateral lying into a sow body shaking detection algorithm based on an optical flow method in a cloud server, and extracting time sequence data of sow body shaking;
S6: inputting the preprocessed thermal infrared data of the sow during lateral lying into the sow body temperature extraction algorithm based on instance segmentation and PCA in the cloud server, and extracting the time series data of the sow body temperature;
s7: performing dimension reduction processing on the collected environmental factor data through characteristic engineering to obtain time sequence data of the environmental factor, and performing data preprocessing on the time sequence data of the environmental factor, the time sequence data of sow body shake and the time sequence data of sow body temperature;
s8: and inputting each preprocessed time series data into a delivery time prediction algorithm based on LSTM-KF in the cloud server to obtain a sow delivery time prediction result, and completing intelligent prediction of sow delivery.
The beneficial effect of the above scheme is: the method recognizes the sow posture and intercepts only the video recorded while the sow lies on its side, reducing the computation required by the prediction model. The sow delivery time prediction model mainly comprises three algorithms: the sow body shake detection algorithm based on the optical flow method, the sow body temperature extraction algorithm based on instance segmentation and PCA, and the sow delivery time prediction algorithm based on LSTM-KF. Together they realize comprehensive prediction of sow delivery, with LSTM-KF performing dynamic prediction for more accurate results.
Further, the sow posture detection algorithm based on key point detection in S2 adopts an improved GS-yolov7, in which the CBS modules in the neck network of the original yolov7 are replaced with GSConv, and the pose estimation branch GS-yolov7-pose of GS-yolov7 is used to detect the sow posture, comprising the following steps:
S2-1: processing each frame of the input video with the GS-yolov7-pose backbone network, extracting the low-level, mid-level and high-level features of the image, and feeding them into the neck network;
S2-2: performing multi-scale feature fusion on the extracted low-level, mid-level and high-level image features with the neck network to obtain feature maps carrying multi-scale feature information;
S2-3: processing the feature maps carrying multi-scale feature information with the detection head, obtaining three anchor boxes of different shapes for each cell;
S2-4: performing regression based on the anchor boxes to predict the bounding box coordinates, bounding box sizes and key point coordinates; calculating the bounding box loss, key point loss and total loss; using the total loss as the supervision signal of the network to update and optimize the network weights; and obtaining the final bounding boxes and key points, completing sow posture detection.
The beneficial effects of the above further scheme are: the invention improves on the traditional yolov7 so that real-time detection can be achieved, and sow posture detection is completed through this scheme.
Further, in S5, the time series data of sow body shake is extracted by using the sow body shake detection algorithm based on the optical flow method, comprising the following steps:
S5-1: inputting the RGB video data of the sow during lateral lying into FlowNet2.0 frame by frame, with FlowNet2.0 outputting a two-dimensional optical flow field for each pair of adjacent frames;
S5-2: calculating the average value of each optical flow field to obtain a quantized value representing sow body shake, where the average value $\bar{F}$ of an optical flow field is calculated as:

$\bar{F} = \frac{1}{N} \sum_{i=1}^{N} \left\| \mathbf{v}_i \right\|$

wherein $N$ is the number of pixel points in the optical flow field, $\mathbf{v}_i$ is the displacement vector of pixel point $i$ in the optical flow field, and $i$ indexes the pixel points;
S5-3: adding the quantized values representing sow body shake obtained from all adjacent frames within 1 second to obtain the sow body shake value $S$ for that second:

$S = \sum_{k=1}^{n-1} \bar{F}_k$

wherein $n$ is the number of image frames in 1 second, $k$ indexes the frames within that second, and $\bar{F}_k$ is the $\bar{F}$ value obtained from frame $k$ and frame $k+1$;
S5-4: computing $S$ for each 1-second interval in time order and storing the values in a one-dimensional array to obtain the time series data of sow body shake.
The beneficial effects of the above further scheme are: the detected sow body shake is used in the comprehensive prediction of sow delivery time, and calculating the average of all displacement vectors in the optical flow field accurately characterizes the degree of sow body shake.
Further, in S6, the time series data of the sow body temperature is extracted by using the sow body temperature extraction algorithm based on instance segmentation and PCA, comprising the following sub-steps:
S6-1: performing instance segmentation on a plurality of key parts of the sow by using the instance segmentation branch GS-yolov7-seg of the improved GS-yolov7 to obtain a mask of each key part;
S6-2: processing the mask of each key part to obtain the coordinates, on the original image, of all pixel points in the mask area;
S6-3: extracting the regional temperature of each key part from the temperature matrix according to the pixel point coordinates, and computing a regional temperature value for each key part to represent the current temperature of that part;
S6-4: ordering the temperatures of the key parts in time sequence to form an $m \times n$ temperature matrix $T$, where $m$ is the number of rows of the matrix (time points) and $n$ is the number of key parts;
S6-5: performing PCA on the temperature matrix $T$ to obtain the time series data of the sow body temperature.
The beneficial effects of the above further scheme are: a sow's temperature rises gradually before and around delivery, so the extracted temperature change is used in the comprehensive prediction of delivery time; meanwhile, principal component analysis reduces and fuses the multi-dimensional data, so the sow body temperature can be extracted more accurately and comprehensively.
Further, in S6-5, PCA is performed on the temperature matrix $T$, comprising the following sub-steps:
S6-5-1: zero-centering the data of each column of the temperature matrix $T$ to obtain an $m \times n$ matrix $X$;
S6-5-2: computing the covariance matrix of $X$ to obtain the $n \times n$ covariance matrix $C = \frac{1}{m} X^{\mathsf{T}} X$;
S6-5-3: calculating the eigenvalues and eigenvectors of the covariance matrix $C$ by eigenvalue decomposition;
S6-5-4: normalizing the eigenvectors, and taking the eigenvector corresponding to the largest eigenvalue as the column vector $w$;
S6-5-5: multiplying the temperature matrix $T$ by the column vector $w$ to obtain an $m \times 1$ matrix $Y$; matrix $Y$ represents the final fused total temperature, and storing it in a one-dimensional array yields the time series data of the sow body temperature.
The beneficial effects of the above-mentioned further scheme are: through the technical scheme, the temperature matrix is subjected to principal component analysis, and multidimensional data is subjected to dimension reduction fusion.
Further, the data preprocessing in S7 includes the following sub-steps:
s7-1: performing data cleaning and data interpolation on the time series data of the body shake of the sow and the time series data of the body temperature of the sow;
s7-2: and carrying out data normalization on the data cleaning and interpolation results and the time sequence data of the environmental factors to finish data preprocessing.
The beneficial effects of the above-mentioned further scheme are: noise and abnormal values are removed from the data through data cleaning, and as only body shake and body temperature data of sows in lateral lying are extracted, the data are not completely continuous in time sequence, and the missing is filled by using a data interpolation method, so that the data are completely continuous in time sequence, and adverse effects caused by singular sample data are eliminated through normalization.
Further, the sow delivery time prediction result is obtained in S8, and the method comprises the following sub-steps:
s8-1: inputting each preprocessed time series data into the LSTM to predict, obtaining a static time series of the residual time of the start of the delivery, and defining the start of the delivery as the time point when the first piglet falls on the ground;
S8-2: and constructing a KF model, setting initial parameters, dynamically adjusting a time sequence of the static delivery starting residual time by using a time updating equation and a measurement updating equation to obtain a dynamically adjusted predicted time sequence, and obtaining a sow delivery time prediction result by representing the residual time from the occurrence of the sow delivery behavior by the dynamically adjusted predicted time sequence.
The beneficial effects of the above further scheme are: by combining LSTM and KF, the invention dynamically predicts the remaining time until the start of delivery and dynamically updates the state variables from real-time data, adapting well to sudden changes in the time series and achieving more accurate prediction.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent prediction system for sow delivery.
Fig. 2 is a flow chart of an intelligent prediction method for sow delivery.
Fig. 3 is a general technical roadmap of an intelligent prediction method for sow delivery.
Fig. 4 is a schematic diagram of the network structure of the improved model GS-yolov7.
Fig. 5 is a schematic structural diagram of the lightweight convolution technique GSConv.
Fig. 6 is a schematic diagram of the structure of the CBS module and DWCBS module.
Detailed Description
The invention will be further described with reference to the drawings and specific examples.
Embodiment 1: as shown in fig. 1, the intelligent prediction system for sow delivery comprises an AI edge equipment module, an environmental factor acquisition module, a network transmission module, a video image data storage module, a local server, a cloud server and a terminal real-time display module, wherein the network transmission module comprises a switch and a sensor gateway; the switch is connected with the AI edge equipment module, the video image data storage module and the local server through network cables; the sensor gateway is connected with the local server through a network cable; the environmental factor acquisition module is in communication connection with the sensor gateway; the local server is in communication connection with the cloud server; and the cloud server is in communication connection with the terminal real-time display module;
the AI edge equipment module is used for acquiring and processing video image data of sows in the late gestation period in the sow limit fence environment; the environmental factor acquisition module is used for acquiring environmental factor data of the pig house; the network transmission module is used for data transmission among the AI edge equipment module, the environmental factor acquisition module, the video image data storage module and the local server; the video image data storage module is used for long-term storage of the video image data; the local server is used for overall management of the transmitted data and for data processing interaction; the cloud server is used for deploying the sow delivery time prediction model; and the terminal real-time display module is used for displaying the predicted sow delivery time and monitoring the physiological health condition of the sow in real time.
In one embodiment of the invention, the AI edge equipment module continuously collects and processes RGB video and thermal infrared data of late-gestation sows in the limit fence around the clock. The raw data collected by the AI edge equipment is transmitted through the network transmission module to the video image data storage module, while the RGB video data and thermal infrared data of the sow lying on its side, extracted after on-device processing, are transmitted to the local server. Meanwhile, the environmental factor acquisition module continuously collects environmental factor data around the clock and transmits it to the local server through the sensor gateway. The network video recorder (NVR) in the video image data storage module stores the RGB video data and thermal infrared data long-term. The local server comprehensively manages the data transmitted from the AI edge equipment module and the environmental factor acquisition module, and handles data interaction with the cloud server and the terminal equipment. A series of data preprocessing algorithms and the sow delivery time prediction model are deployed on the cloud server to predict the start of delivery and the remaining time; the model consists of three algorithms: the sow body shake detection algorithm based on the optical flow method, the sow body temperature extraction algorithm based on instance segmentation and PCA, and the sow delivery time prediction algorithm based on LSTM-KF. Finally, the sow body shake information, sow body temperature information, pig house environment information and prediction results are transmitted from the cloud server to the terminal display module. The terminal display module comprises a PC client and a mobile APP, which communicate with the cloud server via the SFTP (Secure File Transfer Protocol) protocol.
The AI edge equipment module comprises a plurality of AI edge equipment, the AI edge equipment is arranged on a ceiling right above each sow limit fence, the AI edge equipment comprises a camera and an AI calculation core unit, the camera is positioned below the AI calculation core unit and is movably connected with the AI calculation core unit, the camera comprises an RGB camera module, a thermal infrared imaging module and a depth camera module, and the RGB camera module, the thermal infrared imaging module and the depth camera module are positioned in the same plane;
the AI computing core unit adopts a Jetson Xavier NX submodule, the Jetson Xavier NX submodule is integrated in an industrial control machine shell and is provided with a power supply unit and a heat dissipation unit, and the RGB camera module, the thermal infrared imaging module and the depth camera module are all connected with the Jetson Xavier NX submodule through MIPI CSI interfaces.
In one embodiment of the invention, the casing of the AI edge device is made of three-proof materials, providing water, dust and corrosion resistance. The camera's shooting angle can be adjusted as needed, and the AI computing core can realize different functions through different algorithms. In this implementation, an AI edge device is installed on the ceiling (2.0 meters above the ground) directly above each crate, and sow RGB and thermal infrared data are captured from a top-down view by the camera. The AI computing core unit receives the raw data and forwards it to the video image data storage module; a video image preprocessing algorithm denoises and enhances the RGB video, and the preprocessed RGB video data is input into the key point-based sow posture detection algorithm to detect the sow posture. If the sow is detected as not lying on its side, the frame is ignored; if it is detected lying on its side, the corresponding frame is marked. Video frames of the sow lying on its side are then extracted according to the detection results, the corresponding thermal infrared data at the matching time points are extracted from the thermal infrared image data, and finally the RGB video data and thermal infrared data of the sow lying on its side are transmitted to the local server.
In this embodiment, the RGB camera module is an industrial camera with 1080P resolution, a CMOS sensor and a 30 fps frame rate, ensuring image quality. The thermal infrared imaging module is an uncooled micro thermal imaging module with 320 × 240 resolution, a 30 fps frame rate and an 8–14 μm waveband, able to work at room temperature. The depth camera module is a depth camera with 640 × 480 resolution and a 0.6–8 m working distance, acquiring 3D depth information of the scene.
The cameras are controlled as follows: a control program on the Jetson Xavier NX module sends control commands to each camera through the MIPI CSI interface, setting parameters such as image format, resolution and exposure time. After receiving a command, the camera applies the settings and returns status information.
Internal data transmission and communication strategy: (1) internal interfaces: the cameras connect to the Jetson Xavier NX through MIPI CSI interfaces; the Jetson Xavier NX provides PCIe and USB 3.0 interfaces, and an M.2 interface connects an SSD; (2) data streaming: images collected by the cameras are transmitted to the Jetson over MIPI CSI for processing, processing results are written to the SSD over PCIe, and image data and results to be uploaded externally leave through the gigabit Ethernet interface; (3) communication protocols: camera control commands use a custom control protocol, the Jetson and the SSD communicate point-to-point via the NVMe protocol, and data uploading uses standard protocols such as TCP/IP, RTSP and MQTT; (4) data stream optimization: DMA direct memory access and zero-copy techniques avoid unnecessary data copies, multi-core parallel computing and GPU heterogeneous computing accelerate data processing, and mechanisms such as memory pools and circular storage optimize memory use; (5) error handling: a timeout-retransmission mechanism handles network communication faults, ECC error correction improves storage reliability, and heartbeat detection monitors link state; (6) security: key data transmission connections are encrypted and an access control mechanism prevents unauthorized access.
Information communication with other external modules: (1) communication with the NVR: using the RTSP protocol, the AI edge device acts as an RTSP client and continuously pushes the H.264/H.265-encoded video stream to the NVR server; (2) communication with the local server: using the MQTT protocol, the AI edge device acts as an MQTT client and publishes detection results to a designated topic, which the local server subscribes to in order to obtain the processing results (a minimal publishing sketch follows below); (3) update mechanism: a database records information such as the IP addresses, port numbers and topics of the NVR and the local server; the AI edge device periodically queries the database for the latest information and updates its connection configuration; (4) heartbeat detection: a heartbeat packet mechanism confirms whether connections are normal and reconnects quickly on anomalies; (5) flow control: a per-second data upload rate threshold is preset and traffic is monitored in real time to avoid network congestion; (6) security policy: VLAN isolation and ACL access control are applied on the network switch, and the video stream is encrypted; (7) logging: key connection, processing and error logs are recorded to facilitate problem tracking.
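For illustration of item (2) above, a minimal sketch of the edge-side MQTT publishing path follows, assuming the paho-mqtt 1.x Python client API; the broker address, topic name and payload fields are hypothetical placeholders, not values specified by this disclosure.

```python
# Minimal sketch: AI edge device publishing a detection result over MQTT.
# Assumes the paho-mqtt 1.x client API; broker host, port, topic and payload
# fields are hypothetical placeholders.
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.1.10"   # local server running the MQTT broker (assumed)
TOPIC = "farm/pen01/posture"   # per-pen detection topic (assumed naming)

client = mqtt.Client(client_id="ai-edge-pen01")
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_start()

# Publish one detection result; the local server subscribes to the same topic.
payload = {
    "timestamp": time.time(),
    "posture": "lateral_lying",   # output of the keypoint-based posture detector
    "confidence": 0.93,
}
client.publish(TOPIC, json.dumps(payload), qos=1)

client.loop_stop()
client.disconnect()
```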
The environmental factor acquisition module comprises environmental sensors, each comprising a temperature and humidity sensor, a carbon dioxide concentration sensor and an ammonia concentration sensor, with one environmental sensor arranged for every three sow limit fences.
Embodiment 2: as shown in fig. 2 and fig. 3, an intelligent prediction method for sow delivery comprises the following steps:
s1: the method comprises the steps of continuously acquiring RGB video data, thermal infrared data and environmental factor data of sows in the late gestation period under a sow limit fence by using an AI edge equipment module and an environmental factor acquisition module;
s2: video image preprocessing is carried out on the collected RGB video data, the preprocessed RGB video data is input into a sow gesture detection algorithm which is deployed in an AI computing core unit and is based on key point detection, and the sow gesture is detected;
in this embodiment, image enhancement techniques such as bilateral filtering, adaptive histogram equalization and Laplacian sharpening are applied to the sow RGB video;
s3: when the posture of the sow is detected to be lying on the side, recording a detection result of the frame, extracting RGB video data of the sow when the sow is lying on the side according to the detection result, and extracting thermal infrared data of the sow at a corresponding time point when the sow is lying on the side from the acquired thermal infrared data;
in this embodiment, the sow lateral-lying posture is defined as: one side of the sow's body in contact with the ground, with the abdomen, udder and limbs visible;
s4: performing video framing and image enhancement preprocessing on RGB video data of the sow in lateral lying and thermal infrared data of the sow in lateral lying;
In the embodiment, video framing and image enhancement are performed on RGB video data, and image enhancement is performed on thermal infrared data;
s5: inputting the preprocessed RGB video data of the sow during lateral lying into a sow body shaking detection algorithm based on an optical flow method in a cloud server, and extracting time sequence data of sow body shaking;
S6: inputting the preprocessed thermal infrared data of the sow during lateral lying into the sow body temperature extraction algorithm based on instance segmentation and PCA in the cloud server, and extracting the time series data of the sow body temperature;
s7: performing dimension reduction processing on the collected environmental factor data through characteristic engineering to obtain time sequence data of the environmental factor, and performing data preprocessing on the time sequence data of the environmental factor, the time sequence data of sow body shake and the time sequence data of sow body temperature;
s8: and inputting each preprocessed time series data into a delivery time prediction algorithm based on LSTM-KF in the cloud server to obtain a sow delivery time prediction result, and completing intelligent prediction of sow delivery.
The sow posture detection algorithm based on key point detection in S2 adopts the improved GS-yolov7, in which the CBS modules in the neck network of the original yolov7 are replaced with GSConv, as shown in fig. 4, and the pose estimation branch GS-yolov7-pose of GS-yolov7 is used to detect the sow posture, comprising the following sub-steps:
S2-1: processing each frame of image of the input video by utilizing a GS-yolov 7-phase backbone network, extracting the characteristics of the bottom layer, the middle layer and the high layer of the image, and respectively inputting the characteristics into a neck network;
s2-2: the neck network is utilized to conduct multi-scale feature fusion on the extracted features of the bottom layer, the middle layer and the high layer of the image, and a feature map with multi-scale feature information is obtained;
s2-3: processing the feature map with the multi-scale feature information by using a detection head, and obtaining three anchor frames with different shapes for each cell;
the image will be divided into a grid of cells (e.g. 13 x 13, suitably arranged according to the duty cycle of the detected object in the image), each cell predicting 3 anchor frames of different sizes.
S2-4: and carrying out regression based on the anchor frame, predicting the boundary frame coordinates, the boundary frame sizes and the key point coordinates, calculating the boundary frame loss, the key point loss and the total loss, taking the total loss as a supervision signal of the network, updating and optimizing the network weight, obtaining the final boundary frame and the key point, and finishing the posture detection of the sow.
The anchor frame with the highest IoU overlap ratio with the ground trunk is found first, the predicted boundary frame coordinates are calculated relative to the center point of the matched anchor frame, the predicted boundary frame size is calculated relative to the size of the matched anchor frame, and the predicted key point position is also calculated relative to the center point of the matched anchor frame. The bounding box loss is then calculated using CIoU, and the keypoint loss is calculated using OKS, the total loss being a linear combination of the bounding box loss and the keypoint loss.
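As a rough illustration of this loss computation (not the patent's exact formulation), the sketch below combines a CIoU bounding-box loss with an OKS-based key point loss into a weighted total; the weights and the simplified OKS term (visibility flags omitted) are assumptions.

```python
# Sketch of the S2-4 total loss: CIoU bounding-box loss plus an OKS-based
# key point loss, combined linearly. Weights and the simplified OKS term
# (visibility flags omitted) are illustrative assumptions.
import torch
from torchvision.ops import complete_box_iou_loss


def oks_loss(pred_kpts, gt_kpts, area, k):
    """pred_kpts, gt_kpts: (K, 2) key point coordinates; area: object scale s^2;
    k: (K,) per-key-point falloff constants (as in the COCO OKS metric)."""
    d2 = ((pred_kpts - gt_kpts) ** 2).sum(dim=-1)        # squared distances
    oks = torch.exp(-d2 / (2 * area * k ** 2)).mean()    # similarity in [0, 1]
    return 1.0 - oks


def total_loss(pred_box, gt_box, pred_kpts, gt_kpts, area, k,
               w_box=1.0, w_kpt=1.0):
    box_l = complete_box_iou_loss(pred_box, gt_box, reduction="mean")
    kpt_l = oks_loss(pred_kpts, gt_kpts, area, k)
    return w_box * box_l + w_kpt * kpt_l                 # network supervision signal
```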
As shown in fig. 5, GSConv is a hybrid of standard convolution, depth-wise separable convolution and channel shuffle. A feature map with C1 channels is input into GSConv. First, a CBS module, comprising a standard convolution layer, a BN (Batch Normalization) layer and a SiLU activation function, outputs a feature map with C2/2 channels. That feature map is input into a DWCBS module, comprising a depth-wise separable convolution layer, a BN layer and a SiLU activation function, which outputs a feature map with C2/2 channels. The feature maps output by the CBS module and the DWCBS module are concatenated to form a feature map with C2 channels; the CBS and DWCBS module structures are shown in fig. 6. Finally, channel shuffling is performed through a shuffle layer, outputting a feature map with C2 channels. The shuffle is implemented as a matrix transpose operation (though not limited to this).
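Read as code, the GSConv description above corresponds to the following PyTorch sketch; this is a minimal re-implementation from the text, not the authors' code, and the depthwise kernel size is an assumption.

```python
# Minimal PyTorch sketch of GSConv as described above: a CBS branch producing
# C2/2 channels, a DWCBS branch producing another C2/2, concatenation, then a
# channel shuffle via transpose. Re-implemented from the text; the depthwise
# kernel size (5) is an assumption.
import torch
import torch.nn as nn


class CBS(nn.Module):
    """Standard convolution + BatchNorm + SiLU."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class GSConv(nn.Module):
    def __init__(self, c1, c2, k=1, s=1):
        super().__init__()
        c_ = c2 // 2
        self.cbs = CBS(c1, c_, k, s)                      # standard branch -> C2/2
        self.dwcbs = nn.Sequential(                       # depthwise branch -> C2/2
            nn.Conv2d(c_, c_, 5, 1, 2, groups=c_, bias=False),
            nn.BatchNorm2d(c_),
            nn.SiLU(),
        )

    def forward(self, x):
        y1 = self.cbs(x)
        y2 = self.dwcbs(y1)
        y = torch.cat((y1, y2), dim=1)                    # concat -> C2 channels
        b, c, h, w = y.shape                              # channel shuffle via transpose
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)
```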
In one embodiment of the invention, more than 5000 images of sows in limit fences were first collected by camera, the collected images were enhanced, the sow lateral-lying posture was defined manually, and 5 key points of the sow were manually annotated (the junction of the hip arc and the hind leg, the junction of the chest arc and the front leg, the junction of the chest arc and the hind leg, the midpoint of the chest arc, and the junction of the front leg and the body) to construct a dataset, randomly split into training, validation and test sets at a ratio of 8:1:1. The dataset was then used to train the key point-based sow posture detection algorithm, and the trained algorithm was finally deployed on the AI edge equipment.
In S5, the time series data of sow body shake is extracted using the sow body shake detection algorithm based on the optical flow method, comprising the following steps:
S5-1: inputting the RGB video data of the sow during lateral lying into FlowNet2.0 frame by frame, with FlowNet2.0 outputting a two-dimensional optical flow field for each pair of adjacent frames;
S5-2: calculating the average value of each optical flow field to obtain a quantized value representing sow body shake, where the average value $\bar{F}$ of an optical flow field is calculated as:

$\bar{F} = \frac{1}{N} \sum_{i=1}^{N} \left\| \mathbf{v}_i \right\|$

wherein $N$ is the number of pixel points in the optical flow field, $\mathbf{v}_i$ is the displacement vector of pixel point $i$ in the optical flow field, and $i$ indexes the pixel points;
S5-3: adding the quantized values representing sow body shake obtained from all adjacent frames within 1 second to obtain the sow body shake value $S$ for that second:

$S = \sum_{k=1}^{n-1} \bar{F}_k$

wherein $n$ is the number of image frames in 1 second, $k$ indexes the frames within that second, and $\bar{F}_k$ is the $\bar{F}$ value obtained from frame $k$ and frame $k+1$;
S5-4: computing $S$ for each 1-second interval in time order and storing the values in a one-dimensional array to obtain the time series data of sow body shake (see the sketch below).
In one embodiment of the invention, the optical flow-based sow body shake detection algorithm was first trained using the FlyingChairs optical flow dataset (though not limited to it), and the trained algorithm was then deployed on the cloud server.
In S6, the time series data of the sow body temperature is extracted using the sow body temperature extraction algorithm based on instance segmentation and PCA, comprising the following steps:
S6-1: performing instance segmentation on a plurality of key parts of the sow by using the instance segmentation branch GS-yolov7-seg of the improved GS-yolov7 to obtain a mask of each key part;
S6-2: processing the mask of each key part to obtain the coordinates, on the original image, of all pixel points in the mask area;
S6-3: extracting the regional temperature of each key part from the temperature matrix according to the pixel point coordinates, and computing a regional temperature value for each key part to represent the current temperature of that part;
S6-4: ordering the temperatures of the key parts in time sequence to form an $m \times n$ temperature matrix $T$, where $m$ is the number of rows of the matrix (time points) and $n$ is the number of key parts;
S6-5: performing PCA on the temperature matrix $T$ to obtain the time series data of the sow body temperature.
In S6-5, PCA is performed on the temperature matrix $T$, comprising the following sub-steps:
S6-5-1: zero-centering the data of each column of the temperature matrix $T$ to obtain an $m \times n$ matrix $X$;
S6-5-2: computing the covariance matrix of $X$ to obtain the $n \times n$ covariance matrix $C = \frac{1}{m} X^{\mathsf{T}} X$;
S6-5-3: calculating the eigenvalues and eigenvectors of the covariance matrix $C$ by eigenvalue decomposition;
S6-5-4: normalizing the eigenvectors, and taking the eigenvector corresponding to the largest eigenvalue as the column vector $w$;
S6-5-5: multiplying the temperature matrix $T$ by the column vector $w$ to obtain an $m \times 1$ matrix $Y$; matrix $Y$ represents the final fused total temperature, and storing it in a one-dimensional array yields the time series data of the sow body temperature (see the sketch below).
In one embodiment of the invention, more than 5000 thermal infrared images of sows lying on their sides were first acquired, enhanced and manually annotated to construct a dataset, randomly split into training, validation and test sets at a ratio of 8:1:1. The dataset was then used to train the sow body temperature extraction algorithm, and the trained algorithm was finally deployed on the cloud server.
The data preprocessing in S7 includes the following sub-steps:
s7-1: performing data cleaning and data interpolation on the time series data of the body shake of the sow and the time series data of the body temperature of the sow;
s7-2: and carrying out data normalization on the data cleaning and interpolation results and the time sequence data of the environmental factors to finish data preprocessing.
S8, obtaining the sow delivery time prediction result, comprises the following sub-steps:
S8-1: inputting each preprocessed time series into the LSTM for prediction to obtain a static time series of the remaining time until the start of delivery, where the start of delivery is defined as the moment the first piglet drops to the ground;
in this embodiment, the time-series window length is chosen as the data dimension; the LSTM model's input dimension is defined as 3 (the three features: sow body shake, sow body temperature and environmental factors) and its output dimension as 1; the loss metric is the mean squared error (MSE), and the Adam algorithm is chosen for gradient descent;
S8-2: constructing a KF model and setting its initial parameters; dynamically adjusting the static remaining-time series with the time update equation and the measurement update equation to obtain a dynamically adjusted prediction series, which represents the remaining time until the sow delivery behavior occurs, yielding the sow delivery time prediction result.
In one embodiment of the invention, body shake, body temperature and environmental factor time series from at least 200 batches of different sows, covering five days before delivery to one day after delivery, were first obtained; the data were then preprocessed by cleaning, interpolation and normalization, and manually annotated so that each time point in the series corresponds to the true remaining time until sow delivery. The dataset was divided into training, validation and test sets at a ratio of 3:1:1 and used to train the sow delivery time prediction algorithm, which was finally deployed on the cloud server.
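The LSTM-KF combination in S8 can be sketched as follows: an LSTM regresses the remaining time from the three feature series (input dimension 3, output dimension 1, trained with MSE and Adam as above), and a scalar Kalman filter then dynamically adjusts the static predictions; the layer sizes and noise parameters below are illustrative assumptions.

```python
# Sketch of S8: an LSTM predicts the remaining time to delivery from the three
# feature series; a one-dimensional Kalman filter then dynamically adjusts the
# static predictions. Hidden size and noise parameters are illustrative.
import torch
import torch.nn as nn


class RemainingTimeLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # output dimension 1: remaining time

    def forward(self, x):                   # x: (batch, window_length, 3)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # prediction at the last time step


def kalman_adjust(static_preds, q=1e-3, r=1e-1):
    """Scalar Kalman filter over the LSTM's static prediction sequence;
    q and r are assumed process/measurement noise variances."""
    x, p = float(static_preds[0]), 1.0      # initial state and covariance
    adjusted = []
    for z in static_preds:                  # each LSTM output is a measurement
        p = p + q                           # time update
        k = p / (p + r)                     # Kalman gain
        x = x + k * (float(z) - x)          # measurement update
        p = (1.0 - k) * p
        adjusted.append(x)
    return adjusted
```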
According to the invention, the posture recognition algorithm based on key point detection distinguishes lateral lying from non-lateral lying and intercepts only video of the sow lying on its side, reducing the computation required by the sow delivery time prediction model. The optical flow method extracts the optical flow fields of adjacent frames of the lateral-lying sow video, and the average of all displacement vectors in a flow field accurately characterizes the degree of sow body shake. Temperature values at multiple body parts are detected, and principal component analysis reduces and fuses the multi-dimensional data, so the sow body temperature is extracted more accurately and comprehensively. Information of multiple dimensions, including sow body shake, sow body temperature and environmental factors (temperature, humidity, carbon dioxide concentration, ammonia concentration), is considered comprehensively, enabling more accurate prediction of the sow delivery time. Combining LSTM (Long Short-Term Memory) and KF (Kalman Filter) dynamically predicts the start of delivery and the remaining time, and the state variables are dynamically updated from real-time data, adapting well to sudden changes in the time series and achieving more accurate prediction.
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can make various modifications and combinations based on the teachings of this disclosure without departing from the spirit of the invention, and such modifications and combinations remain within the scope of the invention.

Claims (10)

1. The intelligent prediction system for sow delivery is characterized by comprising an AI edge equipment module, an environmental factor acquisition module, a network transmission module, a video image data storage module, a local server, a cloud server and a terminal real-time display module, wherein the network transmission module comprises a switch and a sensor gateway, the switch is respectively connected with the AI edge equipment module, the video image data storage module and the local server through network cables, the sensor gateway is connected with the local server through network cables, the environmental factor acquisition module is in communication connection with the sensor gateway, the local server is in communication connection with the cloud server, and the cloud server is in communication connection with the terminal real-time display module;
the AI edge equipment module is used for acquiring and processing video image data of sows in the late gestation period in the sow limit fence environment; the environmental factor acquisition module is used for acquiring environmental factor data of the pig house; the network transmission module is used for data transmission among the AI edge equipment module, the environmental factor acquisition module, the video image data storage module and the local server; the video image data storage module is used for long-term storage of the video image data; the local server is used for overall management of the transmitted data and for data processing interaction; the cloud server is used for deploying the sow delivery time prediction model; and the terminal real-time display module is used for displaying the predicted sow delivery time and monitoring the physiological health condition of the sow in real time.
2. The intelligent prediction system for sow labor according to claim 1, wherein the AI edge equipment module comprises a plurality of AI edge equipment, the AI edge equipment is installed on a ceiling right above each sow limit fence, the AI edge equipment comprises a camera and an AI computing core unit, the camera is located below the AI computing core unit and is movably connected with the AI computing core unit, the camera comprises an RGB camera module, a thermal infrared imaging module and a depth camera module, and the RGB camera module, the thermal infrared imaging module and the depth camera module are located in the same plane;
the AI computing core unit adopts a Jetson Xavier NX submodule, the Jetson Xavier NX submodule is integrated in an industrial control machine shell and is provided with a power supply unit and a heat dissipation unit, and the RGB camera module, the thermal infrared imaging module and the depth camera module are all connected with the Jetson Xavier NX submodule through MIPI CSI interfaces.
3. The intelligent prediction system for sow delivery according to claim 1, wherein the environmental factor acquisition module comprises environmental sensors, each comprising a temperature and humidity sensor, a carbon dioxide concentration sensor and an ammonia concentration sensor, with one environmental sensor arranged for every three sow limit fences.
4. The intelligent prediction method for sow delivery is characterized by comprising the following steps:
s1: the method comprises the steps of continuously acquiring RGB video data, thermal infrared data and environmental factor data of sows in the late gestation period under a sow limit fence by using an AI edge equipment module and an environmental factor acquisition module;
s2: video image preprocessing is carried out on the collected RGB video data, the preprocessed RGB video data is input into a sow gesture detection algorithm which is deployed in an AI computing core unit and is based on key point detection, and the sow gesture is detected;
s3: when the posture of the sow is detected to be lying on the side, recording a detection result of the frame, extracting RGB video data of the sow when the sow is lying on the side according to the detection result, and extracting thermal infrared data of the sow at a corresponding time point when the sow is lying on the side from the acquired thermal infrared data;
s4: performing video framing and image enhancement preprocessing on RGB video data of the sow in lateral lying and thermal infrared data of the sow in lateral lying;
s5: inputting the preprocessed RGB video data of the sow during lateral lying into a sow body shaking detection algorithm based on an optical flow method in a cloud server, and extracting time sequence data of sow body shaking;
s6: the pretreated thermal infrared data of the sows in lateral lying is input into a sow body temperature extraction algorithm based on example segmentation and PCA in a cloud server, and time series data of the sow body temperature are extracted;
S7: performing dimension reduction processing on the collected environmental factor data through characteristic engineering to obtain time sequence data of the environmental factor, and performing data preprocessing on the time sequence data of the environmental factor, the time sequence data of sow body shake and the time sequence data of sow body temperature;
s8: and inputting each preprocessed time series data into a delivery time prediction algorithm based on LSTM-KF in the cloud server to obtain a sow delivery time prediction result, and completing intelligent prediction of sow delivery.
5. The intelligent prediction method for sow delivery according to claim 4, wherein the sow posture detection algorithm based on key point detection in S2 adopts an improved GS-yolov7, in which the CBS modules in the neck network of the original yolov7 are replaced with GSConv, and the pose estimation branch GS-yolov7-pose of GS-yolov7 is used to detect the sow posture, comprising the following steps:
S2-1: processing each frame of the input video with the GS-yolov7-pose backbone network, extracting the low-level, mid-level and high-level features of the image, and feeding them into the neck network;
S2-2: performing multi-scale feature fusion on the extracted low-level, mid-level and high-level image features with the neck network to obtain feature maps carrying multi-scale feature information;
S2-3: processing the feature maps carrying multi-scale feature information with the detection head, obtaining three anchor boxes of different shapes for each cell;
S2-4: performing regression based on the anchor boxes to predict the bounding box coordinates, bounding box sizes and key point coordinates; calculating the bounding box loss, key point loss and total loss; using the total loss as the supervision signal of the network to update and optimize the network weights; and obtaining the final bounding boxes and key points, completing sow posture detection.
6. The intelligent prediction method for sow delivery according to claim 4, wherein extracting the time series data of sow body shake in S5 by the sow body shake detection algorithm based on the optical flow method comprises the following steps:
S5-1: inputting the RGB video data of the sow during lateral lying into FlowNet2.0 frame by frame, with FlowNet2.0 outputting a two-dimensional optical flow field for each pair of adjacent frames;
S5-2: calculating the average value of each optical flow field to obtain a quantized value representing sow body shake, where the average value $\bar{F}$ of an optical flow field is calculated as:

$\bar{F} = \frac{1}{N} \sum_{i=1}^{N} \left\| \mathbf{v}_i \right\|$

wherein $N$ is the number of pixel points in the optical flow field, $\mathbf{v}_i$ is the displacement vector of pixel point $i$ in the optical flow field, and $i$ indexes the pixel points;
S5-3: adding the quantized values representing sow body shake obtained from all adjacent frames within 1 second to obtain the sow body shake value $S$ for that second:

$S = \sum_{k=1}^{n-1} \bar{F}_k$

wherein $n$ is the number of image frames in 1 second, $k$ indexes the frames within that second, and $\bar{F}_k$ is the $\bar{F}$ value obtained from frame $k$ and frame $k+1$;
S5-4: computing $S$ for each 1-second interval in time order and storing the values in a one-dimensional array to obtain the time series data of sow body shake.
7. The intelligent prediction method for sow delivery according to claim 5, wherein extracting the time series data of sow body temperature in S6 with the body temperature extraction algorithm based on instance segmentation and PCA comprises the following sub-steps:
S6-1: performing instance segmentation on several key parts of the sow with the instance segmentation branch GS-yolov7-seg of the improved GS-yolov7 to obtain a mask for each key part;
S6-2: processing the mask of each key part to obtain the coordinates, on the original image, of all pixels in the mask region;
S6-3: extracting the temperatures of each key-part region from the temperature matrix according to those pixel coordinates, and computing a regional temperature value to represent the temperature of that key part (see the sketch after this claim);
S6-4: ordering the temperatures of the key parts chronologically to form an $m \times n$ temperature matrix $T$, where $m$ is the number of rows of the matrix;
S6-5: performing PCA on the temperature matrix $T$ to obtain the time series data of sow body temperature.
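A small sketch of S6-2 and S6-3, under two assumptions the claim does not state: that the thermal temperature matrix is registered pixel-for-pixel with the RGB image, and that the regional temperature value is the mean over the mask.

```python
import numpy as np

def key_part_temperature(mask, temperature_matrix):
    """S6-2/S6-3: regional temperature of one key part (assumed to be the mask mean).

    mask: (H, W) boolean array from the GS-yolov7-seg branch.
    temperature_matrix: (H, W) per-pixel temperatures from the thermal camera,
    assumed registered with the RGB image.
    """
    ys, xs = np.nonzero(mask)                  # S6-2: pixel coordinates of the mask region
    return temperature_matrix[ys, xs].mean()   # S6-3: regional temperature value
```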
8. The intelligent prediction method for sow delivery according to claim 7, wherein performing PCA on the temperature matrix $T$ in S6-5 comprises the following sub-steps (sketched in code after this claim):
S6-5-1: zero-centering the data of each column of the temperature matrix $T$ to obtain an $m \times n$ matrix $X$;
S6-5-2: computing the covariance of the matrix $X$ to obtain an $n \times n$ covariance matrix $C$;
S6-5-3: calculating the eigenvalues and eigenvectors of the covariance matrix $C$ by eigenvalue decomposition;
S6-5-4: normalizing the eigenvectors and taking the eigenvector corresponding to the largest eigenvalue as the column vector $w$;
S6-5-5: multiplying the temperature matrix $T$ by the column vector $w$ to obtain an $m \times 1$ matrix $Y$; each element of $Y$ represents the final total temperature at that time, and storing them in a one-dimensional array yields the time series data of sow body temperature.
9. The intelligent prediction method for sow delivery according to claim 4, wherein the data preprocessing in S7 comprises the following sub-steps:
S7-1: performing data cleaning and data interpolation on the time series data of sow body shake and the time series data of sow body temperature;
S7-2: performing data normalization on the cleaned and interpolated results together with the time series data of the environmental factors, completing the data preprocessing (a sketch follows this claim).
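The claim names the operations but not their parameters. The sketch below fills them with common defaults (3-sigma outlier cleaning, linear interpolation, min-max normalization), all of which are assumptions rather than values taken from the patent.

```python
import numpy as np
import pandas as pd

def preprocess_series(values):
    """S7-1/S7-2 on one time series: clean, interpolate, normalize to [0, 1]."""
    s = pd.Series(values, dtype="float64")
    z = (s - s.mean()) / s.std()
    s[z.abs() > 3] = np.nan                     # S7-1 cleaning: mark gross outliers missing
    s = s.interpolate(limit_direction="both")   # S7-1 interpolation: fill the gaps linearly
    return ((s - s.min()) / (s.max() - s.min())).to_numpy()  # S7-2 normalization
```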
10. The intelligent prediction method for sow delivery according to claim 4, wherein obtaining the sow delivery time prediction result in S8 comprises the following sub-steps:
S8-1: feeding each preprocessed time series into the LSTM for prediction to obtain a static time series of the remaining time until the start of delivery, the start of delivery being defined as the moment the first piglet reaches the ground;
S8-2: constructing a Kalman filter (KF) model, setting its initial parameters, and dynamically adjusting the static remaining-time series with the time-update and measurement-update equations to obtain a dynamically adjusted prediction series; this series represents the remaining time until the sow delivery behavior occurs and is the sow delivery time prediction result (see the sketch after this claim).
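A one-dimensional Kalman filter is enough to illustrate the dynamic adjustment in S8-2. The state-transition model below (remaining time decreases by one step per step) and the noise variances are assumptions; the claim specifies neither.

```python
import numpy as np

def kalman_adjust(lstm_remaining, q=1e-3, r=1e-1):
    """S8-2: dynamically adjust the LSTM's static remaining-time series.

    lstm_remaining: (T,) array of predicted remaining times until delivery starts.
    q, r: assumed process / measurement noise variances (the KF initial parameters).
    """
    x, p = float(lstm_remaining[0]), 1.0   # initial state and covariance
    adjusted = np.empty(len(lstm_remaining))
    for t, z in enumerate(lstm_remaining):
        x_prior = x - 1.0                  # time-update: one step closer to delivery
        p_prior = p + q
        k = p_prior / (p_prior + r)        # measurement-update: Kalman gain
        x = x_prior + k * (z - x_prior)    # fuse the LSTM output as the observation
        p = (1.0 - k) * p_prior
        adjusted[t] = x
    return adjusted
```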
CN202311465835.XA 2023-11-07 2023-11-07 Intelligent prediction system and method for sow delivery Active CN117197902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311465835.XA CN117197902B (en) 2023-11-07 2023-11-07 Intelligent prediction system and method for sow delivery

Publications (2)

Publication Number Publication Date
CN117197902A (en) 2023-12-08
CN117197902B (en) 2024-01-30

Family

ID=88998365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311465835.XA Active CN117197902B (en) 2023-11-07 2023-11-07 Intelligent prediction system and method for sow delivery

Country Status (1)

Country Link
CN (1) CN117197902B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102139048B1 (en) * 2020-01-22 2020-07-30 주식회사 피플멀티 Health care monitoring system of pregnancy animal using smart health care apparatus
CN113971066A (en) * 2020-07-22 2022-01-25 中国科学院深圳先进技术研究院 Kubernetes cluster resource dynamic adjustment method and electronic equipment
CN112131927A (en) * 2020-08-03 2020-12-25 南京农业大学 Sow delivery time prediction system based on posture transformation characteristics in later gestation period
KR20220062911A (en) * 2020-11-09 2022-05-17 주식회사 일루베이션 Sow management and Environmental control System
CN114677624A (en) * 2022-03-18 2022-06-28 南京农业大学 Sow parturition intelligent monitoring system based on cloud edge synergy
CN114674444A (en) * 2022-03-28 2022-06-28 华南农业大学 Live pig key part temperature inspection device and method based on thermal image
CN115861833A (en) * 2022-11-16 2023-03-28 西北大学 Real-time remote sensing image cloud detection method based on double-branch structure
CN116543462A (en) * 2023-05-08 2023-08-04 内蒙古大学 Method for identifying and judging dairy cow health condition based on dairy cow behaviors of video bones
CN116935439A (en) * 2023-07-18 2023-10-24 河北农业大学 Automatic monitoring and early warning method and automatic monitoring and early warning system for delivery of pregnant sheep

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ALI SEYDI KECELI et al.: "Development of a recurrent neural networks-based calving prediction model using activity and behavioral data", Computers and Electronics in Agriculture, vol. 170, no. 105285, pages 1-9
YIGUI HUANG et al.: "An Improved Pig Counting Algorithm Based on YOLOv5 and DeepSORT Model", Sensors, vol. 23, no. 6309, pages 1-18
XIAO Deqin et al.: "Automatic extraction algorithm for pig ear temperature based on infrared thermal imaging", Transactions of the Chinese Society for Agricultural Machinery, vol. 52, no. 8, pages 255-262
ZHONG Yan: "Farrowing prediction and management of sows", Feed Review, no. 6, page 83

Similar Documents

Publication Publication Date Title
CN105531995B (en) System and method for using multiple video cameras to carry out object and event recognition
JP7038744B2 (en) Face image retrieval methods and systems, imaging devices, and computer storage media
WO2019100608A1 (en) Video capturing device, face recognition method, system, and computer-readable storage medium
CN111242025A (en) Action real-time monitoring method based on YOLO
CN113392765B (en) Tumble detection method and system based on machine vision
CN112200157A (en) Human body 3D posture recognition method and system for reducing image background interference
CN104251737A (en) Infrared thermometer data analysis processing platform and method
CN107133611A (en) A kind of classroom student nod rate identification with statistical method and device
CN115035088A (en) Helmet wearing detection method based on yolov5 and posture estimation
CN112766040A (en) Method, device and apparatus for detecting residual bait and readable storage medium
Zhang et al. Unsupervised depth estimation from monocular videos with hybrid geometric-refined loss and contextual attention
WO2021134311A1 (en) Method and apparatus for switching object to be photographed, and image processing method and apparatus
WO2023041904A1 (en) Systems and methods for the automated monitoring of animal physiological conditions and for the prediction of animal phenotypes and health outcomes
CN117197902B (en) Intelligent prediction system and method for sow delivery
CN111222459A (en) Visual angle-independent video three-dimensional human body posture identification method
JP7211428B2 (en) Information processing device, control method, and program
US11605220B2 (en) Systems and methods for video surveillance
Zhang et al. EventMD: High-speed moving object detection based on event-based video frames
CN110210530A (en) Intelligent control method, device, equipment, system and storage medium based on machine vision
CN114202819A (en) Robot-based substation inspection method and system and computer
CN108184062A (en) High speed tracing system and method based on multi-level heterogeneous parallel processing
Zhou et al. A low-resolution image restoration classifier network to identify stored-grain insects from images of sticky boards
Zhang et al. Fully automatic system for fish biomass estimation based on deep neural network
CN105847711A (en) High integration infrared imaging system based on high performance FPGA+DDR3 chips
CN108965885B (en) Video online reconstruction and moving target detection method based on frame compression measurement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant