CN117788847A - Method, system and related products for detecting respiratory rate of pigs - Google Patents

Method, system and related products for detecting respiratory rate of pigs

Info

Publication number
CN117788847A
CN117788847A (application number CN202311813227.3A)
Authority
CN
China
Prior art keywords
pig
pigs
respiratory rate
image
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311813227.3A
Other languages
Chinese (zh)
Inventor
张玉良
辛超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Muyuan Foods Co Ltd
Original Assignee
Muyuan Foods Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Muyuan Foods Co Ltd
Priority to CN202311813227.3A
Publication of CN117788847A
Legal status: Pending

Abstract

The invention discloses a method for detecting the respiratory rate of pigs, comprising: obtaining original video data of pigs; screening out video data to be calculated that meets preset requirements from the original video data; calculating the respiratory rate of the pigs from the video data to be calculated using gray-scale images to obtain calculation result data; and performing dimensional analysis on the respiratory rate of the pigs according to the calculation result data to obtain detection result data. The invention realizes automated, vision-based detection of the respiratory rate of pigs; after the respiratory rate is calculated and statistically analyzed, it can indicate whether the temperature in the pig house is comfortable for the herd. The detection results can be provided to breeding personnel as a basis for adjusting the environmental parameters in the pig house, thereby improving the comfort of the pig herd, raising the breeding efficiency of live pigs, and facilitating rapid expansion of the production scale.

Description

Method, system and related products for detecting respiratory rate of pigs
Technical Field
The present invention relates generally to the field of respiratory rate detection. More particularly, the present invention relates to a method, system and related products for detecting respiratory rate in pigs.
Background
In pig house management, respiratory rate is a critical vital-sign parameter that is closely related to the health and comfort of the herd, so detecting and monitoring the respiratory rate of pig herds is particularly important. Traditionally, detecting the respiratory rate of a herd has relied mainly on random sampling and manual observation, which not only consumes a great deal of manpower and material resources but also makes it difficult to guarantee the accuracy and timeliness of the results.
With the rapid development of automation, digitization and intelligent technologies, pig-raising technicians are actively exploring techniques for automatically detecting the respiratory rate of pig herds. Current respiratory rate detection techniques fall largely into two categories: contact and non-contact. Contact methods mainly use wearable devices to measure respiration-induced changes in parameters such as pressure, displacement and gas content, thereby monitoring the respiratory rate. Non-contact methods mainly monitor the respiratory rate of the herd through radar detection or machine vision.
However, the above techniques each have drawbacks. For example, contact methods have limited applicability and poor flexibility. Although radar detection can effectively measure the respiratory rate of pigs in a sparse scene, it is difficult to distinguish individual targets in a densely stocked pen, so the respiratory rate cannot be monitored accurately. Moreover, existing machine vision approaches require a depth camera to identify and track the breathing motion of pigs, as well as expensive instruments to guarantee data quality and processing speed, which increases the difficulty and cost of detection.
In view of the foregoing, it is desirable to provide a method for detecting respiratory rate in pigs so that respiratory rate in pigs can be effectively detected in a dense herd of pigs.
Disclosure of Invention
In order to solve at least one or more of the technical problems mentioned above, the present invention proposes, in various aspects, a solution for detecting respiratory rate in pigs.
In a first aspect, the invention provides a method for detecting respiratory rate in pigs comprising: obtaining original video data of pigs; screening out video data to be calculated, which meets preset requirements, according to the original video data; according to the video data to be calculated, calculating the respiratory rate of pigs by using a gray level image so as to obtain calculation result data; and carrying out dimension analysis on the respiratory rate of the pig according to the calculation result data so as to obtain detection result data.
In some embodiments, calculating the porcine respiratory rate using the gray scale image based on the video data to be calculated to obtain calculation result data comprises: converting the video data to be calculated into a gray image; according to the gray level image, utilizing the pig numbers to divide the pig images in the video so as to generate a pig mask; extracting pixel intensity values according to the regional image of the pig mask; the pixel intensity values are used as time series data for signal processing; converting the time-series data into a frequency domain; and calculating the respiratory frequency according to the peak value in the frequency domain to obtain calculation result data.
In some embodiments, segmenting the pig images in the video according to the gray scale images using pig numbering to generate pig masks includes performing pig recognition according to changes in the pig gray scale images to obtain gray scale images to be segmented; dividing the gray image to be divided by using the pig numbers to generate a pig mask; and rejecting pigs above an activity threshold according to the pig mask.
In some embodiments, calculating the respiratory rate of the pig using the gray scale image based on the video data to be calculated to obtain calculation result data further comprises: and performing non-maximum suppression on the calculation result data, and screening the calculation result data according to a preset threshold value of the confidence coefficient.
In a second aspect, the invention provides a system for detecting respiratory rate in pigs comprising: a perception module configured to acquire original video data for pig detection; a preprocessing module configured to screen out video data to be calculated that meets preset requirements from the original video data; a calculation module configured to calculate the respiratory rate of the pigs from the video data to be calculated using gray-scale images to obtain calculation result data; and an analysis module configured to perform dimensional analysis on the respiratory rate of the pigs according to the calculation result data to obtain detection result data.
In some embodiments, the computing module comprises: the conversion unit is used for converting the video data to be calculated into a gray image; the segmentation unit is used for segmenting the pig images in the video by using the pig numbers according to the gray level images so as to generate pig masks; an extracting unit for extracting pixel intensity values according to the regional image of the pig mask; a processing unit for taking the pixel intensity values as time series data for signal processing; a conversion unit configured to convert time-series data into frequency domain data; and a calculation unit for calculating a respiratory rate from the peak value in the frequency domain to obtain the calculation result data.
In some embodiments, the partitioning unit is configured to: identify pigs according to changes in the pig gray-scale images to obtain gray-scale images to be segmented; segment the gray-scale images to be segmented using the pig numbers to generate pig masks; and remove pigs above the liveness threshold according to the pig masks.
In some embodiments, the system further comprises: and the optimizing module is configured to compare the detection result data with the manual inspection video data so as to obtain false alarm data.
In a third aspect, the present invention provides an apparatus for detecting respiratory rate of pigs, comprising a memory for storing a computer program and a processor for implementing a method according to any of the embodiments of the first aspect when the computer program is executed.
In a fourth aspect, the present invention provides a computer readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements a method according to any of the embodiments of the first aspect.
According to the scheme for detecting the respiratory rate of the pigs, the respiratory rate of the pigs is calculated by utilizing the gray level image through the video data to be calculated which are screened out from the original video data, the respiratory rate of the pigs is subjected to multidimensional analysis, and finally the result data for detecting the respiratory rate of the pigs are obtained. According to the scheme, through combining a deep learning image processing algorithm, a signal processing algorithm and a hardware system, automatic detection of the respiratory rate of pigs is achieved through vision, the respiratory rate of the pigs is calculated, and after statistical analysis of a calculation result, whether the temperature of a pig farm in a pig house is comfortable or not can be represented. The detection result can be provided for the breeding personnel, and is used as a basis for adjusting the environmental parameters in the pig house, so that the comfort level of the pig group is improved, the breeding efficiency of live pigs is improved, and the rapid expansion of the production scale is facilitated.
Further, in some embodiments, the video data to be calculated may be converted into gray-scale images, the gray-scale images may be segmented according to the pig numbers to generate pig masks, and the pixel values in each mask region may then be extracted. The pixel intensity values are treated as a time series and converted into the frequency domain, and the respiratory rate is calculated from the peaks in the frequency domain. In this way, the respiratory rate can be automatically extracted and calculated from the breathing-related characteristics of the pigs in the video images, improving the automation of detection.
Still further, in some embodiments, the calculation results are filtered according to the confidence, and the multi-dimensional analysis detection result of the cultivation environment is obtained through analysis of the calculation results. And comparing the detection result of the multidimensional analysis with the manual inspection video, optimizing the detection method, reducing false alarm data and improving the detection accuracy.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, embodiments of the invention are illustrated by way of example and not by way of limitation, and like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 shows a flow chart of a method for detecting respiratory rate in pigs in accordance with an embodiment of the invention;
FIG. 2 is a flowchart of a method for calculating respiratory rate using video data according to some embodiments of the invention;
FIG. 3 is a schematic diagram of a system for detecting respiratory rate in pigs according to some embodiments of the invention;
FIG. 3a is a schematic diagram of an improved system for detecting respiratory rate in pigs according to some embodiments of the invention;
FIG. 4 is a schematic diagram of a computing module according to some embodiments of the invention;
FIG. 4a is a schematic diagram illustrating an improved architecture of a computing module according to some embodiments of the invention; and
FIG. 5 shows an exemplary block diagram of an apparatus for detecting porcine respiratory rate according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that when the terms "first," "second," "third," and "fourth," etc. are used in the claims, the specification and the drawings of the present invention, they are used merely to distinguish between different objects and not to describe a particular sequence. The terms "comprises" and "comprising" when used in the specification and claims of the present invention are taken to specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification and claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the present specification and claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the claims, the term "if" may be interpreted as "when..once" or "in response to a determination" or "in response to detection" depending on the context. Similarly, the phrase "if a determination" or "if a [ described condition or event ] is detected" may be interpreted in the context of meaning "upon determination" or "in response to determination" or "upon detection of a [ described condition or event ]" or "in response to detection of a [ described condition or event ]".
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In order that those skilled in the art can better understand the present invention, the following description will generally refer to some related terms and terminology.
The respiratory rate of the invention refers to the breathing rate of a pig: one inhalation plus one exhalation counts as one breath, and the number of breaths per minute is the respiratory rate, expressed in breaths per minute. The respiratory patterns recognized by the invention include, but are not limited to, chest breathing, abdominal breathing, breathing while lying on the side, breathing while lying prone, and the like.
Comfort refers to the state in which a pig lies on its side with limbs stretched out and the respiratory rate is about 20-50 breaths/min; the exact range depends on the ambient temperature and humidity and on the weight of the pig.
Mild heat stress (heat bias) refers to a respiratory rate of 50-80 breaths/min when there is no contact between pigs in the herd; the exact range likewise depends on the ambient temperature and humidity and on the weight of the pig.
Heat stress refers to a respiratory rate of more than 80 breaths/min for pigs in the herd under conditions such as no contact between pigs, lying down and shortness of breath; the exact range likewise depends on the ambient temperature and humidity and on the weight of the pig.
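For illustration only, the following Python sketch maps a measured respiratory rate to the three states defined above. The cut-off values mirror the ranges given here; in practice they would be tuned for ambient temperature, humidity and pig weight, and the function name is an assumption rather than part of the original disclosure.

```python
def classify_thermal_state(breaths_per_minute: float) -> str:
    """Map a resting respiratory rate (breaths/min) to a thermal comfort state.

    Thresholds follow the ranges given above (comfort ~20-50, mild heat 50-80,
    heat stress >80); in practice they should be adjusted for ambient
    temperature, humidity and pig weight.
    """
    if breaths_per_minute <= 50:
        return "comfort"
    elif breaths_per_minute <= 80:
        return "mild heat"
    else:
        return "heat stress"


print(classify_thermal_state(42))   # comfort
print(classify_thermal_state(95))   # heat stress
```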
Fig. 1 shows a flowchart of a method 100 for detecting respiratory rate in pigs in accordance with an embodiment of the invention.
As shown in fig. 1, the method 100 for detecting the respiratory rate of pigs comprises: in step S101, obtaining original video data of pigs; in step S102, screening out video data to be calculated that meets preset requirements from the original video data; in step S103, calculating the respiratory rate of the pigs from the video data to be calculated using gray-scale images to obtain calculation result data; and in step S104, performing dimensional analysis on the respiratory rate of the pigs according to the calculation result data to obtain detection result data.
Specifically, in step S101, raw video data of pigs is acquired. In an embodiment scenario, the original pig video can be obtained by manual or automatic shooting, and certain shooting rules need to be followed. The basic requirement is that the footage restores the real situation of the scene as faithfully as possible; functions such as beautification, denoising and watermarking should be disabled, and color cast or large-scale blurring should be avoided, since these interfere with the recognition results. For example, when manual shooting is employed, acquiring raw video data of pigs includes collecting the original pig video data with a visible-light video device. When an automatic device is used, a movable camera can shoot by setting shooting position points and shooting time points, while a fixed camera can shoot a pig pen at set time points. In one embodiment, the video may be captured at a variety of angles, including overhead, side and other angles, and a variety of shooting devices may be used, including video cameras, mobile phones and still cameras. Shooting video from multiple angles gives the scheme better robustness and generality.
Further, in step S102, the video data to be calculated, which meets the preset requirements, is screened out according to the original video data. In one embodiment, the video data to be calculated, which meets preset requirements, is screened out according to the original video data, wherein the preset requirements include: image sharpness, image chromatic aberration, shooting time, and/or shooting position.
Further, in step S103, the respiratory rate of the pigs is calculated from the video data to be calculated using gray-scale images to obtain calculation result data. A video analysis device, carrying common machine vision algorithms, deep-learning vision algorithms, signal processing algorithms and the like, screens and identifies the video data to be calculated, selects the videos that meet the requirements, and feeds them into a preset detection model to calculate the respiratory rate of the pigs. In one embodiment, a video meets the requirements when, for example, its images satisfy a size standardization requirement such as 1280x720 pixels. The video frames can be resized and converted to gray-scale images using, for example, the cross-platform computer vision and machine learning library OpenCV, preserving the necessary image information while reducing the computational complexity of subsequent processing.
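As a non-limiting illustration, the following Python sketch shows how such preprocessing could be done with OpenCV; the function name, the fixed 1280x720 size and the frame-by-frame reading loop are assumptions made for the example.

```python
import cv2


def load_gray_frames(video_path: str, size=(1280, 720)):
    """Read a video, resize each frame and convert it to grayscale."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, size)                 # standardize frame size
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # drop color information
        frames.append(gray)
    cap.release()
    return frames
```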
In one embodiment, the detection model described above may utilize a modified YOLO model (You Only Look Once). Those skilled in the art will appreciate that the YOLO model, as an algorithm for target detection using convolutional neural networks, can be trained with labeled pig image datasets to achieve detection of target objects.
Further, after the video is converted into gray-scale images, the pig images are segmented from the gray-scale images according to the pig numbers, so that a pig mask can be formed for each pig. In one embodiment, pixel intensity values may be extracted from each mask region as time-series data and converted to the frequency domain using a Fast Fourier Transform (FFT). Specific frequency components are then identified from the frequency-domain data to calculate the respiratory rate. For example, based on the FFT analysis results, frequency peaks associated with the periodic chest and abdomen movements of the pigs are identified in the frequency domain, and the respiratory rate is calculated from these peaks and the video frame rate.
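Purely as an illustration of this chain of steps, the sketch below averages the pixel intensity inside one pig mask per frame, applies an FFT with NumPy and reads off the strongest peak in an assumed breathing band; the band limits, the frame rate argument and the function name are not taken from the original disclosure.

```python
import numpy as np


def respiratory_rate_from_mask(gray_frames, mask, fps=25.0, band=(0.2, 2.0)):
    """Estimate breaths/min from the mean mask intensity over time.

    gray_frames: list of 2-D grayscale frames; mask: boolean array of the
    same shape selecting one pig. The search band (Hz) is an assumption and
    should match the expected respiratory range.
    """
    # 1. Mean intensity inside the mask for every frame -> time series
    signal = np.array([frame[mask].mean() for frame in gray_frames])
    signal = signal - signal.mean()            # remove the DC component

    # 2. FFT and the corresponding frequency axis
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    # 3. Strongest peak inside the expected breathing band
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not in_band.any():
        return 0.0                             # no component in the expected band
    peak_freq = freqs[in_band][np.argmax(spectrum[in_band])]
    return peak_freq * 60.0                    # Hz -> breaths per minute
```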
Further, in one embodiment, a non-maximum suppression (NMS) algorithm and a confidence threshold may also be used to filter the respiratory rate results, so as to improve the accuracy of the detection results.
Further, in step S104, the method 100 performs dimensional analysis on the respiratory rate of the pigs according to the calculation result data to obtain detection result data. Using the calculation result data obtained in step S103, in one embodiment the data may be read and analyzed by various BI systems and software, for example commercially available BI analysis tools such as Tableau, Power BI or Pyecharts, so as to obtain specific values, proportions and trends of the respiratory rate for each farm area and each unit. The analysis results can be resolved down to each pen and displayed visually in charts, providing data and recommendations for decision makers and improvement suggestions for the production site. Farms with severe heat discomfort are warned promptly, helping decision makers detect and handle anomalies in time and reducing the risk of the anomalies becoming more serious.
In one embodiment, the dimensional analysis includes one or more of the following: a farm dimension, a herd-unit dimension, a numerical dimension, a proportion dimension and a variation-trend dimension; the detection result data include: the proportion of pigs above a preset respiratory rate, the breeding environment condition and/or the trend of environmental change.
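By way of illustration of such a dimensional analysis, the following Python sketch aggregates hypothetical per-pig results by farm and unit with pandas; the column names, the sample values and the 80 breaths/min cut-off (taken from the heat-stress definition above) are assumptions.

```python
import pandas as pd

# Hypothetical per-pig calculation results (farm / unit / pen are assumed fields)
results = pd.DataFrame({
    "farm":            ["F1", "F1", "F1", "F2"],
    "unit":            ["U1", "U1", "U2", "U1"],
    "pen":             [3, 3, 7, 1],
    "breaths_per_min": [38.0, 92.0, 55.0, 85.0],
})

THRESHOLD = 80  # breaths/min above which a pig is counted as heat stressed

summary = (
    results.assign(above=results["breaths_per_min"] > THRESHOLD)
           .groupby(["farm", "unit"])["above"]
           .mean()                       # proportion of pigs above the threshold
           .rename("heat_stress_ratio")
           .reset_index()
)
print(summary)
```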
By automatically calculating and analyzing the respiratory rate of pigs and forming corresponding processing suggestions according to analysis results, the response period can be shortened, the influence of insufficient experience of breeders on the breeding results can be reduced, and therefore the survival rate of pig groups and the breeding efficiency of live pigs can be improved, and the rapid expansion of the production scale is facilitated.
In one embodiment, according to the calculation result data, the dimension analysis is performed on the respiratory rate of the pig, so as to obtain detection result data, and then the method further comprises the following steps: and comparing the detection result data with the manual inspection video to obtain false alarm data.
After the analysis results are obtained, the respiratory rate result data are reviewed; if scenes with incorrect calculations are found and/or the detected respiratory rate does not match the actual situation, optimization suggestions are fed back. User feedback can be collected and analyzed in the background, and the videos of incorrectly calculated scenes can then be used to further optimize the model processing examples. Newly encountered situations can also serve as improvement directions for the subsequent detection method.
The following table is an example of respiratory rate detection results using the inventive approach:
fig. 2 is a flowchart of a method 200 for calculating respiratory rate using video data according to some embodiments of the invention.
As shown in fig. 2, in the method 200, calculating the respiratory rate of the pig using the gray scale image based on the video data to be calculated to obtain the calculation result data includes: in step S201, video data to be calculated is converted into a grayscale image. In step S202, the pig image in the video is segmented according to the gray image using the pig number to generate a pig mask. In step S203, pixel intensity values are extracted from the region image of the pig mask. In step S204, the pixel intensity value is used as time-series data for signal processing. In step S205, the time-series data is converted into a frequency domain. In step S206, the respiratory rate is calculated from the peak in the frequency domain to obtain calculation result data. Since the method 200 may be understood as one specific implementation of step S103 in the method 100, the features described in connection with fig. 1 in the method 100 may be similarly applied thereto.
In step S201, video data to be calculated is converted into a grayscale image. Before computing the video data to be computed, preprocessing of the video data to be computed is required, including resizing of the video frames, which may be for example 1280x720 pixels in size.
In step S202, the pig images in the video are segmented from the gray-scale images using the pig numbers to generate pig masks. In a farm, each pig is assigned a number. For each numbered pig, a mask is generated that characterizes the coordinates of the pig's outline through changes in the gray-scale image. The following detailed description of how the mask is generated is provided so that those skilled in the art can better understand the present solution.
In one embodiment, dividing the pig images in the video according to the gray level images by using pig numbers to generate pig masks comprises recognizing pigs according to the change of the pig gray level images to obtain gray level images to be divided; dividing the gray image to be divided by using the pig numbers to generate a pig mask; and rejecting pigs above the liveness threshold according to the pig mask.
First, the pigs in the farm are numbered. Second, gray-level changes are recognized from the gray-scale images, so that image segmentation based on the gray-level changes of the pigs is achieved.
In one embodiment, segmenting the gray image to be segmented using the pig number to generate a pig mask comprises: and selecting a first frame image and a last frame image of the gray level image to be segmented. And respectively generating a pig outline coordinate point set according to the pig numbers by using the first frame image and the last frame image. Generating a first frame mask and a last frame mask according to the pig outline coordinate point set; wherein the mask comprises: pig number and pig outline coordinate point set.
The present invention is described by taking a 10-second video at 25 frames per second (250 frames in total) as an example. The first frame image and the last frame (250th frame) image are selected for instance segmentation. The segmented pig outline is a closed, irregular shape, represented in the data as a point set formed by a number of (x, y) coordinates, each coordinate corresponding to a pixel on the image. A mask characterizing the coordinates of the pig outline is generated from this contour point set; the mask information here carries the pig number together with the pig outline. By performing instance segmentation on the first frame image, the pixels that are strongly correlated and usable for calculation can be determined, which greatly reduces the amount of computation. In the same way, instance segmentation is performed on the last frame image to generate a second mask characterizing the coordinates of the pig outline.
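As an illustrative sketch of turning such a contour point set into a mask, the Python code below fills the closed outline with OpenCV; the function name, the dictionary return value and the use of cv2.fillPoly are assumptions, and the instance-segmentation model that produces the contour is not shown.

```python
import cv2
import numpy as np


def contour_to_mask(contour_xy, image_shape, pig_id):
    """Build a binary mask from a pig's contour point set.

    contour_xy: list of (x, y) pixel coordinates describing a closed outline;
    image_shape: (height, width) of the gray frame. Returns the pig number
    together with the filled mask.
    """
    mask = np.zeros(image_shape, dtype=np.uint8)
    pts = np.array(contour_xy, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [pts], 255)      # fill the closed, irregular outline
    return {"pig_id": pig_id, "mask": mask.astype(bool)}
```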
In one embodiment, culling pigs above the liveness threshold based on the pig mask comprises: determining a first center point location of a first frame mask; determining a second center point position of the last frame mask; judging the liveness of pigs according to the distance between the first central point position and the second central point position; and removing pigs above the liveness threshold according to the liveness of the pigs.
Because the respiration rate of the pig can be calculated only in a resting state, whether the pig is active or not can be judged by comparing the distances between the center points of the two masks (namely the first frame mask and the last frame mask) of the same pig, and then the respiration rate corresponding to the active pig is removed, so that the accuracy of a calculation result is improved. The standard for judging whether the pig is active or not is not fixed, and other various factors, such as specific breeding environment, sex of the pig, age of the pig and the like, need to be combined for comprehensive judgment.
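A minimal sketch of this activity check, assuming boolean masks for the first and last frames and a pixel-distance threshold chosen purely for illustration, could look as follows; the helper name is_resting and the 15-pixel value are not from the original disclosure.

```python
import numpy as np


def is_resting(first_mask, last_mask, max_shift_px=15.0):
    """Return True if a pig barely moved between the first and last frame.

    Both masks are boolean arrays for the same pig; the center point is the
    centroid of the mask pixels. max_shift_px is an illustrative threshold
    that should be tuned to the pen layout, camera height and pig size.
    """
    c1 = np.argwhere(first_mask).mean(axis=0)   # (row, col) centroid
    c2 = np.argwhere(last_mask).mean(axis=0)
    return np.linalg.norm(c1 - c2) <= max_shift_px
```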
Further, with the porcine mask obtained in step S202, in step S203, pixel intensity values are extracted from the region image of the porcine mask. As can be seen from the above description, since the pig mask includes a set of coordinate points, each coordinate point corresponds to a pixel point on the image. Thus, the intensity values of the pixels can be extracted by the porcine mask.
Further, in step S204, the pixel intensity values are used as time-series data for signal processing. A time-domain signal can be constructed from the data describing how the pixel intensity values change over time.
Further, in step S205, the time-series data is converted into the frequency domain, for example by a Fast Fourier Transform (FFT). This step helps identify the major frequency components in the data that may be related to the respiration of the target object (e.g., a pig).
Further, in step S206, the respiratory rate is calculated from the peak in the frequency domain to obtain calculation result data. Before calculating the breathing frequency, a specific frequency range needs to be determined to analyze the FFT result. This frequency range is set based on the frequency band in which the expected respiratory frequency may occur.
In one embodiment, calculating the respiratory rate from the peaks in the frequency domain further comprises: setting a first protection area and a second counting area according to the peak value in the frequency domain; determining the confidence level of the pixel corresponding to the peak value according to the ratio of the energy of the first protection area to the energy of the second counting area; wherein the first protection area is smaller than the second counting area.
First, the largest peak points in the FFT result are found within the set frequency range. These peak points represent the most significant frequency components, which may be related to the respiratory rate. Next, the respiratory rate is calculated from these frequency peaks, converting the peak frequency into breaths per minute. Finally, a smaller first guard area and a larger second counting area are set around each peak point. As will be appreciated by those skilled in the art, confidence is a measure of reliability: a high confidence indicates that the FFT energy is concentrated around that frequency, suggesting it is produced by some stable periodic process (e.g., breathing), whereas a low confidence indicates a more diffuse energy distribution, meaning the frequency may be caused by noise or other non-periodic processes.
The confidence of the present invention is calculated from the ratio of the FFT energy in the first guard area to the FFT energy in the second counting area. If most of the energy of a peak is concentrated in the smaller area, the confidence of that frequency bin is higher. In one embodiment scenario, when the confidence is greater than 0.4, the pixel is considered usable in post-processing for the result statistics.
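The following sketch illustrates one way such an energy-ratio confidence could be computed; the guard and counting window widths (in frequency bins) and the function name are assumptions made for the example.

```python
import numpy as np


def peak_confidence(spectrum, peak_idx, guard_bins=1, count_bins=5):
    """Ratio of FFT energy near the peak (guard area) to the energy in a
    wider counting area around it; values close to 1 mean the energy is
    tightly concentrated at the peak. Window widths are illustrative.
    """
    power = np.abs(spectrum) ** 2
    lo_g, hi_g = max(0, peak_idx - guard_bins), peak_idx + guard_bins + 1
    lo_c, hi_c = max(0, peak_idx - count_bins), peak_idx + count_bins + 1
    guard_energy = power[lo_g:hi_g].sum()
    count_energy = power[lo_c:hi_c].sum()
    return float(guard_energy / count_energy) if count_energy > 0 else 0.0
```

Results whose confidence exceeds the 0.4 level mentioned above would then be kept for the statistics described in the following paragraphs.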
In one embodiment, calculating the respiratory rate of the pig using the gray scale image based on the video data to be calculated to obtain the calculation result data further comprises: and performing non-maximum suppression on the calculation result data, and screening the calculation result data according to a preset threshold value of the confidence coefficient.
Typically, a suitable threshold range is found by manual testing. Since the confidence is a value between 0 and 1, one embodiment of the preset threshold may be, for example, 0.45. If the tolerance for active targets is high, the threshold may be set smaller; conversely, if the tolerance for active targets is low, the threshold may be set larger.
In one embodiment, according to the video data to be calculated, the gray scale image is used to calculate the respiration rate of the pig, so as to obtain the calculation result data, and then the method further comprises the following steps: storing the data; the stored data includes video data to be calculated and/or calculation result data.
Those skilled in the art will appreciate, in light of the teachings of the above embodiments, that both the video data to be calculated and the calculation result data may be saved, using either structured or unstructured databases. For example, open-source or proprietary databases including, but not limited to, MySQL, Oracle and PostgreSQL may be used, and cloud database storage may even be used to improve scalability. The present invention is not limited in this respect, as long as the purpose of storing the data can be achieved. In one embodiment, to optimize database performance, the original or filtered video captured by the device may be saved in a hard-disk recorder while only its access address is saved in the database, to support subsequent retrieval of the video. In the database, calculation results such as the respiratory rate value, unit information, pen information and confidence are stored in the corresponding fields to support subsequent analysis and reference.
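For illustration, the sketch below persists such result fields with Python's built-in sqlite3 module (the disclosure itself names MySQL, Oracle and PostgreSQL; SQLite is used here only to keep the example self-contained); the table name, field names and sample values are assumptions.

```python
import sqlite3

conn = sqlite3.connect("respiration.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS respiration_results (
        pig_id          TEXT,
        unit_info       TEXT,
        pen_info        TEXT,
        breaths_per_min REAL,
        confidence      REAL,
        video_address   TEXT   -- path on the hard-disk recorder, not the video itself
    )
""")
conn.execute(
    "INSERT INTO respiration_results VALUES (?, ?, ?, ?, ?, ?)",
    ("P001", "U1", "pen-3", 46.0, 0.62, "/nvr/2023-12-28/cam07.mp4"),
)
conn.commit()
conn.close()
```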
Fig. 3 is a schematic diagram of a system 300 for detecting respiratory rate in pigs according to some embodiments of the invention.
As shown in fig. 3, the system 300 for detecting respiratory rate in pigs comprises: a perception module 301 configured to obtain raw video data for pig detection; a preprocessing module 302 configured to screen out video data to be calculated that meets preset requirements from the original video data; a calculation module 303 configured to calculate the respiratory rate of the pigs from the video data to be calculated using gray-scale images to obtain calculation result data; and an analysis module 304 configured to perform dimensional analysis on the porcine respiratory rate based on the calculation result data to obtain detection result data.
Specifically, the perception module 301 collects the original pig video data using visible-light video equipment.
The original pig video can be obtained by manual or automatic shooting, and certain shooting rules must be followed. The basic requirement is that the footage restores the real situation of the scene as faithfully as possible; functions such as beautification, denoising and watermarking should be disabled, and color cast or large-scale blurring should be avoided, since these interfere with the recognition results. For example, when manual shooting is employed, acquiring raw video data of pigs includes collecting the original pig video data with a visible-light video device. When an automatic device is used, a movable camera can shoot by setting shooting position points and shooting time points, while a fixed camera can shoot a pig pen at set time points. In one embodiment, the video may be captured at a variety of angles, including overhead, side and other angles, and a variety of shooting devices may be used, including video cameras, mobile phones and still cameras. Shooting video from multiple angles gives the scheme better robustness and generality.
The preprocessing module 302 screens the original video data to obtain video data to be calculated that meets the preset requirements, where the preset requirements include image sharpness, image color cast, shooting time and/or shooting position. Preprocessing the original video data makes the screened video data more suitable for the subsequent gray-scale conversion and result calculation, reducing wasted resources and improving calculation efficiency.
The calculation module 303 is configured with algorithms including common machine vision algorithms, deep-learning vision algorithms and signal processing algorithms. Using video analysis equipment such as a high-performance processor or GPU, the videos passed on from the preprocessing module 302 are screened and identified, the videos that meet the requirements are selected, and they are fed into a preset detection model to calculate the respiratory rate of the pigs. After the calculation result for each pixel is obtained, all calculation results are screened against a confidence threshold on each pixel: a result is considered trustworthy only when its confidence is higher than the threshold. The confidence threshold can be set manually according to the performance of the preset detection model, and can be lowered as far as possible while still guaranteeing accuracy, so as to retain more pixels.
The analysis module 304 is configured to perform dimensional analysis on the porcine respiratory rate according to the calculation result data to obtain detection result data. The dimensional analysis includes one or more of the following: a farm dimension, a herd-unit dimension, a numerical dimension, a proportion dimension and a variation-trend dimension; the detection result data include: the proportion of pigs above a set respiratory rate, the breeding environment condition and/or the trend of environmental change.
This module mainly uses various data analysis (BI) systems and software; by reading and analyzing the calculation result data, it can obtain the proportion of high-respiratory-rate pigs, the thermal discomfort condition and the change trend for each pen. The analysis results are fed back to farm management and used to guide production. Farms with severe heat discomfort are warned promptly, helping decision makers find and handle anomalies in time and reducing the risk of the anomalies becoming more serious.
Fig. 3a is a schematic diagram of an improved configuration of a system 300 for detecting respiratory rate in pigs according to some embodiments of the invention. It should be noted that, for the sake of simplicity, the present invention presents some methods and embodiments thereof as a series of acts and combinations thereof, and those skilled in the art will appreciate that the embodiments described herein can be considered as alternative embodiments, i.e., the acts or modules involved therein are not necessarily required for the implementation of some or some aspects of the present invention. In addition, the description of some embodiments of the present invention is also focused on according to the different schemes. In view of this, those skilled in the art will appreciate that portions of one embodiment of the invention that are not described in detail may be referred to in connection with other embodiments.
As shown in fig. 3a, in some embodiment scenarios, based on the system 300 shown in fig. 3, the system 300 may further comprise: a storage module 303-1 configured to store data, wherein the stored data includes video data to be calculated and/or calculation result data.
The storage module 303-1 may be used to hold historical recognition data. Those skilled in the art will appreciate, in light of the teachings of the above embodiments, that both the video data to be calculated and the calculation result data may be saved, providing data support for data analysis and model optimization. The stored data may use structured or unstructured databases; for example, open-source or proprietary databases including, but not limited to, MySQL, Oracle and PostgreSQL may be used, and cloud database storage may even be used to improve scalability. The present invention is not limited in this respect, as long as the purpose of storing the data can be achieved. In one embodiment, to optimize database performance, the original or filtered video captured by the device may be saved in a hard-disk recorder while only its access address is saved in the database, to support subsequent retrieval of the video. In the database, calculation results such as the respiratory rate value, unit information, pen information and confidence are stored in the corresponding fields to support subsequent analysis and reference.
In some embodiments, as shown in fig. 3a, the system 300 further comprises: the optimization module 305 is configured to compare the detection result data with the manual inspection video data to obtain false alarm data.
The optimization module 305 collects user feedback, including false-detection cases and optimization suggestions, during use of the technical solution of the present invention. When users review the inspection videos and find false-alarm data, they can feed it back to the system background. Background personnel can collect the false-alarm videos to improve or optimize the model algorithms used in the scheme, and can treat newly appearing situations as subsequent development directions for the project. The optimization module in embodiments of the invention may be implemented by, but is not limited to, manual feedback collection, feedback collection by the system, and similar means.
Fig. 4 is a schematic diagram of a computing module 400 according to some embodiments of the invention. It is appreciated that the structure 400 is a specific implementation of the step S103, the method 200 and the calculation module 303, and thus the features described in connection with fig. 1-3 can be similarly applied thereto.
As shown in fig. 4, the calculation module includes: a conversion unit 401 for converting video data to be calculated into a grayscale image; a segmentation unit 402, configured to segment the pig images in the video according to the gray level images by using the pig numbers, so as to generate a pig mask; an extracting unit 403, configured to extract a pixel intensity value according to the region image of the pig mask; a processing unit 404 for regarding the pixel intensity values as time series data for signal processing; a conversion unit 405 for converting time-series data into frequency domain data; and a calculation unit 406 for calculating the respiratory rate from the peak in the frequency domain to obtain calculation result data.
The conversion unit 401 is configured to convert the video data to be calculated into gray-scale images. Before conversion, the video frames are resized, for example to 1280x720 pixels. In one embodiment, the OpenCV library may be used to resize the video frames and perform the gray-scale conversion; the transformed value of every gray level in the [0, 255] interval can be precomputed, so that during the transformation the result is obtained directly by table lookup on the gray value. Gray-scale conversion retains the necessary image information while reducing the computational complexity of subsequent processing.
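As a minimal sketch of such a precomputed lookup table, the code below builds a 256-entry table and applies it with cv2.LUT; the gamma-style mapping is an illustrative choice and not taken from the original disclosure.

```python
import cv2
import numpy as np

# Precompute the transformed value for every gray level in [0, 255]
# (a gamma correction is used here purely as an example transform).
gamma = 0.8
table = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)],
                 dtype=np.uint8)


def transform_gray(gray_frame):
    """Apply the precomputed lookup table to an 8-bit grayscale frame."""
    return cv2.LUT(gray_frame, table)
```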
The segmentation unit 402 is configured to: identify pigs according to changes in the pig gray-scale images to obtain gray-scale images to be segmented; segment the gray-scale images to be segmented using the pig numbers to generate pig masks; and remove pigs above the liveness threshold according to the pig masks.
In order to obtain accurate pig masks, in one embodiment scenario an improved YOLO deep learning model may be used in combination with a pig image dataset from within the farm. By training the YOLO model, the pigs in the video can be accurately identified and the corresponding masks generated.
In the above-mentioned segmentation unit 402, the segmentation of the gray-scale image to be segmented using the pig number to generate a pig mask includes: selecting a first frame image and a last frame image of the gray level image to be segmented; generating a pig outline coordinate point set according to the pig numbers by using the first frame image and the last frame image; generating a first frame mask and a last frame mask according to the pig outline coordinate point set; the mask includes: pig number and pig outline coordinate point set.
The converted gray scale image is submitted to a segmentation unit 402, which segments the pig images in the video with the pig numbers to generate a pig mask. The process of generating a pig mask will be described below by taking an example.
Taking a 10-second video at 25 frames per second as an example, the first frame image and the last frame (250th frame) image are subjected to instance segmentation using the algorithm described below.
The first frame image is first instance-segmented to produce a mask characterizing the coordinates of the pig outline. Specifically, a number is generated for each pig, and a mask is then generated from that number and the outline of each pig. The pig outline is a closed, irregular shape, represented in the data as a point set formed by a number of (x, y) coordinates, each corresponding to a pixel on the image. The generated mask comprises the pig number and the contour coordinate point set attached to that numbered pig. Instance segmentation of the first frame image determines the pixels that are strongly correlated with the calculation, which greatly reduces the subsequent amount of computation.
Thereafter, similarly to the instance segmentation of the first frame, the last frame image is also instance-segmented to create a second mask characterizing the coordinates of the pig outline. The specific segmentation process is not repeated here.
After the pig mask is generated, removing pigs above the liveness threshold according to the pig mask comprises the following steps: determining a first center point location of a first frame mask; determining a second center point position of the last frame mask; judging the liveness of pigs according to the distance between the first central point position and the second central point position; and removing pigs above the liveness threshold according to the liveness of the pigs.
Because the respiration rate of the pigs can be calculated only in a resting state, whether the pigs are active or not can be judged by comparing the distances between the center points of the front and rear masks of the same pigs, and then the respiration rates corresponding to the active pigs are eliminated, so that the accuracy of a calculation result is improved. The specific judging standard of whether the pig is active needs to be combined with various factors to judge, such as specific breeding environment, sex of the pig, age of the pig and the like.
Further, in the extraction unit 403, the pixel intensity values are extracted from the porcine mask generated in the segmentation unit 402 using the region image of the porcine mask.
Further, in the processing unit 404, the pixel intensity value is used as time-series data for signal processing according to the pixel intensity value extracted in the extracting unit 403.
Further, the conversion unit 405 converts the time-series data into frequency-domain data using a Fast Fourier Transform (FFT).
Further, the calculation unit 406 calculates the respiratory rate from the peak in the frequency domain to obtain calculation result data. Specifically, the method comprises the following steps: setting a first protection area and a second counting area according to the peak value in the frequency domain; determining the confidence level of the pixel corresponding to the peak value according to the ratio of the energy of the first protection area to the energy of the second counting area; wherein the first protection area is smaller than the second counting area.
Specifically, the confidence is calculated from the ratio of the FFT energy in the first guard region to the FFT energy in the second count region. If most of the energy of a peak is concentrated in a smaller area, this means that the confidence of the frequency bin is higher. High confidence indicates the energy concentration of the FFT around that frequency, indicating that the frequency may be due to some stable periodic process (e.g., breathing). A low confidence level indicates a more diffuse energy distribution of the FFT, meaning that the frequency may be caused by noise or other non-periodic processes. The breathing frequency is filtered using a preset frequency threshold and a calculated confidence threshold. This frequency is considered valid only if both the respiratory frequency and the confidence level are within reasonable ranges.
Fig. 4a is a schematic diagram illustrating an improvement of a computing module 400 according to some embodiments of the invention.
As shown in fig. 4a, in addition to the structure 400 described above with respect to fig. 4, a post-processing unit 407 is added, so the structure of fig. 4a can be regarded as one possible implementation of the structure 400. In view of this, the description made above with respect to the structure 400 of fig. 4 applies equally to fig. 4a and, for clarity and conciseness, is not repeated here.
As shown in fig. 4a, the post-processing unit is configured to perform non-maximum suppression on the calculation result data, and screen the calculation result data for a preset threshold of the confidence coefficient.
The purpose is to further reduce the amount of calculation result data and save system resources for subsequent analysis while maintaining accuracy. In one embodiment, a non-maximum suppression (NMS) algorithm is used to filter the calculation result data and remove redundant data. Accurate result data are then determined by setting a preset confidence threshold, which may be set to 0.4, for example; when the confidence is greater than 0.4, the pixel is considered usable for the subsequent statistical processing of results. It should be further noted that a suitable confidence threshold range is found by manual testing; since the confidence is a value between 0 and 1, the threshold may also be set to 0.45, for example, as long as it meets the requirements of the practical application environment. If the tolerance for active targets is high, the threshold may be set smaller; if the tolerance is low, the threshold may be set larger. It can also be lowered as far as possible while still guaranteeing accuracy, so as to retain more pixels. The invention is therefore not limited in this respect.
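By way of illustration, the sketch below keeps only high-confidence per-pixel results and greedily suppresses weaker results in a small spatial neighbourhood; treating NMS this way for per-pixel respiratory results, along with the radius value and the data layout, is an assumption made for the example.

```python
import numpy as np


def filter_pixel_results(results, conf_threshold=0.45, radius=5.0):
    """Keep high-confidence pixel results and suppress weaker neighbours.

    results: list of dicts with keys 'xy' (pixel coordinates), 'bpm'
    (breaths/min) and 'conf'. The neighbourhood radius and the way NMS is
    applied to per-pixel results are assumptions for illustration.
    """
    candidates = sorted(
        (r for r in results if r["conf"] >= conf_threshold),
        key=lambda r: r["conf"], reverse=True)
    kept = []
    for cand in candidates:
        if all(np.linalg.norm(np.array(cand["xy"]) - np.array(k["xy"])) > radius
               for k in kept):
            kept.append(cand)
    return kept
```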
With the structure 400 of the computing module, video analysis equipment such as a high-performance processor or GPU, loaded with machine vision algorithms, deep-learning vision algorithms and signal processing algorithms, screens and identifies the videos passed on from the preprocessing module, selects the videos that meet the requirements, and feeds them into the computing module shown in structure 400 to calculate the respiratory rate of the pigs. After the calculation result for each pixel is obtained, all calculation results are screened against the confidence threshold on each pixel; a result is considered trustworthy when its confidence is higher than the threshold, so that more pixels can be retained while accuracy is guaranteed.
In summary, according to the above scheme for detecting the respiratory rate of pigs, the acquired original pig videos are converted into gray-scale images to obtain pig masks, and the respiratory rate of the pigs is then calculated. After multidimensional analysis of the calculated respiratory rate data, the final detected porcine respiratory rate result data are formed. To help ensure a healthy environment in the pig house, the respiratory rate of the pigs is obtained visually and automatically, so that the breathing characteristics of the pigs indicate whether the living temperature of the herd in the pig house is comfortable. Based on the detection results, breeding personnel can adjust the environmental parameters in the pig house in time, improving pig-raising efficiency and facilitating the expansion of large-scale pig farming.
Fig. 5 is a block diagram illustrating an exemplary configuration of an apparatus 500 for detecting respiratory rate in pigs in accordance with an embodiment of the present invention.
As shown in fig. 5, an embodiment of the present invention further provides an apparatus for detecting the respiratory rate of pigs, which includes a memory 501 and a processor 502; the memory 501 stores a computer program, and the processor 502 implements the method according to any one of the above embodiments when the computer program is executed.
Specifically, the apparatus 500 includes a memory 501 and a processor 502. Depending on the application scenario, the processor 502 may take the form of various types of chips, such as a Central Processing Unit (CPU), another general-purpose microprocessor or any other conventional processor. These processors may be selected and configured according to the use environment so that the respiratory rate of pigs can be detected safely and efficiently; the invention is not limited in this respect.
In a further embodiment, the invention also provides a computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements a method according to any of the embodiments of the first aspect. By the method, the detection of the respiratory rate of pigs can be realized, so that the pig raising environment of the pig house can be timely found and adjusted.
The computer-readable storage medium may be any suitable magnetic or magneto-optical storage medium, such as Resistive Random Access Memory (RRAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Enhanced Dynamic Random Access Memory (EDRAM), High-Bandwidth Memory (HBM), Hybrid Memory Cube (HMC), etc., or any other medium that may be used to store the desired information and that may be accessed by an application, a module, or both. Any such computer storage media may be part of, accessible by, or connectable to the device. Any of the applications or modules described herein may be implemented using computer-readable/executable instructions that may be stored or otherwise maintained by such computer-readable media.
While various embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous modifications, changes, and substitutions will occur to those skilled in the art without departing from the spirit and scope of the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. The appended claims are intended to define the scope of the invention and are therefore to cover all equivalents or alternatives falling within the scope of these claims.

Claims (10)

1. A method for detecting respiratory rate in pigs comprising:
obtaining original video data of pigs;
screening out video data to be calculated, which meets preset requirements, according to the original video data;
according to the video data to be calculated, calculating the respiratory rate of pigs by using a gray level image so as to obtain calculation result data;
and carrying out dimension analysis on the respiratory rate of the pig according to the calculation result data so as to obtain detection result data.
2. The method of claim 1, wherein calculating the porcine respiratory rate using the gray scale image based on the video data to be calculated to obtain calculation result data comprises:
converting the video data to be calculated into a gray image;
segmenting the pig images in the video by pig number according to the gray level image, so as to generate a pig mask;
extracting pixel intensity values from the image region covered by the pig mask;
using the pixel intensity values as time-series data for signal processing;
converting the time-series data into the frequency domain;
and calculating the respiratory frequency according to the peak value in the frequency domain to obtain calculation result data.
3. The method of claim 2, wherein segmenting the pig image in the video using the pig number based on the gray scale image to generate a pig mask comprises:
identifying the pigs according to changes in the gray level image, so as to obtain a gray level image to be segmented;
segmenting the gray level image to be segmented by pig number to generate a pig mask; and
removing pigs above a liveness threshold according to the pig mask.
4. The method according to claim 3, wherein segmenting the gray level image to be segmented by pig number to generate a pig mask comprises:
selecting a first frame image and a last frame image of the gray level image to be segmented;
generating a set of pig outline coordinate points for each pig number from the first frame image and from the last frame image, respectively; and
generating a first frame mask and a last frame mask according to the sets of pig outline coordinate points; wherein
each mask comprises a pig number and a set of pig outline coordinate points.
5. The method of claim 4, wherein removing pigs above the liveness threshold according to the pig mask comprises:
determining a first center point position of the first frame mask;
determining a second center point position of the last frame mask;
determining the liveness of a pig according to the distance between the first center point position and the second center point position; and
removing pigs above the liveness threshold according to their liveness.
6. The method of claim 2, wherein calculating the respiratory rate from the peaks in the frequency domain further comprises:
setting a first protection area and a second counting area according to the peak value in the frequency domain;
determining the confidence level of the pixel corresponding to the peak value according to the ratio of the energy of the first protection area to the energy of the second counting area; wherein,
the first protection area is smaller than the second counting area.
7. The method of claim 1, wherein the dimension analysis comprises one or more of: a farm dimension, a pig herd unit dimension, a numerical dimension, a proportion dimension and a variation trend dimension; and
the detection result data comprise: the proportion of pigs above a preset respiratory frequency, the breeding environment condition and/or the trend of environmental change.
8. A system for detecting respiratory rate in pigs comprising:
a perception module configured to acquire original video data of the pigs to be detected;
a preprocessing module configured to screen out video data to be calculated, which meets preset requirements, according to the original video data;
a calculating module configured to calculate the respiratory rate of pigs using a gray level image according to the video data to be calculated, so as to obtain calculation result data; and
an analysis module configured to carry out dimension analysis on the respiratory rate of the pigs according to the calculation result data, so as to obtain detection result data.
9. An apparatus for detecting respiratory rate of pigs, comprising a memory and a processor, the memory being configured to store a computer program and the processor being configured to implement the method of any one of claims 1-7 when the computer program is executed.
10. A computer readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements the method according to any of claims 1-7.
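As a hedged illustration of the liveness screening recited in claims 3 to 5 above (not the patent's own code; the threshold value and data layout are assumptions), the sketch below compares the centre points of each pig's first-frame and last-frame masks and removes pigs whose displacement suggests they were too active for a reliable breathing measurement.

```python
import numpy as np

def cull_active_pigs(first_masks, last_masks, liveness_threshold_px=15.0):
    """Keep only pigs whose mask centre barely moves between first and last frame.

    first_masks / last_masks: dicts mapping pig number -> (N, 2) array of
                              outline coordinate points (x, y)
    liveness_threshold_px:    assumed displacement threshold in pixels
    """
    kept = {}
    for pig_id, first_points in first_masks.items():
        if pig_id not in last_masks:
            continue  # the pig left the view; it cannot be measured reliably
        first_center = np.asarray(first_points, dtype=float).mean(axis=0)
        last_center = np.asarray(last_masks[pig_id], dtype=float).mean(axis=0)
        displacement = np.linalg.norm(last_center - first_center)
        if displacement <= liveness_threshold_px:
            # Low activity: keep this pig for the respiratory-rate calculation.
            kept[pig_id] = (first_points, last_masks[pig_id])
    return kept
```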
Application CN202311813227.3A, filed 2023-12-26 (priority date 2023-12-26): Method, system and related products for detecting respiratory rate of pigs; status: Pending; published as CN117788847A (en)

Priority Applications (1)

Application CN202311813227.3A, priority date 2023-12-26, filing date 2023-12-26: Method, system and related products for detecting respiratory rate of pigs

Applications Claiming Priority (1)

Application CN202311813227.3A, priority date 2023-12-26, filing date 2023-12-26: Method, system and related products for detecting respiratory rate of pigs

Publications (1)

Publication CN117788847A (en), published 2024-03-29

Family ID: 90393979

Family Applications (1)

Application CN202311813227.3A (Pending; published as CN117788847A (en)): Method, system and related products for detecting respiratory rate of pigs

Country Status (1)

CN (1): CN117788847A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination