CN112598701B - Automatic tracking and monitoring video acquisition system and method for farm targets

Automatic tracking and monitoring video acquisition system and method for farm targets

Info

Publication number
CN112598701B
CN112598701B (application CN202011228884.8A)
Authority
CN
China
Prior art keywords
target
breeding
image
targets
video
Prior art date
Legal status
Active
Application number
CN202011228884.8A
Other languages
Chinese (zh)
Other versions
CN112598701A (en)
Inventor
田建艳
胥若愚
李济甫
张苏楠
李丽宏
王素钢
翟鑫鹏
Current Assignee
Shanxi Wanli Technology Co ltd
Taiyuan University of Technology
Original Assignee
Shanxi Wanli Technology Co ltd
Taiyuan University of Technology
Priority date
Filing date
Publication date
Application filed by Shanxi Wanli Technology Co ltd, Taiyuan University of Technology
Priority to CN202011228884.8A
Publication of CN112598701A
Application granted
Publication of CN112598701B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06T 5/70
    • G06T 5/94
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/61 - Control of cameras or camera modules based on recognised objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20024 - Filtering details
    • G06T 2207/20028 - Bilateral filtering

Abstract

The invention relates to a system and method for automatic tracking and monitoring video acquisition of farm targets. The system comprises an inspection robot and an upper computer monitoring system: the inspection track is installed above the colony houses of the farm, square limiting devices are arranged at both ends of the track, the mechanical arm is mounted on the inspection track through a connecting rod, the camera is fixed at the free end of the mechanical arm, and the upper computer monitoring system controls the inspection robot. The method comprises the following steps: controlling the motion of the mechanical arm and directional video acquisition based on the detection and positioning results for the breeding targets; the upper computer monitoring system or the breeding personnel judge abnormal behaviors of the breeding targets, determine the position of the suspected abnormal behavior, control the robot to move to the specified position, and perform fixed-point video acquisition; when the improved SSD detection method and the FD-SSD detection method detect excessive aggregation or fighting behavior of a breeding target, the video acquisition time is increased, realizing timed video acquisition. The system and method are reasonably designed and simple to operate, and enable real-time, automatic monitoring of breeding targets.

Description

Automatic tracking and monitoring video acquisition system and method for farm targets
Technical Field
The invention belongs to the field of automatic tracking and video monitoring of farms, and particularly relates to a system and method for automatic tracking and monitoring video acquisition of farm targets.
Background
China is the world's largest livestock and poultry breeding and consuming country, and its total livestock and poultry meat production accounts for half of the world's total. Traditional livestock and poultry breeding is inefficient and cannot meet the ever-increasing market demand, so livestock and poultry breeding in China is developing from a small and medium-sized free-range mode toward a large-scale, intensive breeding mode. In intensive breeding, the traditional method of monitoring breeding targets for abnormalities depends on a large amount of manpower and material resources; its efficiency is low and its effect is difficult to guarantee, so automatic, intelligent monitoring is the future development trend. Applying machine vision to livestock and poultry breeding allows the behavior information of a breeding target to be acquired in real time without disturbing the normal life of the animals.
However, during real-time video acquisition of breeding targets in a farm, the limited shooting angle and range of a fixed camera create visual blind areas that hinder tracking and monitoring. To monitor breeding targets automatically and achieve full video coverage of them, traditional monitoring schemes must install a large number of cameras, which seriously harms the economy of tracking and monitoring and complicates the analysis and processing of multi-source video data. A similar system is an inspection robot from a Zhejiang company applied to the identification of indoor indicator lamps: it converts the collected images to the HSV color space, calculates the mean brightness of the indicator-lamp region in each target image, and counts the number of times the lamp lights within a preset acquisition period; this approach can only process a limited number of images over a period of time, so its real-time performance is poor. Panzhihua College proposed a video detection technique for collecting traffic information that integrates a Gaussian model and the Canny operator; when the Canny operator extracts edges, each edge in the image is identified only once, possible image noise is not identified as edges, and automatic tracking and shooting are not possible. A Tianjin technology company provides a video monitoring system for target detection and tracking based on a multi-feature deep neural network, which trains a deep learning model in advance using the YOLO framework; it cannot automatically judge and select the target to be tracked.
Disclosure of Invention
To solve these technical problems, the invention provides a system and method for automatic tracking and monitoring video acquisition of farm targets. Images are processed with Adaptive Contrast Enhancement (ACE) and Bilateral Filtering (BF); an SSD model is trained with network-based deep transfer learning; a moving-individual detection method for breeding targets combining FD with the SSD is used; and the target position information obtained from machine-vision video processing is sent to the inspection robot, realizing efficient directional, fixed-point, and timed acquisition of breeding target video.
To achieve this purpose, the technical scheme is as follows:
An automatic tracking and monitoring video acquisition system for farm targets comprises an inspection robot and an upper computer monitoring system. The inspection robot comprises a connecting rod, motor driving wheels, square limiting and buffering devices, a conveyor belt, an inspection track, a camera, and a mechanical arm; the mechanical arm comprises a servo motor, a link rod, an electric cylinder, a support rod, and an end clamping device. The inspection track is installed above the colony houses of the farm, with square limiting and buffering devices at both ends of the track and motor driving wheels at both ends below it; the conveyor belt runs on the motor driving wheels. One side of the connecting rod is mounted on the inspection track and the other side is connected to the link rod, with the servo motor between the connecting rod and the link rod; the link rod is joined to the support rod, with the electric cylinder between them; and the camera is fixed at the end of the support rod by the end clamping device.
The upper computer monitoring system mainly comprises two modules: a breeding target monitoring module and an inspection robot control module. The breeding target monitoring module includes: a breeding target video acquisition unit, an image preprocessing unit, an interval frame difference unit, a breeding target detection unit, a moving breeding target detection unit, a breeding target dispersion calculation unit, and a breeding target fighting behavior discrimination unit. The relationship among the units of the breeding target monitoring module is as follows: the breeding target video acquisition unit acquires breeding target video and sends each frame of the acquired video to the image preprocessing unit and the interval frame difference unit; the image preprocessing unit preprocesses the acquired video frames with adaptive contrast enhancement and bilateral filtering, and the processed frames are used by the breeding target detection unit; the breeding target detection unit performs target detection on the preprocessed video frames with the improved SSD algorithm, determines the positions of the breeding targets, and passes the detection results to the breeding target dispersion calculation unit; the breeding target dispersion calculation unit computes the dispersion and judges excessive aggregation of the breeding targets; the interval frame difference unit processes the collected video frames with the inter-frame difference method for the moving breeding target detection unit; the moving breeding target detection unit detects moving breeding target individuals with the SSD and determines their positions, and the results are used by the breeding target fighting behavior discrimination unit; the breeding target fighting behavior discrimination unit judges fighting behavior of the breeding targets.
The inspection robot control module includes a breeding target position information processing unit and an inspection robot motion control unit. The units of the inspection robot control module work with the breeding target monitoring module as follows: the breeding target monitoring module first sends the position information of the identified breeding targets that need tracking and monitoring to the breeding target position information processing unit in the inspection robot control module. Based on the machine-vision detection and positioning results, the breeding target position information processing unit judges whether the specified breeding target is in the central area of the video image; if so, the inspection robot motion control unit sends an instruction to control the mechanical arm so that the camera keeps follow-shooting; if not, the breeding target position information processing unit calculates the offset of the average centroid of the specified breeding targets from the image center. The inspection robot motion control unit takes this offset as the control quantity, calculates the deflection angle and pitch angle of each joint of the mechanical arm, and sends the angle information to the inspection robot controller, which controls the mechanical arm so that the camera places the specified breeding target back in the central area of the image, finally realizing directional and timed video acquisition. Alternatively, the breeding personnel issue an instruction directly, and the inspection robot motion control unit controls the inspection robot accordingly to realize fixed-point video acquisition.
The working principle is as follows: the inspection robot captures the living state of the farm targets and transmits the acquired images to the upper computer monitoring system in time; the upper computer or the on-site breeding personnel send instructions to the inspection robot to control it to move rapidly to a specified position, realizing fixed-point video acquisition; the detected position information of breeding targets showing excessive aggregation and fighting behavior is then sent to the inspection robot, and directional video acquisition is realized by controlling the motion of the mechanical arm and adjusting the camera's viewing angle; the video acquisition time for the designated target is increased, realizing timed video acquisition.
On one hand, the upper computer monitoring system judges whether a breeding target shows suspected abnormal behaviors and determines the position of the suspected abnormal behavior through breeding personnel observing the real-time video combined with the real-time results of other monitoring technologies; it then sends an instruction to the inspection robot, controlling the robot to move rapidly to the specified position and realizing fixed-point video acquisition. On the other hand, the collected real-time images are preprocessed, breeding targets are detected with the SSD and their positions determined, a breeding target dispersion calculation method is designed to detect excessive aggregation, and a fighting behavior detection method combining the Frame Difference method (FD) with the SSD detects fighting; finally, the position information of targets showing excessive aggregation and fighting behavior is sent to the inspection robot, and directional video acquisition is realized by controlling the motion of the mechanical arm and adjusting the camera's viewing angle; the video acquisition time for the designated target is increased, realizing timed video acquisition.
A method for acquiring a farm target automatic tracking monitoring video comprises the following specific steps:
S1: the inspection robot pauses its inspection at the central point of each colony house and adjusts the camera's viewing angle through the translation, deflection, and pitching motion of the mechanical arm, so that it can shoot the breeding colony house from all directions without dead angles and all breeding targets of the colony house are contained in the captured image (breeding target video acquisition unit);
S2: preprocessing each frame of the collected video with adaptive contrast enhancement and bilateral filtering to enhance the detail information of the image and make the edges of breeding targets clearer, and to filter noise while keeping edge information, reducing the interference of environmental factors such as illumination and of water stains, urine stains, and excrement in the breeding colony house;
s3: detecting breeding targets in the preprocessed image by using an improved SSD, determining the centroid coordinate of each breeding target in the image, calculating the dispersion of the breeding targets by using a proposed breeding target dispersion formula, and judging whether the breeding targets excessively aggregate or not by comparing the dispersion with a dispersion threshold;
s4: differentiating two frames at intervals in the collected breeding target video, extracting pixels of the moving breeding target, and eliminating the interference of a breeding background environment and a static breeding target; then detecting the individual moving breeding target by using the SSD, and determining the position of the moving breeding target; finally, identifying whether the fighting action occurs to the moving cultivation target through a cultivation target fighting action judging method;
S5: when other detection means of the farm or the breeding personnel find a suspected abnormal breeding target, sending the position information of the abnormal breeding target to the inspection robot so that it quickly reaches the specified colony house, realizing fixed-point video acquisition;
s6: according to the detection result of the abnormal behavior of the breeding target and in combination with other detection means or instructions sent by breeding personnel, the mechanical arm is controlled to drive the camera to automatically track the abnormal breeding target, so that directional video acquisition is realized, the shooting time of the abnormal breeding target is prolonged, and timing video acquisition is realized.
Further, the specific process of step S2 is as follows:
Because the breeding target video suffers interference from various environmental factors during acquisition, the outline of the breeding target is blurred and image contrast is reduced; illumination and the interference of water stains, urine stains, and excrement in the colony house also introduce noise into the extracted breeding target image. Each frame of the acquired video therefore needs preprocessing, comprising Adaptive Contrast Enhancement (ACE) and Bilateral Filtering (BF). Adaptive contrast enhancement increases contrast while preserving the local information of the breeding target image; bilateral filtering preserves the edge information of the image while effectively filtering its noise.
Adaptive contrast enhancement divides the breeding target image into a low-frequency part and a high-frequency part: the low-frequency part is obtained by low-pass filtering the image, and the high-frequency part is the difference between the original image and the low-frequency part. Since the high-frequency part contains the detail information, it is multiplied by a gain before the parts are recombined to obtain the enhanced breeding target image.
Let x(i, j) be the gray value at pixel coordinate (i, j) of the breeding target image. In the region centered at (i, j) with window size (2n+1) × (2n+1), the local mean and variance can be expressed as:

$$m_x(i,j)=\frac{1}{(2n+1)^2}\sum_{k=i-n}^{i+n}\sum_{l=j-n}^{j+n}x(k,l)$$

$$\sigma_x^2(i,j)=\frac{1}{(2n+1)^2}\sum_{k=i-n}^{i+n}\sum_{l=j-n}^{j+n}\left[x(k,l)-m_x(i,j)\right]^2$$

where $m_x(i,j)$ is the local mean and $\sigma_x^2(i,j)$ is the local variance.

The local mean $m_x(i,j)$ can be taken as an approximation of the low-frequency part of the breeding target image, and the enhanced pixel value for x(i, j) can be expressed as:

$$f(i,j)=m_x(i,j)+G(i,j)\left[x(i,j)-m_x(i,j)\right]$$

where $f(i,j)$ is the enhanced pixel value and $G(i,j)$ is the gain at (i, j).
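For illustration, a minimal sketch of this adaptive contrast enhancement in Python with OpenCV (assuming a grayscale uint8 input and, for simplicity, a constant gain in place of the per-pixel gain G(i, j); the window size and gain value are illustrative, not taken from the patent):

```python
import cv2
import numpy as np

def adaptive_contrast_enhance(gray: np.ndarray, n: int = 3,
                              gain: float = 2.0) -> np.ndarray:
    """Enhance detail via f = m_x + G * (x - m_x).

    The local mean over a (2n+1)x(2n+1) window approximates the
    low-frequency part; the residual x - m_x is the high-frequency
    (detail) part, amplified by the gain before recombination.
    """
    x = gray.astype(np.float32)
    k = 2 * n + 1
    m_x = cv2.blur(x, (k, k))      # local mean: low-frequency part
    high = x - m_x                 # high-frequency (detail) part
    f = m_x + gain * high          # recombine with amplified detail
    return np.clip(f, 0, 255).astype(np.uint8)
```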
The bilateral filter is an important edge-preserving filtering method. It has good edge-preserving smoothing performance and retains the edge information of the breeding target image while effectively filtering noise. It can be defined as:

$$BI_p=\frac{1}{W_p}\sum_{q\in S}G_s(p,q)\,G_r(p,q)\,I_q$$

where $BI_p$ is the filtered pixel value of the breeding target image; S is the filtering window; p is a pixel of the image to be filtered; q is a neighborhood pixel; $I_p$ and $I_q$ are the pixel values at p and q; $G_s$ is the spatial proximity function; $G_r$ is the gray-level similarity function; and $W_p$ is the normalization coefficient, defined as:

$$W_p=\sum_{q\in S}G_s(p,q)\,G_r(p,q)$$

$G_s$ and $G_r$ determine the spatial similarity and gray-level similarity of the neighborhood pixels, respectively; both can be computed with Gaussian functions:

$$G_s(p,q)=\exp\left(-\frac{d(p,q)^2}{2\sigma_s^2}\right)$$

$$G_r(p,q)=\exp\left(-\frac{\left|I_p-I_q\right|^2}{2\sigma_r^2}\right)$$

where d(p, q) is the Euclidean distance between pixels p and q, and $\sigma_s$ and $\sigma_r$ are the standard deviations of the $G_s$ and $G_r$ functions.
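OpenCV's built-in bilateral filter implements this definition directly; a minimal usage sketch follows (the file names and the parameter values corresponding to the window size, σ_r, and σ_s are illustrative assumptions):

```python
import cv2

# d: filter window diameter; sigmaColor ~ sigma_r (gray-level similarity);
# sigmaSpace ~ sigma_s (spatial proximity). Values are illustrative only.
frame = cv2.imread("pen_frame.png")      # hypothetical breeding-pen frame
denoised = cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)
cv2.imwrite("pen_frame_bf.png", denoised)
```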
Further, the specific process of step S3 is as follows:
The upper computer monitoring system performs target detection on the preprocessed video frames with the improved SSD and determines the positions of the breeding targets. A network-based deep transfer learning method reuses trained network parameters as the initial parameters of the breeding target detection model; MobileNet_v2 replaces the Visual Geometry Group network (VGG16) as the base network, producing feature maps of different sizes. To improve the training effect of the model, the Focal Loss function (FL) replaces the Cross Entropy function (CE) as the confidence loss function.
(1) Basic network MobileNet _ v2
MobileNet_v2 introduces Inverted Residuals (IR), which expand and then reduce the dimensionality of the feature map, lowering memory occupation;
(2) deep migration learning based on network
To train the SSD model with limited samples, network-based deep transfer learning is introduced: trained model parameters are migrated into the new network, accelerating the convergence of model training. When constructing the moving breeding target individual detection model, the trained target detection model is taken as a reference and its learned parameters are applied to the moving-target model for training, which improves training efficiency;
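For illustration, a minimal PyTorch sketch of this parameter-migration idea. Note the assumptions: torchvision's off-the-shelf SSDLite uses a MobileNetV3 backbone rather than the MobileNet_v2 named above, and the backbone-freezing policy is a common practice, not a step prescribed by the patent:

```python
import torch
from torchvision.models.detection import (
    SSDLite320_MobileNet_V3_Large_Weights,
    ssdlite320_mobilenet_v3_large,
)

# Migrate parameters learned on a large source task (COCO) into the new
# detector, in the spirit of network-based deep transfer learning.
model = ssdlite320_mobilenet_v3_large(
    weights=SSDLite320_MobileNet_V3_Large_Weights.COCO_V1
)

# Freeze the migrated backbone so early training on the limited
# breeding-target samples updates only the detection heads.
for param in model.backbone.parameters():
    param.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)
```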
(3) focal loss function
During SSD training, prior boxes successfully matched to the real annotation boxes of breeding targets are taken as positive samples, and unmatched prior boxes as negative samples. FL is used as the confidence loss function, with a balance factor α ∈ [0, 1] introduced; the FL function can be expressed as:

$$FL=\begin{cases}-\alpha\,(1-p)^{\gamma}\log(p), & y=1\\ -(1-\alpha)\,p^{\gamma}\log(1-p), & y=0\end{cases}$$

where y is the sample type, p is the model's prediction probability for the breeding target, and γ is an adjustable parameter.

Defining:

$$p_t=\begin{cases}p, & y=1\\ 1-p, & y=0\end{cases}\qquad\alpha_t=\begin{cases}\alpha, & y=1\\ 1-\alpha, & y=0\end{cases}$$

and substituting into the FL function converts it to:

$$FL(p_t)=-\alpha_t\,(1-p_t)^{\gamma}\log(p_t)$$
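As an illustration, a minimal PyTorch sketch of this focal loss (the α and γ defaults are the values commonly used in the focal loss literature, not values from the patent; the clamp guarding the logarithm is likewise an implementation assumption):

```python
import torch

def focal_loss(p: torch.Tensor, y: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted breeding-target probability per prior box, in (0, 1).
    y: sample type, 1 for positive prior boxes and 0 for negative ones.
    """
    p_t = torch.where(y == 1, p, 1.0 - p)
    alpha_t = torch.where(y == 1,
                          torch.full_like(p, alpha),
                          torch.full_like(p, 1.0 - alpha))
    # clamp avoids log(0) on confidently wrong predictions
    return (-alpha_t * (1.0 - p_t) ** gamma
            * torch.log(p_t.clamp_min(1e-8))).mean()

# Toy batch: two positive prior boxes, one negative
p = torch.tensor([0.90, 0.20, 0.05])
y = torch.tensor([1, 1, 0])
print(focal_loss(p, y))
```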
The improved SSD detects the centroid coordinates of the breeding targets in the colony house; the standard deviation of all centroid coordinates serves as the breeding target dispersion, and excessive aggregation is judged against a dispersion threshold. The breeding target dispersion is calculated as follows:

Let $(x_i, y_i)$ be the centroid coordinate of the i-th breeding target detected by the SSD, i = 1, 2, …, N, where N is the number of breeding targets detected in the colony house. The average coordinate $(\bar{x},\bar{y})$ of the breeding targets is:

$$\bar{x}=\frac{1}{N}\sum_{i=1}^{N}x_i,\qquad\bar{y}=\frac{1}{N}\sum_{i=1}^{N}y_i$$

The dispersion of the breeding target centroid coordinates is:

$$\sigma=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[(x_i-\bar{x})^2+(y_i-\bar{y})^2\right]}$$

where σ denotes the dispersion.
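For illustration, a minimal NumPy sketch of this dispersion calculation (the centroid values and the aggregation threshold are illustrative, not values from the patent):

```python
import numpy as np

def breeding_target_dispersion(centroids: np.ndarray) -> float:
    """Standard deviation of N detected centroids about their mean,
    per the formula above. centroids has shape (N, 2)."""
    mean = centroids.mean(axis=0)                    # (x_bar, y_bar)
    sq_dist = ((centroids - mean) ** 2).sum(axis=1)  # squared distance per target
    return float(np.sqrt(sq_dist.mean()))

# Compare against a dispersion threshold to flag excessive aggregation.
centroids = np.array([[120, 80], [125, 82], [118, 85], [122, 79]], dtype=float)
if breeding_target_dispersion(centroids) < 30.0:   # threshold is illustrative
    print("excessive aggregation suspected")
```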
Further, the specific process of step S4 is as follows:
the upper computer monitoring system combines FD and SSD, and provides an FD-SSD identification method for fighting behaviors of the breeding target; firstly, differentiating two spaced frames in a cultivation target video, extracting pixels of a mobile cultivation target, and eliminating the interference of a cultivation background environment and a static cultivation target; then detecting the individual moving breeding target by using the SSD, and determining the position of the moving breeding target; and finally, identifying whether the fighting action occurs to the moving cultivation target through a cultivation target fighting action judging method.
(1) FD-based moving pixel extraction of breeding target
Because the breeding target can move rapidly and violently when fighting, the FD extraction of the moving pixels can avoid the interference of the breeding background environment and provide a basis for the detection of the moving breeding target.
Let the gray values of frame n and frame n+1 of the video at (x, y) be $f_n(x,y)$ and $f_{n+1}(x,y)$. Subtracting the gray values of corresponding pixels of the two frames and taking the absolute value gives the difference image:

$$D_{n+1}(x,y)=\left|f_{n+1}(x,y)-f_n(x,y)\right|$$

where $D_{n+1}(x,y)$ is the difference image.
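As an illustration, a minimal OpenCV sketch of this interval frame difference (the video path, frame interval, and binarization threshold are assumptions for the example):

```python
import cv2

cap = cv2.VideoCapture("pen_video.mp4")   # hypothetical pen video
ok_a, frame_a = cap.read()
for _ in range(5):                        # skip an interval of frames
    ok_b, frame_b = cap.read()
if ok_a and ok_b:
    g_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    g_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g_b, g_a)          # D(x, y) = |f_{n+k} - f_n|
    # keep only clearly moving pixels; threshold value is illustrative
    _, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imwrite("moving_pixels.png", moving)
cap.release()
```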
(2) SSD-based individual detection of moving breeding targets
In order to determine the moving breeding target individuals and the positions and further locate the breeding targets suspected of having fighting behaviors, the improved SSD is adopted to detect the moving breeding target individuals and determine the positions of the moving breeding targets.
(3) Method for judging fighting behavior of breeding target
Aiming at the characteristics of fighting behavior of breeding targets, a method for judging fighting behavior is designed; the specific identification steps are as follows (a code sketch follows the steps):

Step 1: Let n be the number of moving breeding targets detected by the improved SSD, and let $L_k$ be the long-side length of the prediction box of the k-th moving target, k = 1, 2, …, n. Calculate the Euclidean distance $d_{ij}$ between the i-th and j-th moving targets, i = 1, 2, …, n, j = 1, 2, …, n;

Step 2: Judge whether the Euclidean distance $d_{ij}$ between breeding targets is less than the distance threshold $d_{th}(i,j)$, computed as:

$$d_{th}(i,j)=\frac{L_i+L_j}{2}$$

where $L_i$ is the long-side length of the prediction box of the i-th moving target and $L_j$ that of the j-th;

If $d_{ij}$ is less than $d_{th}(i,j)$, and the distances between these two moving targets and every other moving target are greater than the corresponding distance thresholds, fighting behavior is suspected; the frame is defined as the initial frame and the count of suspected-fighting frames is recorded as num = 1. Otherwise, read the next frame image and return to Step 1;

Step 3: Merge the two prediction boxes of the targets suspected of fighting: taking the outermost edges of the two target boxes as the new boundary, construct a rectangular box surrounding both targets, then construct a circular Active Area (AR) for the suspected-fighting targets with the center of the new rectangular box as the circle center and the length of its diagonal as the radius;

Step 4: Read the next frame image and judge, by the method of Steps 1 and 2, whether it contains breeding targets suspected of fighting; if so, judge whether the mean centroid of the suspected-fighting targets lies in the AR; otherwise, repeat this step;

Step 5: If the mean centroid of the suspected-fighting targets is in the AR, set num = num + 1 and update the AR by the method of Step 3, then return to Step 4; otherwise, return directly to Step 4. Execute this loop until all frames in the video have been examined, then compute the ratio of suspected-fighting frames to total frames as the fighting probability RAB. Because the moving-target detection and suspected-fighting judgment carry some error, and not every frame of the video matches the fighting characteristics of breeding targets, RAB is less than 1; to increase the fault tolerance of the discrimination, judge whether RAB is greater than the fighting behavior discrimination threshold $RAB_{th}$; if so, fighting behavior of the breeding target is judged to have occurred.
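For illustration, a minimal Python sketch of Steps 1-5 under stated assumptions: each detection is reduced to a centroid plus a long-side length, so the AR radius below only approximates the merged-box diagonal from those quantities, and the RAB threshold value is illustrative:

```python
import itertools
import math

def suspected_fight_pair(targets):
    """Steps 1-2. targets: list of (cx, cy, long_side) per moving target.
    Returns an (i, j) pair closer than d_th(i, j) = (L_i + L_j) / 2 while
    both members stay farther than the thresholds from all other targets."""
    n = len(targets)
    def dist(a, b):
        return math.hypot(targets[a][0] - targets[b][0],
                          targets[a][1] - targets[b][1])
    def d_th(a, b):
        return (targets[a][2] + targets[b][2]) / 2.0
    for i, j in itertools.combinations(range(n), 2):
        if dist(i, j) >= d_th(i, j):
            continue
        others = [m for m in range(n) if m not in (i, j)]
        if all(dist(k, m) > d_th(k, m) for k in (i, j) for m in others):
            return i, j
    return None

def fight_probability(frame_targets, rab_th=0.5):
    """Steps 3-5, simplified: count frames whose suspected pair's mean
    centroid lies inside the current active area AR, then compare the
    ratio RAB with the threshold."""
    num, ar = 0, None                     # ar = (center_x, center_y, radius)
    for targets in frame_targets:
        pair = suspected_fight_pair(targets)
        if pair is None:
            continue
        (x1, y1, l1), (x2, y2, l2) = targets[pair[0]], targets[pair[1]]
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        if ar is None or math.hypot(cx - ar[0], cy - ar[1]) <= ar[2]:
            num += 1
            # Step 3, approximated: radius ~ pair distance + mean long side
            radius = math.hypot(x2 - x1, y2 - y1) + (l1 + l2) / 2.0
            ar = (cx, cy, radius)
    rab = num / max(len(frame_targets), 1)
    return rab, rab > rab_th
```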
Further, the specific process of step S5 is as follows:
on one hand, the upper computer monitoring system judges whether suspected abnormal behaviors occur in the breeding target or not and determines the position of the suspected abnormal behaviors by observing the real-time video by the breeding personnel and combining the real-time monitoring results of other monitoring technologies, then sends an instruction to the track type video inspection robot, controls the robot to rapidly move to the designated position, and realizes fixed-point video acquisition.
Further, the specific process of step S6 is as follows:
According to the machine-vision detection and positioning result, judge whether the designated breeding target is in the central area of the video image; if so, the camera keeps shooting for a period of time, until the shooting time ends or the target moves out of the central area; if the breeding target is not in the central area, the offset from the target centroid to the image center is used as the control quantity, and the deflection angle and pitch angle of the mechanical arm are adjusted to control its motion so that the target is placed back in the central area of the image, finally realizing directional video acquisition.
In the pixel coordinate system of the image acquired by the camera, let the central point of the image be m(u, v), the centroid of the breeding target be o(a, b), and α be the side length of the square box at the image center. The criteria for judging whether the centroid is in the central area are:

$$\left|a-u\right|\le\frac{\alpha}{2}$$

$$\left|b-v\right|\le\frac{\alpha}{2}$$

If and only if both formulas hold, the centroid is judged to be in the central area of the image, and the camera keeps shooting; if either formula fails, the deflection angle and pitch angle of the mechanical arm are adjusted to control its motion so that the target is placed back in the central area, finally realizing directional video acquisition. When fighting behavior or excessive aggregation occurs, there is more than one centroid; the average centroid must be computed and the distance from the average centroid to the central point used for control. The two formulas remain applicable in the multi-centroid case.
After the deviation of the average centroid from the central point is determined, it can be used as the control quantity for the mechanical arm. The relation between the U-axis offset and the rotation angle θ of the servo motor (6) is expressed by a function $f_1(\Delta u)=\theta$, and the relation between the V-axis offset and the extension length l of the electric cylinder (8) by a function $f_2(\Delta v)=l$. If $\Delta u>0$, the servo motor (6) drives the camera to rotate toward the positive direction of the U axis; if $\Delta v>0$, the electric cylinder (8) drives the camera to rotate toward the positive direction of the V axis.
Directional video acquisition is thus realized by taking the deviation between the breeding target centroid and the image central point as the control value. Meanwhile, in the colony house of the abnormal breeding target, the video acquisition time is prolonged according to the instruction, realizing timed video acquisition.
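For illustration, a minimal Python sketch of this centroid-offset control (the proportional gains standing in for f1 and f2 are assumptions; the real mappings depend on the camera optics and arm geometry, which the patent does not specify):

```python
def centroid_offset(centroids, image_center, alpha):
    """Check the mean centroid against the central square of side alpha.
    Returns None when the camera should keep shooting, else the
    (delta_u, delta_v) offset used as the control quantity."""
    u, v = image_center
    a = sum(c[0] for c in centroids) / len(centroids)
    b = sum(c[1] for c in centroids) / len(centroids)
    du, dv = a - u, b - v
    if abs(du) <= alpha / 2 and abs(dv) <= alpha / 2:
        return None                     # mean centroid in the central area
    return du, dv

# Hypothetical proportional control mappings standing in for f1 and f2
K_YAW, K_PITCH = 0.05, 0.02
def f1(du): return K_YAW * du           # servo motor yaw angle theta
def f2(dv): return K_PITCH * dv         # electric cylinder extension l

offset = centroid_offset([(400, 260), (430, 300)],
                         image_center=(320, 240), alpha=100)
if offset is not None:
    theta, ell = f1(offset[0]), f2(offset[1])
    print(f"command: yaw {theta:+.2f}, cylinder {ell:+.2f}")
```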
Compared with the prior art, the invention has the following beneficial effects: (1) normal inspection and directional, fixed-point, and timed video acquisition at target positions can be completed; the workflow is reasonably arranged and effective video monitoring of breeding targets is realized; (2) the system is reasonably designed and simple to operate; using the inspection robot to carry the camera for breeding target detection effectively eliminates shooting blind areas, saves the purchase and operating cost of a large amount of camera equipment, and reduces the overall cost of video acquisition; (3) the farm is automatically monitored in real time by the upper computer monitoring system, improving efficiency; (4) adaptive contrast enhancement of the breeding target video takes the local information of the image into account, avoiding loss of detail and ensuring the effectiveness of the image; (5) edge-preserving filtering improves the accuracy of breeding target detection and avoids the blurring of detail by environmental factors such as light and noise and by water stains, urine stains, and excrement in the colony house during video acquisition; (6) the SSD detects excessive aggregation of breeding targets, and the combination of FD and SSD detects fighting behavior, giving the method algorithmic advancement and real-time capability.
Drawings
FIG. 1 is an overall structure diagram of the inspection robot;
FIG. 2 is a supervisory computer system;
FIG. 3 is a flow chart of an acquisition method;
FIG. 4 is a diagram of a modified SSD based breeding target detection model;
FIG. 5 is a schematic diagram of the network structure of MobileNet_v2;
FIG. 6 is a diagram of a fighting behavior recognition process of a cultivation target;
1. connecting rod; 2. motor driving wheel; 3. square limiting and buffering device; 4. conveyor belt; 5. inspection track; 6. servo motor; 7. link rod; 8. electric cylinder; 9. support rod; 10. end clamping device; 11. camera.
Detailed Description
The invention is further described below with reference to the accompanying drawings and examples.
As shown in fig. 1, an automatic tracking and monitoring video acquisition system for farm targets comprises an inspection robot and an upper computer monitoring system. The inspection robot comprises a connecting rod 1, motor driving wheels 2, square limiting and buffering devices 3, a conveyor belt 4, an inspection track 5, a camera 11, and a mechanical arm; the mechanical arm comprises a servo motor 6, a link rod 7, an electric cylinder 8, a support rod 9, and an end clamping device 10. The inspection track 5 is installed above the colony houses of the farm; square limiting and buffering devices 3 are arranged at both ends of the inspection track 5; motor driving wheels 2 are arranged at both ends below the inspection track 5 and carry the conveyor belt 4. One side of the connecting rod 1 is mounted on the inspection track 5 and the other side is connected to the link rod 7, with the servo motor 6 between the connecting rod 1 and the link rod 7; the link rod 7 is connected to the support rod 9, with the electric cylinder 8 between them; and the camera 11 is fixed at the end of the support rod 9 by the end clamping device 10. Along the inspection track, the robot can patrol horizontally and perform deflection and pitching motion of the mechanical arm joints; patrol driving on the track uses a transmission mode combining a motor with a belt. The camera viewing angle is adjusted through the translation, deflection, and pitching motion of the mechanical arm. The inspection robot has 4 active degrees of freedom, can deflect and pitch in all directions, and can move along the inspection track to shoot the breeding colony house from all directions without dead angles, automatically tracking the breeding target designated by the monitoring system and realizing directional video acquisition.
As shown in fig. 2, the upper computer monitoring system mainly comprises two modules: a breeding target monitoring module and an inspection robot control module. The breeding target monitoring module includes: a breeding target video acquisition unit, an image preprocessing unit, an interval frame difference unit, a breeding target detection unit, a moving breeding target detection unit, a breeding target dispersion calculation unit, and a breeding target fighting behavior discrimination unit. The relationship among the units of the breeding target monitoring module is as follows: the breeding target video acquisition unit acquires breeding target video and sends each frame of the acquired video to the image preprocessing unit and the interval frame difference unit; the image preprocessing unit preprocesses the acquired video frames with adaptive contrast enhancement and bilateral filtering, and the processed frames are used by the breeding target detection unit; the breeding target detection unit performs target detection on the preprocessed video frames with the improved SSD algorithm, determines the positions of the breeding targets, and passes the detection results to the breeding target dispersion calculation unit; the breeding target dispersion calculation unit then computes the dispersion and judges excessive aggregation of the breeding targets; the interval frame difference unit processes the collected video frames with the inter-frame difference method for the moving breeding target detection unit; the moving breeding target detection unit detects moving breeding target individuals with the SSD and determines their positions, and the results are used by the breeding target fighting behavior discrimination unit; the breeding target fighting behavior discrimination unit judges fighting behavior of the breeding targets.
The inspection robot control module includes a breeding target position information processing unit and an inspection robot motion control unit. The units of the inspection robot control module work with the breeding target monitoring module as follows: the breeding target monitoring module first sends the position information of the identified breeding targets that need tracking and monitoring to the breeding target position information processing unit in the inspection robot control module. Based on the machine-vision detection and positioning results, the breeding target position information processing unit judges whether the specified breeding target is in the central area of the video image; if so, the inspection robot motion control unit sends an instruction to control the mechanical arm so that the camera keeps follow-shooting; if not, the breeding target position information processing unit calculates the offset of the average centroid of the specified breeding targets from the image center. The inspection robot motion control unit takes this offset as the control quantity, calculates the deflection angle and pitch angle of each joint of the mechanical arm, and sends the angle information to the inspection robot controller, which controls the mechanical arm so that the camera places the specified breeding target back in the central area of the image, finally realizing directional and timed video acquisition. Alternatively, the breeding personnel issue an instruction directly, and the inspection robot motion control unit controls the inspection robot accordingly to realize fixed-point video acquisition.
Referring to fig. 3, a method for acquiring a farm target automatic tracking monitoring video includes the following steps:
s1: the inspection robot pauses inspection action at the central point of each colony house, and adjusts the visual angle of the camera through the translation, deflection and pitching motion of the mechanical arm, so that the inspection robot can shoot the breeding colony house in an all-dimensional and dead-angle-free manner, and all breeding targets of the colony house are ensured to be contained in a shot image;
S2: preprocessing each frame of the collected video with adaptive contrast enhancement and bilateral filtering to enhance the detail information of the image and make the edges of breeding targets clearer, and to filter noise while keeping edge information, reducing the interference of environmental factors such as illumination and of water stains, urine stains, and excrement in the breeding colony house;
s3: detecting breeding targets in the preprocessed image by using an improved SSD, determining the centroid coordinate of each breeding target in the image, calculating the dispersion of the breeding targets by using a proposed breeding target dispersion formula, and judging whether the breeding targets excessively aggregate or not by comparing the dispersion with a dispersion threshold;
s4: differentiating two frames at intervals in the collected breeding target video, extracting pixels of the moving breeding target, and eliminating the interference of a breeding background environment and a static breeding target; then detecting the individual moving breeding target by using the SSD, and determining the position of the moving breeding target; finally, identifying whether the fighting action occurs to the moving cultivation target through a cultivation target fighting action judging method;
s5: meanwhile, when suspected abnormal breeding targets are found by other detection means of the farm or breeding personnel, the position information of the abnormal breeding targets is sent to the inspection robot, so that the inspection robot can quickly reach a specified colony house, and fixed-point video collection is realized;
s6: according to the detection result of the abnormal behavior of the breeding target and in combination with other detection means or instructions sent by breeding personnel, the mechanical arm is controlled to drive the camera to automatically track the abnormal breeding target, so that directional video acquisition is realized, the shooting time of the abnormal breeding target is prolonged, and timing video acquisition is realized.
The specific process of step S2 is as follows:
Because the breeding target video suffers interference from various environmental factors during acquisition, the outline of the breeding target is blurred and image contrast is reduced; illumination and the interference of water stains, urine stains, and excrement in the colony house also introduce noise into the extracted breeding target image. Each frame of the acquired video therefore needs preprocessing, comprising Adaptive Contrast Enhancement (ACE) and Bilateral Filtering (BF).
Adaptive contrast enhancement divides the breeding target image into a low-frequency part and a high-frequency part: the low-frequency part is obtained by low-pass filtering the image, and the high-frequency part is the difference between the original image and the low-frequency part. Since the high-frequency part contains the detail information, it is multiplied by a gain before the parts are recombined to obtain the enhanced breeding target image.
Let x(i, j) be the gray value at pixel coordinate (i, j) of the breeding target image. In the region centered at (i, j) with window size (2n+1) × (2n+1), the local mean and variance can be expressed as:

$$m_x(i,j)=\frac{1}{(2n+1)^2}\sum_{k=i-n}^{i+n}\sum_{l=j-n}^{j+n}x(k,l)$$

$$\sigma_x^2(i,j)=\frac{1}{(2n+1)^2}\sum_{k=i-n}^{i+n}\sum_{l=j-n}^{j+n}\left[x(k,l)-m_x(i,j)\right]^2$$

where $m_x(i,j)$ is the local mean and $\sigma_x^2(i,j)$ is the local variance.

The local mean $m_x(i,j)$ can be taken as an approximation of the low-frequency part of the breeding target image, and the enhanced pixel value for x(i, j) can be expressed as:

$$f(i,j)=m_x(i,j)+G(i,j)\left[x(i,j)-m_x(i,j)\right]$$

where $f(i,j)$ is the enhanced pixel value and $G(i,j)$ is the gain at (i, j).
The bilateral filter is an important edge-preserving filtering method. It has good edge-preserving smoothing performance and retains the edge information of the breeding target image while effectively filtering noise. It can be defined as:

$$BI_p=\frac{1}{W_p}\sum_{q\in S}G_s(p,q)\,G_r(p,q)\,I_q$$

where $BI_p$ is the filtered pixel value of the breeding target image; S is the filtering window; p is a pixel of the image to be filtered; q is a neighborhood pixel; $I_p$ and $I_q$ are the pixel values at p and q; $G_s$ is the spatial proximity function; $G_r$ is the gray-level similarity function; and $W_p$ is the normalization coefficient, defined as:

$$W_p=\sum_{q\in S}G_s(p,q)\,G_r(p,q)$$

$G_s$ and $G_r$ determine the spatial similarity and gray-level similarity of the neighborhood pixels, respectively; both can be computed with Gaussian functions:

$$G_s(p,q)=\exp\left(-\frac{d(p,q)^2}{2\sigma_s^2}\right)$$

$$G_r(p,q)=\exp\left(-\frac{\left|I_p-I_q\right|^2}{2\sigma_r^2}\right)$$

where d(p, q) is the Euclidean distance between pixels p and q, and $\sigma_s$ and $\sigma_r$ are the standard deviations of the $G_s$ and $G_r$ functions.
The specific process of step S3 is as follows:
The upper computer monitoring system performs target detection on the preprocessed video frames with the improved SSD and determines the positions of the breeding targets. A network-based deep transfer learning method reuses trained network parameters as the initial parameters of the breeding target detection model; MobileNet_v2 replaces the Visual Geometry Group network (VGG16) as the base network, producing feature maps of different sizes. To improve the training effect of the model, the Focal Loss function (FL) replaces the Cross Entropy function (CE) as the confidence loss function.
(1) Basic network MobileNet _ v2
As shown in fig. 4, MobileNet_v2 introduces Inverted Residuals (IR), which expand and then reduce the dimensionality of the feature map, lowering memory occupation;
(2) deep migration learning based on network
To train the SSD model with limited samples, network-based deep transfer learning is introduced: trained model parameters are migrated into the new network, accelerating the convergence of model training. When constructing the moving breeding target individual detection model, the trained target detection model is taken as a reference and its learned parameters are applied to the moving-target model for training, which improves training efficiency;
(3) focal loss function
During SSD training, prior boxes successfully matched to the real annotation boxes of breeding targets are taken as positive samples, and unmatched prior boxes as negative samples. FL is used as the confidence loss function, with a balance factor α ∈ [0, 1] introduced; the FL function can be expressed as:

$$FL=\begin{cases}-\alpha\,(1-p)^{\gamma}\log(p), & y=1\\ -(1-\alpha)\,p^{\gamma}\log(1-p), & y=0\end{cases}$$

where y is the sample type, p is the model's prediction probability for the breeding target, and γ is an adjustable parameter.

Defining:

$$p_t=\begin{cases}p, & y=1\\ 1-p, & y=0\end{cases}\qquad\alpha_t=\begin{cases}\alpha, & y=1\\ 1-\alpha, & y=0\end{cases}$$

and substituting into the FL function converts it to:

$$FL(p_t)=-\alpha_t\,(1-p_t)^{\gamma}\log(p_t)$$
The improved SSD detects the centroid coordinates of the breeding targets in the colony house; the standard deviation of all centroid coordinates serves as the breeding target dispersion, and excessive aggregation is judged against a dispersion threshold. The breeding target dispersion is calculated as follows:

Let $(x_i, y_i)$ be the centroid coordinate of the i-th breeding target detected by the SSD, i = 1, 2, …, N, where N is the number of breeding targets detected in the colony house. The average coordinate $(\bar{x},\bar{y})$ of the breeding targets is:

$$\bar{x}=\frac{1}{N}\sum_{i=1}^{N}x_i,\qquad\bar{y}=\frac{1}{N}\sum_{i=1}^{N}y_i$$

The dispersion of the breeding target centroid coordinates is:

$$\sigma=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[(x_i-\bar{x})^2+(y_i-\bar{y})^2\right]}$$

where σ denotes the dispersion.
The specific process of step S4 is as follows:
as shown in fig. 5, the upper computer monitoring system combines an FD and an SSD to provide an FD-SSD identification method of a fighting behavior of a breeding target; firstly, differentiating two spaced frames in a cultivation target video, extracting pixels of a mobile cultivation target, and eliminating the interference of a cultivation background environment and a static cultivation target; then detecting the individual moving breeding target by using the SSD, and determining the position of the moving breeding target; and finally, identifying whether the fighting action occurs to the moving cultivation target through a cultivation target fighting action judging method.
(1) FD-based moving pixel extraction of breeding target
Because the breeding target can move rapidly and violently when fighting, the FD extraction of the moving pixels can avoid the interference of the breeding background environment and provide a basis for the detection of the moving breeding target.
Let the gray values of frame n and frame n+1 of the video at (x, y) be $f_n(x,y)$ and $f_{n+1}(x,y)$. Subtracting the gray values of corresponding pixels of the two frames and taking the absolute value gives the difference image:

$$D_{n+1}(x,y)=\left|f_{n+1}(x,y)-f_n(x,y)\right|$$

where $D_{n+1}(x,y)$ is the difference image.
(2) SSD-based individual detection of moving breeding targets
In order to determine the moving breeding target individuals and the positions and further locate the breeding targets suspected of having fighting behaviors, the improved SSD is adopted to detect the moving breeding target individuals and determine the positions of the moving breeding targets.
(3) Method for judging fighting behavior of breeding target
Aiming at the characteristics of fighting behaviors of the breeding targets, a method for judging the fighting behaviors of the breeding targets is designed, and the specific identification steps are as follows:
step 1: setting the number of the moving cultivation targets detected by the improved SSD as n, and setting the length of the long side of the prediction frame for detecting the kth moving cultivation target as Lk. Calculating the Euclidean distance d between the ith moving cultivation target and the jth moving cultivation targetijWherein k is 1,2, …, n, i is 1,2, …, n, j is 1,2, …, n;
step 2: determining the Euclidean distance d between the breeding targetsijWhether or not less than a distance threshold dth(i, j), the distance threshold calculation method is as follows:
Figure BDA0002764506260000121
in the formula, LiPredicting the length of the long side of the frame for the ith moving breeding target; l isjAnd predicting the length of the long edge of the frame for the jth moving breeding target.
If d isijIs less than dth(i, j), if the distance between the two moving cultivation targets and the other moving cultivation targets is greater than the corresponding distance threshold, the fighting behavior is suspected to occur, the frame is defined as an initial frame, and the frame number num of the fighting behavior which is suspected to occur is recorded as 1, namely: num is 1. Otherwise, reading the next frame image, and returning to Step 1;
step 3: merging two breeding target prediction frames suspected to have fighting behaviors, constructing a rectangular frame surrounding the two breeding targets suspected to have fighting behaviors by taking the outermost peripheral edges of the two target frames as new boundaries, and constructing a circular Active Area (AR) of the breeding targets suspected to have fighting behaviors by taking the center of the newly constructed rectangular frame as the center of a circle and the length of a diagonal as the radius;
step 4: reading the next frame of image, and judging whether the frame of image contains suspected fighting behaviors of the breeding target according to a suspected fighting behavior judgment method in Step1 and Step 2; if yes, judging whether the suspected fighting behavior culture target mass center mean value is in the AR, otherwise, executing the step again;
step 5: If the mean centroid of the suspected-fighting breeding targets lies inside the AR, set num = num + 1, update the AR as in Step 3, and return to Step 4; otherwise, return directly to Step 4. Repeat this process until all frames of the video have been examined, then compute the ratio of suspected-fighting frames to total frames as the fighting probability RAB. Because moving-target detection and suspected-fighting judgment both carry some error, and not every frame of the video matches the fighting-behavior characteristics of breeding targets, RAB is less than 1. To improve the fault tolerance of the fighting-behavior discrimination, judge whether RAB exceeds a discrimination threshold $RAB_{th}$; if it does, fighting behavior of the breeding targets is confirmed.
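The loop below is a simplified Python sketch of Steps 1–5, assuming the improved SSD's per-frame detections arrive as (x1, y1, x2, y2) boxes; the box format, the reconstructed threshold $d_{th} = (L_i + L_j)/2$, the default $RAB_{th}$, and the handling of the first suspected frame are illustrative assumptions rather than the patent's exact procedure.

```python
import math

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def long_side(box):
    x1, y1, x2, y2 = box
    return max(x2 - x1, y2 - y1)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_th(box_a, box_b):
    # Reconstructed threshold: mean of the two boxes' long sides.
    return (long_side(box_a) + long_side(box_b)) / 2.0

def suspected_pair(boxes):
    """Steps 1-2: a pair is suspected of fighting when it is closer than
    its threshold while both members stay farther than the threshold
    from every other moving target."""
    n = len(boxes)
    for i in range(n):
        for j in range(i + 1, n):
            if dist(center(boxes[i]), center(boxes[j])) >= d_th(boxes[i], boxes[j]):
                continue
            if all(dist(center(boxes[k]), center(boxes[m])) > d_th(boxes[k], boxes[m])
                   for k in (i, j) for m in range(n) if m not in (i, j)):
                return i, j
    return None

def fighting_probability(frames_boxes, rab_th=0.5):
    """Steps 3-5 over per-frame SSD detections; rab_th is hypothetical."""
    num, ar = 0, None
    for boxes in frames_boxes:
        pair = suspected_pair(boxes)
        if pair is None:
            continue
        i, j = pair
        # Merged rectangle around the suspected pair; its center and
        # diagonal length define the circular active region (AR).
        x1 = min(boxes[i][0], boxes[j][0]); y1 = min(boxes[i][1], boxes[j][1])
        x2 = max(boxes[i][2], boxes[j][2]); y2 = max(boxes[i][3], boxes[j][3])
        new_ar = (((x1 + x2) / 2.0, (y1 + y2) / 2.0), math.hypot(x2 - x1, y2 - y1))
        ci, cj = center(boxes[i]), center(boxes[j])
        mean_centroid = ((ci[0] + cj[0]) / 2.0, (ci[1] + cj[1]) / 2.0)
        if ar is None or dist(mean_centroid, ar[0]) <= ar[1]:
            num += 1          # count this frame as suspected fighting
            ar = new_ar       # update the AR as in Step 3
    rab = num / len(frames_boxes) if frames_boxes else 0.0
    return rab > rab_th, rab
```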
The specific process of step S5 is as follows:
the upper computer monitoring system judges whether a breeding target shows suspected abnormal behavior, and determines its position, by combining the breeding personnel's observation of the real-time video with the real-time results of other monitoring technologies; it then sends an instruction to the track-mounted video inspection robot, directing it to move quickly to the designated position and realize fixed-point video acquisition.
The specific process of step S6 is as follows:
as shown in fig. 6, according to the machine-vision image detection and positioning result, the system judges whether the designated breeding target is in the central area of the video image. If so, the camera keeps shooting for a period of time, until the shooting time ends or the target moves out of the central area. If not, the offset from the target center to the image center is used as the control quantity to adjust the deflection angle and pitch angle of the mechanical arm, placing the target back in the central area of the image and finally realizing directional video acquisition.
In the pixel coordinate system of the camera image, let the image center be m(u, v), the breeding-target centroid be o(a, b), and α be the side length of the square region at the center of the image. The criteria for judging whether the centroid lies in the central region are:

$$\left| a - u \right| \le \frac{\alpha}{2}$$

$$\left| b - v \right| \le \frac{\alpha}{2}$$
if and only if both formulas hold is the centroid judged to be in the central area of the image, in which case the camera keeps shooting. If either formula fails, the deflection angle and pitch angle of the mechanical arm are adjusted to place the target back in the central area, finally realizing directional video acquisition. When fighting behavior or excessive aggregation occurs there is more than one centroid; the average centroid is then computed and its distance to the center point is used for control, and the two formulas remain applicable in the multi-centroid case.
After the deviation of the average centroid from the center point is determined, it can be used as the control quantity for the mechanical arm. The relation between the U-axis offset and the rotation angle θ of the servo motor 6 is expressed by the function $f_1(\Delta u) = \theta$, and the relation between the V-axis offset and the extension length l of the electric cylinder 8 by the function $f_2(\Delta v) = l$. If

$$a - u > \frac{\alpha}{2}$$

the servo motor 6 drives the camera to rotate toward the positive U direction; if

$$b - v > \frac{\alpha}{2}$$

the electric cylinder 8 drives the camera to rotate toward the positive V direction.
Directional video acquisition is thus realized by using the deviation between the breeding-target centroid and the image center point as the control quantity; meanwhile, in the colony house of the abnormal breeding target, the video acquisition time is extended on instruction, realizing timed video acquisition.
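The control logic of this step can be sketched in Python as follows; the linear gains standing in for $f_1$ and $f_2$, and all numeric values in the example, are purely hypothetical, since the patent leaves the offset-to-actuator mappings unspecified.

```python
def mean_centroid(centroids):
    n = len(centroids)
    return (sum(c[0] for c in centroids) / n, sum(c[1] for c in centroids) / n)

def track_step(centroids, image_center, alpha):
    """One directional-acquisition step: keep shooting while the (mean)
    centroid is inside the central square of side alpha, otherwise
    return arm commands derived from the offset."""
    u, v = image_center
    a, b = mean_centroid(centroids)
    du, dv = a - u, b - v
    if abs(du) <= alpha / 2 and abs(dv) <= alpha / 2:
        return "keep_shooting", 0.0, 0.0
    theta = 0.05 * du   # f1(du) = theta: servo motor 6 deflection command
    length = 0.02 * dv  # f2(dv) = l: electric cylinder 8 extension command
    return "re_center", theta, length

# Example: two fighting targets' centroids in a 1280x720 frame,
# central square of side 200 px (values are illustrative).
print(track_step([(900.0, 400.0), (950.0, 430.0)], (640, 360), 200))
```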

Claims (5)

1. A method for automatic tracking and monitoring video acquisition of farm targets, characterized by comprising the following steps:
s1: the inspection robot pauses its inspection at the central point of each colony house and adjusts the camera's viewing angle through the translation, deflection and pitching motions of the mechanical arm, so that it can shoot the breeding colony house omnidirectionally and without dead angles, ensuring that all breeding targets of the colony house appear in the captured image;
s2: preprocessing each frame of the collected video with adaptive contrast enhancement and bilateral filtering, so as to enhance the detail information of the image, make the edges of the breeding targets clearer, and filter noise while preserving edge information, reducing interference from environmental factors such as illumination and from water stains, urine stains and excrement in the breeding colony house;
s3: detecting the breeding targets in the preprocessed image with an improved Single Shot MultiBox Detector (SSD), determining the centroid coordinates of each breeding target in the image, calculating the dispersion of the breeding targets, and judging whether the breeding targets are excessively aggregated by comparing the dispersion with a dispersion threshold;
the upper computer monitoring system adopts the improved SSD to perform target detection on the preprocessed video frames and determine the positions of the breeding targets; a network-based deep transfer learning method reuses trained network parameters as the initial parameters of the breeding-target detection model; MobileNet_v2 replaces the Visual Geometry Group network (VGG16) as the base network to obtain feature maps of different sizes, and, to improve the model training effect, the focal loss (FL) function replaces the cross-entropy (CE) function as the confidence loss function;
(1) basic network MobileNet_v2
MobileNet_v2 introduces inverted residuals (IR), which first expand and then reduce the dimensionality of the feature map, lowering memory occupation;
(2) network-based deep transfer learning
because the SSD model is trained with limited samples, network-based deep transfer learning is introduced: migrating already-trained model parameters to the new network speeds up convergence during training; in constructing the moving-breeding-target individual detection model, a trained target detection model is taken as reference and its learned parameters are applied to the training, which improves model training efficiency;
(3) focal loss function
in the training process of the SSD, prior boxes successfully matched to a ground-truth annotation box of a breeding target serve as positive samples and unmatched prior boxes as negative samples; FL is used as the confidence loss function, with a balance factor α ∈ [0, 1] introduced (illustrated in the sketch following this claim). The FL function can be expressed as:

$$FL(p) = \begin{cases} -\alpha \left(1-p\right)^{\gamma} \log(p), & y = 1 \\ -\left(1-\alpha\right) p^{\gamma} \log(1-p), & y = 0 \end{cases}$$
wherein y is the sample type; p is the model's predicted probability for the breeding target; γ is an adjustable parameter;
defining:
$$p_t = \begin{cases} p, & y = 1 \\ 1 - p, & \text{otherwise} \end{cases}$$

$$\alpha_t = \begin{cases} \alpha, & y = 1 \\ 1 - \alpha, & \text{otherwise} \end{cases}$$
substituting the above definitions, the FL function can be rewritten as:

$$FL(p_t) = -\alpha_t \left(1 - p_t\right)^{\gamma} \log(p_t)$$
the centroid coordinates of the breeding targets in the breeding colony house are detected with the improved SSD, and the standard deviation of all centroid coordinates is computed as the breeding-target dispersion; excessive aggregation of the breeding targets is judged by setting a dispersion threshold (see the sketch following this claim). The breeding-target dispersion is calculated as follows:
let $(x_i, y_i)$ be the centroid coordinates of the i-th breeding target detected by the SSD, where $i = 1, 2, \ldots, N$ and N is the number of breeding targets detected in the breeding colony house; the average coordinates $(\bar{x}, \bar{y})$ of the breeding targets are calculated as:

$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i$$
the dispersion of the breeding-target centroid coordinates is:

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[\left(x_i - \bar{x}\right)^2 + \left(y_i - \bar{y}\right)^2\right]}$$
wherein σ is dispersion;
s4: differencing two spaced frames of the collected breeding-target video to extract the pixels of moving breeding targets and eliminate interference from the breeding background environment and stationary breeding targets; detecting individual moving breeding targets with the SSD and determining their positions; finally, identifying whether the moving breeding targets are fighting via the breeding-target fighting-behavior discrimination method;
(1) FD-based moving pixel extraction of breeding target
The FD extraction of the moving pixels can avoid the interference of the breeding background environment and provide a basis for the detection of moving breeding targets;
let the gray values of frame n and frame n+1 of the video at pixel (x, y) be $f_n(x,y)$ and $f_{n+1}(x,y)$; subtracting the gray values of corresponding pixels of the two frames and taking the absolute value yields the difference image of the two frames:

$$D_{n+1}(x,y) = \left| f_{n+1}(x,y) - f_n(x,y) \right|$$

where $D_{n+1}(x,y)$ is the difference image;
(2) SSD-based individual detection of moving breeding targets
Detecting a moving breeding target individual by adopting an improved SSD, and determining the position of the moving breeding target;
(3) method for judging fighting behavior of breeding target
Aiming at the characteristics of fighting behaviors of the breeding targets, a method for judging the fighting behaviors of the breeding targets is designed, and the specific identification steps are as follows:
step 1: let n be the number of moving breeding targets detected by the improved SSD, and let $L_k$ be the long-side length of the prediction box of the k-th moving breeding target; compute the Euclidean distance $d_{ij}$ between the i-th and the j-th moving breeding targets, where $k = 1, 2, \ldots, n$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, n$;
step 2: determine whether the Euclidean distance $d_{ij}$ between breeding targets is less than a distance threshold $d_{th}(i,j)$, computed as:

$$d_{th}(i,j) = \frac{L_i + L_j}{2}$$

where $L_i$ is the long-side length of the prediction box of the i-th moving breeding target and $L_j$ is that of the j-th moving breeding target;
if $d_{ij}$ is less than $d_{th}(i,j)$ while the distances from these two moving breeding targets to every other moving breeding target exceed the corresponding thresholds, fighting is suspected: the frame is defined as the initial frame and the count of suspected-fighting frames is set to num = 1; otherwise, read the next frame and return to step 1;
step 3: merge the prediction boxes of the two breeding targets suspected of fighting: construct a rectangle enclosing both targets, taking the outermost edges of the two boxes as the new boundary, then construct a circular active region (AR) for the suspected-fighting breeding targets, centered at the center of the new rectangle with the length of its diagonal as the radius;
step 4: read the next frame and judge, by the suspected-fighting criteria of step 1 and step 2, whether it contains suspected fighting between breeding targets; if so, judge whether the mean centroid of the suspected-fighting breeding targets lies inside the AR; otherwise, repeat this step;
step 5: if the mean centroid of the suspected-fighting breeding targets lies inside the AR, set num = num + 1, update the AR as in step 3, and return to step 4; otherwise, return directly to step 4; repeat this process until all frames of the video have been examined, then compute the ratio of suspected-fighting frames to total frames as the fighting probability RAB; because moving-target detection and suspected-fighting judgment both carry some error, and not every frame of the video matches the fighting-behavior characteristics of breeding targets, RAB is less than 1; to improve the fault tolerance of the fighting-behavior discrimination, judge whether RAB exceeds a discrimination threshold $RAB_{th}$; if it does, fighting behavior of the breeding targets is confirmed;
s5: when other detection means of the farm or the breeding personnel find a suspected abnormal breeding target, sending its position information to the inspection robot so that the inspection robot quickly reaches the designated colony house, realizing fixed-point video acquisition;
s6: according to the detection result of the abnormal behavior of the breeding target and in combination with other detection means or instructions sent by breeding personnel, the mechanical arm is controlled to drive the camera to automatically track the abnormal breeding target, so that directional video acquisition is realized, the shooting time of the abnormal breeding target is prolonged, and timing video acquisition is realized.
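As an illustrative companion to the focal loss and dispersion definitions in claim 1 above, the following numpy sketch implements both; the α and γ defaults, the sample centroids, and the aggregation threshold are assumptions, not values given by the patent.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t), as in claim 1;
    alpha and gamma here are common defaults, not the patent's values."""
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

def dispersion(centroids):
    """Standard deviation of the N centroid coordinates around their mean,
    used as the breeding-target dispersion sigma."""
    pts = np.asarray(centroids, dtype=float)   # shape (N, 2)
    mean = pts.mean(axis=0)                    # (x-bar, y-bar)
    return np.sqrt(((pts - mean) ** 2).sum(axis=1).mean())

# Excessive aggregation would be flagged when sigma falls below a threshold
# (a small sigma means the targets are bunched together); both the sample
# centroids and the threshold below are hypothetical.
sigma = dispersion([(120, 80), (125, 84), (118, 79), (122, 90)])
too_aggregated = sigma < 15.0
```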
2. The method for collecting the video for automatically tracking and monitoring the farm targets according to claim 1, wherein the specific process of the step S2 is as follows:
preprocessing each frame image in the acquired video, and enhancing the contrast by adopting adaptive contrast enhancement on the premise of keeping local information of a breeding target image; bilateral filtering can keep the edge information of the image on the basis of ensuring that the image noise of the breeding target is effectively filtered;
the adaptive contrast enhancement of the breeding-target image divides the image into a low-frequency part and a high-frequency part: the low-frequency part is obtained by low-pass filtering the image, and the high-frequency part is the difference between the original image and the low-frequency part; the high-frequency part contains the detail information and is multiplied by a gain for enhancement, then recombined with the low-frequency part to obtain the enhanced breeding-target image (a sketch follows this claim);
let x(i, j) be the gray value at pixel (i, j) of the breeding-target image; in a window of size (2n+1) × (2n+1) centered at (i, j), the local mean and variance can be expressed as:

$$m_x(i,j) = \frac{1}{(2n+1)^2}\sum_{k=i-n}^{i+n}\sum_{l=j-n}^{j+n} x(k,l)$$

$$\sigma_x^2(i,j) = \frac{1}{(2n+1)^2}\sum_{k=i-n}^{i+n}\sum_{l=j-n}^{j+n} \left[x(k,l) - m_x(i,j)\right]^2$$

where $m_x(i,j)$ is the local mean and $\sigma_x^2(i,j)$ is the local variance;
the local mean $m_x(i,j)$ can be taken as an approximation of the low-frequency part of the breeding-target image, and the enhanced value of x(i, j) can be expressed as:

$$f(i,j) = m_x(i,j) + G(i,j)\left[x(i,j) - m_x(i,j)\right]$$

where f(i, j) is the enhanced pixel value for x(i, j) and G(i, j) is the gain at (i, j);
the bilateral filter keeps the edge information of the breeding target image on the basis of ensuring the effective filtering of noise, and can be defined as:
Figure FDA0003491338320000034
in the formula, BIpThe filtered culture target image pixel value is obtained; s is a filtering window range; p is a pixel point of the image to be filtered; q is a neighborhood pixel point; i ispIs a p-point pixel value; i isqIs a q-point pixel value; gsIs a spatial proximity function; grIs a gray level similarity function; wpTo normalize the coefficients, define:
Figure FDA0003491338320000041
where $G_s$ and $G_r$ determine the spatial similarity and the gray-level similarity of neighborhood pixels, respectively, and can be calculated with Gaussian functions:

$$G_s(p,q) = \exp\!\left(-\frac{d(p,q)^2}{2\sigma_s^2}\right)$$

$$G_r(p,q) = \exp\!\left(-\frac{\left(I_p - I_q\right)^2}{2\sigma_r^2}\right)$$

where d(p, q) is the Euclidean distance between pixels p and q, $\sigma_s$ is the standard deviation of the $G_s$ function, and $\sigma_r$ is the standard deviation of the $G_r$ function.
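As an illustrative companion to claim 2 above, the sketch below implements the adaptive contrast enhancement with a local mean/variance window and then applies OpenCV's built-in bilateral filter; the gain rule G = min(D/σ_local, max_gain), the file name, and all numeric parameters are assumptions, since the patent does not specify the gain.

```python
import cv2
import numpy as np

def adaptive_contrast_enhancement(gray, n=3, max_gain=3.0, D=50.0):
    """Split the image into low/high-frequency parts via a local mean,
    scale the high-frequency detail by a gain, and recombine.
    The gain rule used here is one common choice, not the patent's."""
    x = gray.astype(np.float64)
    k = 2 * n + 1
    m = cv2.blur(x, (k, k))                    # local mean  m_x(i, j)
    var = cv2.blur(x * x, (k, k)) - m * m      # local variance sigma_x^2
    sigma = np.sqrt(np.maximum(var, 1e-6))
    gain = np.minimum(D / sigma, max_gain)     # G(i, j)
    f = m + gain * (x - m)                     # f = m_x + G * (x - m_x)
    return np.clip(f, 0, 255).astype(np.uint8)

# OpenCV's bilateralFilter implements the G_s / G_r weighting of claim 2;
# the window diameter and the two sigmas below are illustrative values.
gray = cv2.imread("pen_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
enhanced = adaptive_contrast_enhancement(gray)
denoised = cv2.bilateralFilter(enhanced, 9, 75, 75)
```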
3. The method for collecting the video for automatically tracking and monitoring the farm targets according to claim 1, wherein the specific process of the step S5 is as follows:
the upper computer monitoring system judges whether a breeding target shows suspected abnormal behavior, and determines its position, by combining the breeding personnel's observation of the real-time video with the real-time results of other monitoring technologies; it then sends an instruction to the rail-mounted video inspection robot, controlling the inspection robot to move quickly to the designated position and realize fixed-point video acquisition.
4. The method for collecting the video for automatically tracking and monitoring the farm targets according to claim 1, wherein the specific process of the step S6 is as follows:
judging whether the designated breeding target is in the central area of the video image according to the machine-vision image detection and positioning result; if so, the camera keeps shooting for a period of time, until the shooting time ends or the target moves out of the central area; if not, the offset from the target center to the image center is used as the control quantity to adjust the deflection angle and pitch angle of the mechanical arm, placing the target back in the central area of the image and finally realizing directional video acquisition:
in the pixel coordinate system of the camera image, let the image center be m(u, v), the breeding-target centroid be o(a, b), and α be the side length of the square region at the center of the image; the criteria for judging whether the centroid lies in the central region are:

$$\left| a - u \right| \le \frac{\alpha}{2}$$

$$\left| b - v \right| \le \frac{\alpha}{2}$$
if and only if both formulas hold is the centroid judged to be in the central area of the image, in which case the camera keeps shooting; if either formula fails, the deflection angle and pitch angle of the mechanical arm are adjusted to place the target back in the central area of the image, realizing directional video acquisition; when fighting behavior or excessive aggregation occurs there is more than one centroid, so the average centroid is computed and its distance to the center point is used for control, the two formulas remaining applicable in the multi-centroid case;
after the deviation of the average centroid from the center point is determined, it can be used as the control quantity for the mechanical arm; the relation between the U-axis offset and the rotation angle θ of the servo motor (6) is expressed by the function $f_1(\Delta u) = \theta$, and the relation between the V-axis offset and the extension length l of the electric cylinder (8) by the function $f_2(\Delta v) = l$; if

$$a - u > \frac{\alpha}{2}$$

the servo motor (6) drives the camera to rotate toward the positive U direction; if

$$b - v > \frac{\alpha}{2}$$

the electric cylinder (8) drives the camera to rotate toward the positive V direction;
directional video acquisition is realized by using the deviation between the breeding-target centroid and the image center point as the control quantity; meanwhile, in the colony house of the abnormal breeding target, the video acquisition time is extended according to the instruction, realizing timed video acquisition.
5. An automatic tracking and monitoring video acquisition system for farm targets for implementing the acquisition method according to any one of claims 1 to 4, characterized in that: the inspection robot comprises an inspection robot and an upper computer monitoring system; the inspection robot comprises a connecting rod (1), a motor driving wheel (2), a square limiting and buffering device (3), a conveying belt (4), an inspection track (5), a camera (11) and a mechanical arm, wherein the mechanical arm comprises a servo motor (6), a connecting rod (7), an electric cylinder (8), a supporting rod (9) and a tail end clamping device (10); the system is characterized in that the inspection track (5) is arranged above a colony house of a farm, square limiting and buffering devices (3) are arranged at two ends of the inspection track (5), motor driving wheels (2) are arranged at two ends below the inspection track (5), conveying belts (4) are arranged on the motor driving wheels (2), one side of the connecting rod (1) is arranged on the inspection track (5), the other side of the connecting rod (1) is connected with a connecting rod (7), a servo motor (6) is arranged between the connecting rod (1) and the connecting rod (7), the connecting rod (7) is connected with a supporting rod (9), an electric cylinder (8) is arranged between the connecting rod (7) and the supporting rod (9), and the tail end of the supporting rod (9) is used for fixing a camera (11) through a tail end clamping device (10);
the upper computer monitoring system mainly comprises two modules: a breeding-target monitoring module and an inspection robot control module; the breeding-target monitoring module comprises a breeding-target video acquisition unit, an image preprocessing unit, an interval frame difference unit, a breeding-target detection unit, a moving-breeding-target detection unit, a breeding-target dispersion calculation unit, a breeding-target fighting-behavior discrimination unit and the like; the units of the breeding-target monitoring module are related as follows: first, the breeding-target video acquisition unit collects the breeding-target video and sends each frame of the collected video to the image preprocessing unit and the interval frame difference unit; the image preprocessing unit preprocesses the collected video frames with adaptive contrast enhancement and bilateral filtering algorithms, and the processed frames are used by the breeding-target detection unit; the breeding-target detection unit performs target detection on the preprocessed video frames with the improved SSD algorithm and determines the positions of the breeding targets, and the detection results are used by the breeding-target dispersion calculation unit; the breeding-target dispersion calculation unit then calculates the dispersion and judges excessive aggregation of the breeding targets; the interval frame difference unit processes the collected video frames with the inter-frame difference method, and the processed frames are used by the moving-breeding-target detection unit; the moving-breeding-target detection unit detects individual moving breeding targets with the SSD and determines their positions, and the results are used by the breeding-target fighting-behavior discrimination unit; the breeding-target fighting-behavior discrimination unit then discriminates fighting behavior of the breeding targets;
the inspection robot control module comprises a breeding-target position information processing unit, an inspection robot motion control unit and the like; the units of the inspection robot control module work with the breeding-target monitoring module as follows: the breeding-target monitoring module first sends the position information of the identified breeding targets requiring tracking and monitoring to the breeding-target position information processing unit of the inspection robot control module; the breeding-target position information processing unit judges, from the machine-vision image detection and positioning result, whether the designated breeding target is in the central area of the video image; if so, the inspection robot motion control unit sends an instruction controlling the mechanical arm to keep the camera follow-shooting; if not, the breeding-target position information processing unit calculates the offset of the designated breeding target's average centroid from the image center; the inspection robot motion control unit takes this offset as the control quantity, calculates the deflection angle and pitch angle of each joint of the mechanical arm, and sends the angle information to the inspection robot controller, controlling the mechanical arm to place the designated breeding target back in the central area of the image, finally realizing directional and timed video acquisition; alternatively, the breeding personnel directly issue an instruction, and the inspection robot motion control unit controls the inspection robot directly according to it, realizing fixed-point video acquisition.
CN202011228884.8A 2020-11-06 2020-11-06 Automatic tracking and monitoring video acquisition system and method for farm targets Active CN112598701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011228884.8A CN112598701B (en) 2020-11-06 2020-11-06 Automatic tracking and monitoring video acquisition system and method for farm targets

Publications (2)

Publication Number Publication Date
CN112598701A CN112598701A (en) 2021-04-02
CN112598701B (en) 2022-03-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant