CN114549371B - Image analysis method and device

Info

Publication number: CN114549371B
Application number: CN202210442535.9A
Authority: CN (China)
Prior art keywords: image, ith, target animal, video, area
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN114549371A
Inventors: 刘际 (Liu Ji), 李中中 (Li Zhongzhong)
Current and original assignee: University of Science and Technology of China (USTC)
Application filed by University of Science and Technology of China (USTC); priority to CN202210442535.9A; publication of CN114549371A; application granted; publication of CN114549371B.

Classifications

    • G06T 5/30: Image enhancement or restoration by the use of local operators; erosion or dilatation, e.g. thinning
    • G06T 7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/10016: Indexing scheme for image analysis or image enhancement; image acquisition modality; video; image sequence
    • G06T 2207/30241: Indexing scheme for image analysis or image enhancement; subject of image; context of image processing; trajectory

Abstract

The invention provides an image analysis method applicable to the field of animal behavior analysis and the technical field of image processing. The image analysis method comprises the following steps: acquiring a video sequence image of a target video; determining a background frame image according to n frames of video images in the video sequence; calculating an ith difference image corresponding to the ith video image by using a background difference method according to the background frame image and the ith video image; filtering the ith difference image to obtain an ith processed image; performing an opening operation on the ith processed image to obtain an ith intermediate image; determining the overlapping area of the ith intermediate image and the (i-1)th mask image of the target animal, the maximum connected domain of the overlapping area on the ith difference image being the ith mask image of the target animal; and determining the motion trajectory of the target animal according to the n mask images and analyzing the behavior of the target animal. The invention also provides an image analysis device.

Description

Image analysis method and device
Technical Field
The invention relates to the field of animal behavior analysis, in particular to an image processing technology, and more particularly to an image analysis method and device.
Background
With the rapid development of machine learning, most detection modules of current animal behavior analysis systems adopt machine learning methods, roughly divided into correlation filtering and deep learning, to realize effective tracking and posture estimation of animals.
However, both kinds of methods consume a great deal of time generating samples and training models, and once the tracked object is replaced the model must be retrained, so these methods are not suitable for high-throughput automatic analysis of different animals.
Disclosure of Invention
In view of the above, the present invention provides an image analysis method and apparatus.
According to a first aspect of the present invention, there is provided an image analysis method comprising:
acquiring a video sequence image of a target video, wherein the video sequence image comprises n frames of video images, and n is greater than or equal to 1;
determining a background frame image corresponding to the video sequence image according to the n frames of video images;
calculating an ith difference image corresponding to the ith video image by using a background difference method according to the background frame image and the ith video image;
filtering the ith difference image to obtain an ith processed image;
performing an opening operation on the ith processed image to obtain an ith intermediate image, wherein i is greater than or equal to 2 and less than or equal to n;
determining an overlapping area of the ith intermediate image and the (i-1) th mask image of the target animal to obtain an ith overlapping area;
calculating the maximum connected domain of the ith overlapping region on the ith difference image, wherein the maximum connected domain on the ith difference image is the ith mask image of the target animal;
determining the motion trail of the target animal according to the n mask images of the target animal;
and analyzing the behavior of the target animal according to the motion trail to obtain a behavior analysis result.
According to an embodiment of the present invention, the image analysis method further includes:
before determining the overlapping area of the ith intermediate image and the (i-1)th mask image of the target animal, judging whether an overlapping area exists between the ith intermediate image and the (i-1)th mask image;
recording abnormal information when there is no overlapping area between the ith intermediate image and the (i-1)th mask image;
and re-initializing the video image of the current frame when the number of recorded abnormal events exceeds a preset threshold value.
According to an embodiment of the present invention, the image analysis method further includes:
and, for the case i = 1, obtaining the first mask image of the target animal by labeling the position region of the target animal in the first processed image after the first difference image is filtered to obtain the first processed image.
According to an embodiment of the present invention, the determining the background frame image corresponding to the video sequence image from the n frames of video images includes:
obtaining a median of the n frames of video images along a time dimension to obtain a median image;
and determining the median image as the background frame image.
According to an embodiment of the present invention, the determining the motion trajectory of the target animal according to the n mask images of the target animal includes:
determining n central positions corresponding to the target animal according to the n mask images;
and determining the motion trail of the target animal according to the n central positions.
According to an embodiment of the present invention, the analyzing the behavior of the target animal according to the motion trajectory includes:
determining the total path of the motion trajectory by using the Euclidean distance between the central positions corresponding to the target animal in adjacent mask images and the frame number of the video images;
and determining target parameters according to the total path of the motion trajectory, the video frame rate, and the frame number of the video images.
According to an embodiment of the present invention, the determining of the target parameters according to the total path of the motion trajectory, the video frame rate, and the frame number of the video images includes:
determining the central movement distance, the first central area latency, the rest time, the edgewise movement distance, the edgewise movement time, the first central area to edge area residence time ratio, and the edge area to first central area shuttle frequency of the target animal according to the total path of the motion trajectory, the video frame rate, the frame number of the video images, a preset first central area, and a preset edge area; wherein the central movement distance comprises the movement path of the target animal whose central position is in the first central area; the first central area latency comprises the residence time of the target animal whose central position is in the first central area; the rest time comprises the duration for which the instantaneous speed of the target animal is less than a preset value; the edgewise movement distance comprises the movement path of the target animal whose central position is in the edge area; the edgewise movement time comprises the residence time of the central position of the target animal in the edge area; the first central area to edge area residence time ratio comprises the ratio of the first central area latency to the edgewise movement time; and the edge area to first central area shuttle frequency comprises the number of times the central position of the target animal enters or leaves the first central area.
According to an embodiment of the present invention, the determining of the target parameters according to the total path of the motion trajectory, the video frame rate, and the frame number of the video images includes:
determining the number of times the target animal enters the open arm area, the open arm activity, the number of times it enters the closed arm area, the closed arm activity, and the second central area activity according to the total path of the motion trajectory, the video frame rate, the frame number of the video images, and the preset second central area, open arm area, and closed arm area,
wherein the number of times of entering the open arm area comprises the number of times the central position of the target animal enters the open arm area; the open arm activity comprises the duration for which the central position of the target animal stays in the open arm area; the number of times of entering the closed arm area comprises the number of times the target animal enters the closed arm area from the second central area; the closed arm activity comprises the duration for which the central position of the target animal stays in the closed arm area; and the second central area activity comprises the movement path and residence time of the central position of the target animal within the second central area.
A second aspect of the present invention provides an image analysis apparatus comprising:
the acquisition module is used for acquiring video sequence images of a target video, wherein the video sequence images comprise n frames of video images, and n is greater than or equal to 1;
a first determining module, configured to determine, according to the n frames of video images, a background frame image corresponding to the video sequence image;
a calculating module, configured to calculate, according to the background frame image and an ith video image, an ith difference image corresponding to the ith video image by using a background difference method;
the processing module is used for filtering the ith difference image to obtain an ith processed image and performing an opening operation on the ith processed image to obtain an ith intermediate image, wherein i is greater than or equal to 2 and less than or equal to n; determining the overlapping area of the ith intermediate image and the (i-1)th mask image of the target animal to obtain an ith overlapping area; and calculating the maximum connected domain of the ith overlapping area on the ith difference image, wherein the maximum connected domain on the ith difference image is the ith mask image of the target animal;
the second determining module is used for determining the motion trail of the target animal according to the n mask images of the target animal;
and the analysis module is used for analyzing the behavior of the target animal according to the motion trail to obtain a behavior analysis result.
According to the embodiment of the invention, a video sequence image of a target video is obtained and a background frame image is determined from it. A difference image corresponding to each video image in the video sequence is then determined using a background difference method, and the difference image is filtered and subjected to an opening operation to obtain an intermediate image. The current intermediate image is compared with the mask image corresponding to the previous frame of video image to determine their overlapping area, and the maximum connected domain of the overlapping area on the current difference image is taken as the mask image of the target animal. After the mask images of the target animal corresponding to all the video images are determined, the motion trajectory of the target animal is determined from all the mask images, and the behavior is analyzed according to the trajectory. The technical scheme provided by the invention therefore needs no sample generation or training, analyzes quickly, can be used directly after the tracked object is replaced, and can be applied to high-throughput automatic analysis.
Drawings
The foregoing and other objects, features and advantages of the invention will be apparent from the following description of embodiments of the invention, which proceeds with reference to the accompanying drawings, in which:
FIG. 1 schematically shows a flow diagram of an image analysis method according to an embodiment of the invention;
FIG. 2 schematically shows a flow chart of a method of image analysis according to another embodiment of the invention;
FIG. 3 schematically shows a block diagram of the configuration of an image analysis apparatus according to an embodiment of the present invention;
FIG. 4 schematically shows an open field tracking interface according to an embodiment of the present invention;
FIG. 5 schematically shows a cross field tracking interface according to another embodiment of the invention;
fig. 6 schematically shows a block diagram of an electronic device adapted to implement the image analysis method according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. It is to be understood that this description is made only by way of example and not as a limitation on the scope of the invention. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs, unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have a alone, B alone, C alone, a and B together, a and C together, B and C together, and/or A, B, C together, etc.).
Commercial animal behavior analysis systems on the market can meet semi-automated, high-throughput analysis requirements, but existing systems depend on the clarity of the behavioral video, cannot reliably guarantee the accuracy of results, and constrain the test conditions of the laboratory. In addition, most animal behavior analysis systems are based on a frame-difference detection algorithm; in the case of a wired connection, the presence of the wires affects the positioning of the target animals and thus the analysis result.
The target tracking algorithms generally adopted in the prior art fall mainly into two categories: generative model methods and discriminative model methods. Generative algorithms are represented by Kalman filtering, particle filtering, Mean-Shift, etc., all of which need to initialize the first frame of a video (i.e., manually box the target in the first frame) and then find, in the next frame, the region closest to the target as the tracking result for that frame. For low-resolution behavioral videos, the feature difference between the target animal region and the background is small and the form of the target changes continuously, so a generative method easily loses the tracked target. In addition, with a generative method, once the target animal is lost in one frame of the video, it can only be corrected afterwards by a human.
The discriminative method is currently the popular method, also called tracking-by-detection. Existing detection modules generally adopt machine learning methods, roughly divided into correlation filtering and deep learning; classical correlation filtering tracking algorithms include CSK, KCF, CN, etc. Although these machine learning methods can realize effective tracking and posture estimation for animals, they require a great deal of time for sample generation and training, and the model must be retrained once the tracked object is replaced, so these methods are not suitable for high-throughput automatic analysis of rodents.
In view of the above technical problems, the present invention captures the behavior of a target animal through video: it acquires the video sequence images of the target video, extracts a background frame image, differences the video images with the background frame by frame to obtain difference images, filters each difference image, and obtains a maximum connected domain, which is the mask image of the target animal; after the mask images of the target animal corresponding to all the video images are determined, the motion trajectory of the target animal is determined from the mask images and analyzed. There is thus no need to generate or train samples, the analysis speed is fast, the method can be used directly after the tracked object is replaced, and it can be applied to high-throughput automatic analysis.
Specifically, an embodiment of the present invention provides an image analysis method, including: acquiring a video sequence image of a target video, wherein the video sequence image comprises n frames of video images, and n is greater than or equal to 1; determining a background frame image corresponding to the video sequence image according to the n frames of video images; calculating an ith difference image corresponding to the ith video image by using a background difference method according to the background frame image and the ith video image; filtering the ith difference image to obtain an ith processed image; performing an opening operation on the ith processed image to obtain an ith intermediate image, wherein i is greater than or equal to 2 and less than or equal to n; determining the overlapping area of the ith intermediate image and the (i-1)th mask image of the target animal to obtain an ith overlapping area; calculating the maximum connected domain of the ith overlapping area on the ith difference image, wherein the maximum connected domain on the ith difference image is the ith mask image of the target animal; determining the motion trajectory of the target animal according to the n mask images of the target animal; and analyzing the behavior of the target animal according to the motion trajectory to obtain a behavior analysis result.
It should be noted that the image analysis method and apparatus provided by the embodiment of the present invention can be used in the field of animal behavior analysis or in the field of image processing technology. The image analysis method and the image analysis device provided by the embodiment of the invention can also be used in any fields except the field of animal behavior analysis and the field of image processing technology. The application fields of the image analysis method and the image analysis device provided by the embodiment of the invention are not limited.
In the technical scheme of the invention, before the personal information of the user is acquired or collected, the authorization or the consent of the user is acquired.
In the technical scheme of the invention, the data acquisition, collection, storage, use, processing, transmission, provision, disclosure, application and other processing are all in accordance with the regulations of relevant laws and regulations, necessary security measures are taken, and the public order and good custom are not violated.
Fig. 1 schematically shows a flow chart of an image analysis method according to an embodiment of the invention.
As shown in FIG. 1, the image analysis method of the embodiment includes operations S110 to S190.
In operation S110, a video sequence image of a target video is obtained, where the video sequence image includes n frames of video images, and n is greater than or equal to 1.
In operation S120, a background frame image corresponding to the video sequence image is determined according to the n frames of video images.
According to an embodiment of the present invention, the determining the background frame image corresponding to the video sequence image from the n frames of video images includes: obtaining a median of the n frames of video images along a time dimension to obtain a median image; and determining the median image as the background frame image.
In one embodiment, the method for extracting the background frame image comprises the following steps: for an input target video I(x, y, t), take the gray value at each pixel position I(x, y) along the time dimension t and compute its median, obtaining the background frame image of the target video, denoted B(x, y). In the case where the imaging apparatus of the experimental scene is fixed, this method extracts the background frame image simply and conveniently.
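As an illustration, the temporal median can be computed directly with OpenCV and NumPy. The following minimal sketch assumes the sampled frames fit in memory; the function name and frame cap are illustrative, not taken from the patent's implementation:

    import cv2
    import numpy as np

    def extract_background(video_path, max_frames=300):
        """B(x, y): per-pixel median of the gray frames along the time dimension t."""
        cap = cv2.VideoCapture(video_path)
        frames = []
        while len(frames) < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        cap.release()
        # Median along axis 0 (time) yields the background frame image.
        return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)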
In operation S130, an ith difference image corresponding to the ith video image is calculated using a background difference method based on the background frame image and the ith video image.
According to the embodiment of the invention, n frames of video images in the video sequence images are subjected to frame-by-frame difference with the background frame image to obtain n difference value images.
In operation S140, the ith difference image is filtered to obtain an ith processed image.
According to an embodiment of the present invention, the filtering processing of the ith difference image includes: performing a thresholding operation on the ith difference image, where a fixed threshold operation or an adaptive threshold operation may be adopted to eliminate pixels in the ith difference image that do not satisfy the preset pixel condition, obtaining the ith processed image from which the ith mask image of the target animal is determined.
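As a hedged sketch of operations S130 and S140, background difference followed by thresholding, assuming OpenCV; the use of Otsu's method as the adaptive choice is an assumption, since the exact threshold operation is left open:

    import cv2

    def difference_and_threshold(frame_gray, background, fixed_thresh=None):
        diff = cv2.absdiff(frame_gray, background)  # ith difference image
        if fixed_thresh is not None:                # fixed threshold operation
            _, processed = cv2.threshold(diff, fixed_thresh, 255, cv2.THRESH_BINARY)
        else:                                       # adaptive choice: Otsu's method
            _, processed = cv2.threshold(diff, 0, 255,
                                         cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return diff, processed                      # ith processed image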
In operation S150, an opening operation is performed on the ith processed image to obtain an ith intermediate image, where i is greater than or equal to 2 and less than or equal to n.
According to an embodiment of the invention, in mathematical morphology an opening operation is defined as erosion followed by dilation, and is used in computer vision and image processing to remove morphological noise.
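In OpenCV the opening operation is available directly. A minimal sketch, continuing from the thresholding sketch above; the 5×5 elliptical structuring element is an assumption chosen to suppress thin, wire-like regions:

    import cv2

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Opening = erosion followed by dilation; removes small and elongated noise.
    # 'processed' is the ith processed image from the thresholding sketch above.
    intermediate = cv2.morphologyEx(processed, cv2.MORPH_OPEN, kernel)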
In operation S160, an overlapping region of the ith intermediate image and the (i-1) th mask image of the target animal is determined, resulting in an ith overlapping region.
In operation S170, a maximum connected component of the i-th overlapping area on the i-th difference image is calculated, wherein the maximum connected component on the i-th difference image is the i-th mask image of the target animal.
According to the embodiment of the invention, in images where the animal is connected by a wire, the opening operation is performed on the processed image to remove small and elongated areas, disconnecting the target animal region from the wire region. The intermediate image obtained after the opening operation is then ANDed with the mask image of the target animal in the previous frame to obtain their overlapping area, which is a partial mask area of the target animal. The maximum connected domain of the overlapping area on the current difference image is then calculated, and this maximum connected domain is the mask image of the target animal in the current frame.
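A sketch of this step, assuming the difference image has already been binarized by the thresholding above; the function name is illustrative. Among the connected domains of the binary difference image, the largest one that meets the overlap region is taken as the new mask:

    import cv2
    import numpy as np

    def next_mask(intermediate, prev_mask, diff_binary):
        """ith mask image: largest connected domain of the binary difference
        image that intersects the overlap of intermediate and prev_mask."""
        overlap = cv2.bitwise_and(intermediate, prev_mask)
        if cv2.countNonZero(overlap) == 0:
            return None  # no overlap: handled by the tracking monitor below
        num, labels = cv2.connectedComponents(diff_binary)
        hit = np.unique(labels[overlap > 0])
        hit = hit[hit != 0]                  # drop the background label
        best = max(hit, key=lambda lab: np.count_nonzero(labels == lab))
        return np.where(labels == best, 255, 0).astype(np.uint8)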
In operation S180, a motion trajectory of the target animal is determined according to the n mask images of the target animal.
According to an embodiment of the present invention, the determining the motion trajectory of the target animal according to the n mask images of the target animal includes: determining n central positions corresponding to the target animal according to the n mask images; and determining the motion trail of the target animal according to the n central positions.
According to an embodiment of the present invention, the method may further include performing smoothing filtering on the motion trajectory to reduce the influence of noise and eliminate outliers.
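As an illustrative sketch, the center position of each mask can be taken as the centroid of the binary mask via image moments, and the trajectory smoothed with a simple moving average; both choices are assumptions, since the text does not fix the center definition or the smoothing filter:

    import cv2
    import numpy as np

    def mask_center(mask):
        """Centroid (x, y) of a binary mask via image moments."""
        m = cv2.moments(mask, binaryImage=True)
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])

    def smooth_track(track, k=5):
        """Moving-average smoothing of an (n, 2) trajectory; damps noise and outliers."""
        w = np.ones(k) / k
        return np.column_stack([np.convolve(track[:, j], w, mode="same")
                                for j in range(2)])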
In operation S190, the behavior of the target animal is analyzed according to the motion trajectory to obtain a behavior analysis result.
According to the embodiment of the invention, a video sequence image of a target video is obtained and a background frame image is determined from it. A difference image corresponding to each video image in the video sequence is then determined using a background difference method, and the difference image is filtered and subjected to an opening operation to obtain an intermediate image. The current intermediate image is compared with the mask image corresponding to the previous frame of video image to determine their overlapping area, and the maximum connected domain of the overlapping area on the current difference image is taken as the mask image of the target animal. After the mask images of the target animal corresponding to all the video images are determined, the motion trajectory of the target animal is determined from all the mask images, and the behavior is analyzed according to the trajectory. The technical scheme provided by the invention therefore needs no sample generation or training, analyzes quickly, can be used directly after the tracked object is replaced, and can be applied to high-throughput automatic analysis.
According to an embodiment of the present invention, the image analysis method further includes: before determining the overlapping area of the ith intermediate image and the (i-1)th mask image of the target animal, judging whether an overlapping area exists between the ith intermediate image and the (i-1)th mask image; recording abnormal information when there is no overlapping area between the ith intermediate image and the (i-1)th mask image; and re-initializing the video image of the current frame when the number of recorded abnormal events exceeds a preset threshold value.
According to the embodiment of the invention, when the method is adopted to analyze images, a dependency exists between adjacent frames of the target video; to reduce the influence of an error in one frame on the tracking of subsequent frames, a tracking monitoring mechanism is added to the method. Suppose the (t-1)th mask image of the target animal is erroneous, so that no overlapping area exists between the tth intermediate image and the (t-1)th mask image; this event is then recorded by the monitor, and when the number of consecutively recorded events exceeds the preset threshold, the initialization operation is performed again, i.e., the current frame video image is re-initialized.
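A minimal sketch of this monitoring mechanism; the threshold value and the re-initialization hook are assumptions:

    MISS_THRESHOLD = 10  # assumed preset threshold for consecutive failures

    def track_with_monitor(masks_in, reinitialize):
        """masks_in yields the per-frame result of next_mask() (None = no overlap);
        reinitialize is a hypothetical hook that re-marks the current frame."""
        misses = 0
        for mask in masks_in:
            if mask is None:
                misses += 1
                if misses > MISS_THRESHOLD:
                    reinitialize()   # re-initialize on the current frame video image
                    misses = 0
            else:
                misses = 0           # a successful frame resets the event counter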
According to an embodiment of the present invention, the image analysis method further includes: for the case i = 1, obtaining the first mask image of the target animal by labeling the position region of the target animal in the first processed image after the first difference image is filtered to obtain the first processed image.
According to the embodiment of the invention, marking the position area of the target animal in the first difference image comprises determining the first mask image by manually designating the position area of the target animal; alternatively, the first difference image may be used directly as the first mask image of the target animal.
Fig. 2 schematically shows a flow chart of an image analysis method according to another embodiment of the present invention.
As shown in FIG. 2, the image analysis method of this embodiment includes operations S201 to S213.
In operation S201, video sequence images of a target video are obtained, where the video sequence images include n frames of video images, the n frames of video images are numbered sequentially, and n is greater than or equal to 1.
In operation S202, a background frame image corresponding to a video sequence image is determined from the n frames of video images.
In operation S203, a first difference image is obtained by subtracting a first frame of video image in the video sequence image from a background frame of image.
In operation S204, a position region of the target animal in the first difference image is marked, resulting in a first mask image.
In operation S205, the ith frame video image is subtracted from the background frame image to obtain an ith difference image.
In operation S206, the ith difference image is filtered to obtain an ith processed image.
In operation S207, an opening operation is performed on the ith processed image to obtain an ith intermediate image.
In operation S208, an overlapping region of the ith intermediate image and the ith-1 mask image is determined, resulting in an ith overlapping region.
In operation S209, a maximum connected component of the ith overlapping region on the ith difference image is calculated, and the maximum connected component on the ith difference image is taken as an ith mask image of the target animal.
In operation S210, it is determined whether a number corresponding to the ith frame video image is equal to n. Operation S211 is performed when the number corresponding to the ith frame video image is equal to n, and operations S205 to S209 are performed when the number corresponding to the ith frame video image is not equal to n.
In operation S211, n center positions corresponding to the target animal are determined from the n mask images.
In operation S212, a motion trajectory of the target animal is determined according to the n center positions.
In operation S213, behavior of the target animal is analyzed according to the motion trajectory to obtain a behavior analysis result.
According to an embodiment of the present invention, the analyzing the behavior of the target animal according to the motion trajectory includes: determining the total path of the motion trajectory by using the Euclidean distance between the central positions corresponding to the target animal in adjacent mask images and the frame number of the video images; and determining target parameters according to the total path of the motion trajectory, the video frame rate, and the frame number of the video images.
According to the embodiment of the invention, let the number of video frames be N and the set of central positions of the target animal be P = {(x_t, y_t), t ∈ [0, N]}. The total path D of the motion trajectory of the target animal can be expressed by the following formula (1):

D = Σ_{t=1}^{N} √((x_t − x_{t−1})² + (y_t − y_{t−1})²)    (1)

According to an embodiment of the invention, the target parameters may comprise an average movement speed and an instantaneous movement speed. The average movement speed v_avg can be expressed by the following formula (2):

v_avg = D · F / N    (2)

The instantaneous movement speed v_t can be expressed by the following formula (3):

v_t = F · √((x_t − x_{t−1})² + (y_t − y_{t−1})²)    (3)

where F denotes the video frame rate.
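Formulas (1)-(3) translate directly into NumPy. The following is a minimal sketch over an array of center positions; the function name and array conventions are assumptions for illustration, not the patent's implementation:

    import numpy as np

    def trajectory_metrics(P, F):
        """P: (N+1, 2) array of center positions; F: video frame rate (fps)."""
        steps = np.linalg.norm(np.diff(P, axis=0), axis=1)  # per-frame distances
        D = steps.sum()             # formula (1): total path of the trajectory
        v_avg = D * F / len(steps)  # formula (2): D divided by N/F seconds
        v_inst = steps * F          # formula (3): per-frame instantaneous speed
        return D, v_avg, v_inst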
According to an embodiment of the present invention, the determining the target parameter according to the total distance of the motion trajectory, the video frame rate, and the frame number of the video image includes: determining a central movement distance, a first central area latency time, a rest time, an edge movement distance, an edge movement time, a first central area-edge area residence time ratio and an edge area-to-first central area shuttle frequency of the target animal according to the total movement track path, the video frame rate, the frame number of the video images, a preset first central area and a preset edge area, wherein the central movement distance comprises a movement path of the target animal with a central position in the first central area, the first central area latency time comprises a residence time of the target animal with the central position in the first central area, the rest time comprises a duration time that an instantaneous speed of the target animal is less than a preset value, and the edge movement distance comprises a movement path of the target animal with the central position in the edge area, the edgewise movement time includes a residence time of a central position of the target animal in the edge area, the first central area-to-edge area residence time ratio includes a ratio of the first central area latency to the edgewise movement time, and the edge area-to-first central area shuttle frequency includes a number of times the central position of the target animal enters the first central area or leaves the first central area.
According to an embodiment of the present invention, a method for determining the preset first central area and the preset edge area comprises: when the open field experiment is carried out, the coordinates of the four vertices of the central area are marked manually in clockwise order in the open field area and recorded as a1, a2, a3, a4; the system then automatically determines the first central area and the edge area, and the target parameters required by the open field experiment are determined according to the first central area and the edge area, the total path of the motion trajectory, the video frame rate, and the frame number of the video images.
According to an embodiment of the present invention, the target parameters required for the open field experiment may include the central movement distance, the first central area latency, the rest time, the edgewise movement distance, the edgewise movement time, the first central area to edge area residence time ratio, and the edge area to first central area shuttle frequency of the target animal.
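To illustrate how the manually marked vertices can drive these parameters, the sketch below classifies each center position against the first central area polygon (a1..a4) and converts frame counts into times; the use of matplotlib's point-in-polygon test and all names are illustrative assumptions:

    import numpy as np
    from matplotlib.path import Path

    def open_field_dwell(P, vertices, F):
        """P: (n, 2) center positions; vertices: (4, 2) polygon a1..a4; F: fps."""
        central = Path(vertices).contains_points(P)   # True where inside a1..a4
        central_latency = central.sum() / F           # seconds in first central area
        edge_time = (~central).sum() / F              # seconds in edge area
        # Shuttle frequency: transitions into or out of the first central area.
        shuttles = int(np.count_nonzero(np.diff(central.astype(int))))
        return central_latency, edge_time, shuttles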
According to an embodiment of the present invention, the determining of the target parameters according to the total path of the motion trajectory, the video frame rate, and the frame number of the video images includes: determining the number of times the target animal enters the open arm area, the open arm activity, the number of times it enters the closed arm area, the closed arm activity, and the second central area activity according to the total path of the motion trajectory, the video frame rate, the frame number of the video images, and the preset second central area, open arm area, and closed arm area; wherein the number of times of entering the open arm area comprises the number of times the central position of the target animal enters the open arm area; the open arm activity comprises the duration for which the central position of the target animal stays in the open arm area; the number of times of entering the closed arm area comprises the number of times the target animal enters the closed arm area from the second central area; the closed arm activity comprises the duration for which the central position of the target animal stays in the closed arm area; and the second central area activity comprises the movement path and residence time of the central position of the target animal within the second central area.
According to an embodiment of the present invention, a method for determining the preset second central area, the open arm area, and the closed arm area comprises: when the cross field experiment is carried out, the eight vertices of the cross-shaped area are marked manually in clockwise order and recorded as a1, a2, a3, a4, a5, a6, a7, a8; the system then automatically divides the area into a second central area, an open arm area, and a closed arm area, wherein the open arm area comprises an upper open arm area and a lower open arm area, and the closed arm area comprises an upper closed arm area and a lower closed arm area; the target parameters required by the cross field experiment are then determined according to the preset second central area, open arm area, and closed arm area, the total path of the motion trajectory, the video frame rate, and the frame number of the video images.
According to an embodiment of the invention, the target parameters required for the cross field experiment comprise the number of times the target animal enters the open arm area, the open arm activity, the number of times it enters the closed arm area, the closed arm activity, and the second central area activity.
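The cross field parameters can be computed in the same way once the eight marked vertices define the region polygons; for example, an entry into a region can be counted as an outside-to-inside transition of the center position, as in this sketch under the same assumptions as above:

    import numpy as np
    from matplotlib.path import Path

    def count_entries(P, region_vertices):
        """Number of times the center position enters the given region polygon."""
        inside = Path(region_vertices).contains_points(P).astype(int)
        return int(np.count_nonzero(np.diff(inside) == 1))  # 0 -> 1 transitions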
Based on the image analysis method, the invention also provides an image analysis device. The apparatus will be described in detail below with reference to fig. 3.
Fig. 3 schematically shows a block diagram of the configuration of an image analysis apparatus according to an embodiment of the present invention.
As shown in fig. 3, the image analysis apparatus 300 of this embodiment includes an acquisition module 310, a first determination module 320, a calculation module 330, a processing module 340, a second determination module 350, and an analysis module 360.
The obtaining module 310 is configured to obtain a video sequence image of a target video, where the video sequence image includes n frames of video images, and n is greater than or equal to 1. In an embodiment, the obtaining module 310 may be configured to perform the operation S110 described above, which is not described herein again.
The first determining module 320 is configured to determine a background frame image corresponding to the video sequence image according to the n frames of video images. In an embodiment, the first determining module 320 may be configured to perform the operation S120 described above, which is not described herein again.
The calculating module 330 is configured to calculate an ith difference image corresponding to the ith video image by using a background difference method according to the background frame image and the ith video image. In an embodiment, the calculating module 330 may be configured to perform the operation S130 described above, which is not described herein again.
The processing module 340 is configured to perform filtering processing on the ith difference image to obtain an ith processed image, and perform an opening operation on the ith processed image to obtain an ith intermediate image, where i is greater than or equal to 2 and less than or equal to n; determine the overlapping area of the ith intermediate image and the (i-1)th mask image of the target animal to obtain an ith overlapping area; and calculate the maximum connected domain of the ith overlapping area on the ith difference image, where the maximum connected domain on the ith difference image is the ith mask image of the target animal. In an embodiment, the processing module 340 may be configured to perform the operations S140 to S170 described above, which are not described herein again.
The second determining module 350 is configured to determine a motion trajectory of the target animal according to the n mask images of the target animal. In an embodiment, the second determining module 350 may be configured to perform the operation S180 described above, and is not described herein again.
The analysis module 360 is configured to analyze the behavior of the target animal according to the motion trajectory to obtain a behavior analysis result. In an embodiment, the analysis module 360 may be configured to perform the operation S190 described above, which is not described herein again.
Any of the modules, sub-modules, units, sub-units, or at least part of the functionality of any of them according to embodiments of the invention may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present invention may be implemented by being divided into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present invention may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the present invention may be at least partially implemented as computer program modules, which, when executed, may perform the corresponding functions.
According to the embodiment of the present invention, any plurality of the obtaining module 310, the first determining module 320, the calculating module 330, the processing module 340, the second determining module 350, and the analyzing module 360 may be combined into one module to be implemented, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present invention, at least one of the obtaining module 310, the first determining module 320, the calculating module 330, the processing module 340, the second determining module 350, and the analyzing module 360 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by any other reasonable manner of integrating or packaging a circuit, such as hardware or firmware, or implemented by any one of three implementations of software, hardware, and firmware, or by any suitable combination of any of them. Alternatively, at least one of the obtaining module 310, the first determining module 320, the calculating module 330, the processing module 340, the second determining module 350 and the analyzing module 360 may be at least partially implemented as a computer program module, which when executed, may perform a corresponding function.
It should be noted that, the image analysis apparatus portion in the embodiment of the present invention corresponds to the image analysis method portion in the embodiment of the present invention, and the description of the image analysis apparatus portion specifically refers to the image analysis method portion, which is not repeated herein.
According to the embodiment of the invention, the image analysis method provided by the invention reduces the requirements on experimental equipment: it avoids complex setup before the behavioral experiment, such as configuring groups, dividing regions, and meeting strict camera position and image acquisition requirements in software; a video can be analyzed simply by recording it, which greatly lowers the software and hardware thresholds required by related experiments.
According to the embodiment of the invention, the image analysis method improves the test speed. Taking 10 videos as an example, on a CPU platform a processing speed of 60 fps, or 120 fps with 2× acceleration, can be maintained; assuming each video has a frame rate of 30 fps and a total length of 5 minutes, the total duration of serially processing the 10 videos is within 15 minutes. If the CPU permits, the videos can also be processed in parallel.
According to the embodiment of the invention, the method has low requirements on video quality, which mainly concern the colors of the background and the target animal and the influence of recording wires; the video format can be AVI or MPEG, the camera resolution is not lower than 320 × 240, and the video stream bit rate is not lower than 928 Kbps.
According to the embodiment of the invention, the analysis accuracy of the method is high: the residence time in a specified area was counted manually for 50 open field videos and 50 cross field videos (each 5 minutes long), and comparison with the results obtained by the method showed the time error to be within ±1 s.
According to the embodiment of the invention, the analysis software in the method, developed based on PyQt5, has good system compatibility and is compatible with both Windows and Linux systems.
It should be noted that, unless the technical implementation requires certain operations to be executed in a particular order, the operations shown in the flowcharts in the embodiments of the present invention need not be executed in sequence, and multiple operations may be executed at the same time.
The image analysis method of the present invention is further illustrated by the following specific examples.
Example 1: open field experiment
The open field experiment, also called the open box experiment, is a method for evaluating the autonomous behavior, exploratory behavior, and tension of an experimental animal in a novel environment. Animals fear the new open environment and are active mainly in the peripheral area, less in the central area, but the animal's exploratory nature also motivates it to move in the central area, and the resulting anxiety can be observed. Central stimulants can significantly increase autonomous activity and reduce exploratory behavior, and doses of antipsychotics can reduce exploratory behavior without affecting autonomous activity.
Preparation of the experiment:
in the open field experiment, rats or mice are generally used as experimental animals. For rats, the open field reaction box is 30-40 cm high with a bottom edge 100 cm long; for mice, the box is 25-30 cm high with a bottom edge 72 cm long. The bottom surface of the open field reaction box is evenly divided into 16 small squares, and a camera is mounted 2 m directly above the reaction box so that its field of view covers the entire inside of the open field. The color of the experimental animal must differ from the background color; if the experimental animal is white, a black open field box is suitable. The video duration is not limited and is generally 6 minutes; the resolution is not lower than 320 × 240 pixels, and the video stream bit rate is not lower than 928 Kbps.
FIG. 4 schematically shows a schematic diagram of a field-open tracking interface in accordance with one embodiment of the present invention.
As shown in fig. 4, the "open field" option is selected on the open field tracking interface; the scale (i.e., the side length of the open field), the tracking start time, and the tracking end time are then set; the four vertex coordinates of the reaction box are then positioned manually (if the "whether the file is the same day" option is checked, only the coordinates of the first video need to be marked; otherwise the coordinates of each video must be marked). If the "whether to mark a mouse position" option is checked, the mouse position is marked in the first tracked frame of each video; otherwise the tracking interface is entered directly after clicking start. Checking "display tracking video in real time" and "display tracking template in real time" displays the tracking result of the mouse in real time. If "2× acceleration" is checked, the video is tracked with frame skipping. After the settings are completed, the start button is clicked and the videos are batch-processed according to the image analysis method.
In the open field experiment, the main measured parameters include: the central movement distance, the central area latency, the rest time, the edgewise movement distance, the edgewise movement time, the central area to edge area residence time ratio, and the edge area to central area shuttle frequency of the target animal. The central movement distance comprises the movement path of the target animal whose central position is in the central area; the central area latency comprises the residence time of the central position of the target animal in the central area; the rest time comprises the duration for which the instantaneous speed of the target animal is less than a preset value; the edgewise movement distance comprises the movement path of the target animal whose central position is in the edge area; the edgewise movement time comprises the residence time of the central position of the target animal in the edge area; the central area to edge area residence time ratio comprises the ratio of the central area latency to the edgewise movement time; and the edge area to central area shuttle frequency comprises the number of times the central position of the target animal enters or leaves the central area.
After the analysis is completed, the "save" button can be clicked to save the tracking result. The tracking result data are generated as a json file, which can be converted into an excel or txt file, and the trajectory is generated as a picture file (TIFF, PDF, JPG).
Example 2: cross field experiment
The cross field experiment (elevated plus maze) investigates the anxiety state of animals by exploiting the conflict between the animal's fear of open environments and its drive to explore novel environments. The cross field comprises two open arms and two closed (walled) arms. Animals with high anxiety tend to stay in the closed arms for a longer time than animals with low anxiety.
Preparation of the experiment:
in the cross field experiment, rats or mice are generally used as experimental animals. For rats, the arm width of the cross field is 10 cm, the arm length 50 cm, the closed arm height 40 cm, and the ground clearance of the cross field 60-70 cm; for mice, the arm width is 5 cm, the arm length 35 cm, the closed arm height 15 cm, and the ground clearance about 40-55 cm. A camera is mounted 2 m above the cross field so that its field of view covers the entire cross field. The color of the experimental animal must differ from the background color; if the experimental animal is white, a black cross field box is suitable. The video duration is not limited and is generally 6 minutes; the resolution is not lower than 320 × 240 pixels, and the video stream bit rate is not lower than 928 Kbps.
FIG. 5 schematically illustrates a cross-field tracking interface in accordance with another embodiment of the invention.
As shown in fig. 5, the "cross field" option is selected on the cross field tracking interface; the scale (i.e., the arm length of the cross field), the tracking start time, and the tracking end time are then set; the eight vertex coordinates of the cross field are then positioned manually (if the "whether the file is the same day" option is checked, only the coordinates of the first video need to be marked; otherwise the coordinates of each video must be marked). If the "whether to mark a mouse position" option is checked, the mouse position is marked in the first tracked frame of each video; otherwise the tracking interface is entered directly after clicking start. Checking "display tracking video in real time" and "display tracking template in real time" displays the tracking result of the mouse in real time. If "2× acceleration" is checked, the video is tracked with frame skipping. After the settings are completed, the start button is clicked and the videos are batch-processed according to the image analysis method.
In the cross field experiment, the main measured parameters include: the number of times the experimental animal enters the open arm area, the open arm activity, the number of times it enters the closed arm area, the closed arm activity, and the central area activity. An entry into the open arm area is counted when all four limbs of the experimental animal enter the arm, or when 80% of the animal's body enters the arm; the open arm activity comprises the duration, in seconds, for which the four limbs (or 80% of the body) of the experimental animal remain in the open arm; an entry into the closed arm area is counted when the target animal enters the closed arm area from the central area, judged by the same four-limb or 80%-of-body criterion; the closed arm activity comprises the duration, in seconds, for which the four limbs (or 80% of the body) of the experimental animal remain in the closed arm; the central area refers to the area where the open arms and the closed arms intersect, and the central area activity refers to the movement distance and residence time of the experimental animal in this area.
After the analysis is completed, the 'save' button can be clicked to save the tracking result. The tracking result data are generated as a json file, which can be converted into an excel file or a txt file, and the trajectory is generated as a picture file (TIFF, PDF or JPG).
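A minimal conversion sketch in Python is shown below. The json field names ('trajectory', 'x', 'y') are assumptions, since the exact layout of the saved file is not specified here, and writing an excel file with pandas additionally requires the openpyxl package.

```python
import json
import pandas as pd
import matplotlib.pyplot as plt

def export_results(json_path, xlsx_path, txt_path, image_path):
    """Convert a saved tracking result and render the trajectory.

    Field names are assumed: the json is taken to hold a "trajectory"
    list of {"x": ..., "y": ...} points.
    """
    with open(json_path, "r", encoding="utf-8") as f:
        result = json.load(f)
    df = pd.DataFrame(result["trajectory"])
    df.to_excel(xlsx_path, index=False)         # excel export (needs openpyxl)
    df.to_csv(txt_path, sep="\t", index=False)  # plain-text export
    plt.plot(df["x"], df["y"], linewidth=0.8)
    plt.axis("equal")
    plt.savefig(image_path, dpi=300)            # .tiff, .pdf or .jpg
```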
Fig. 6 schematically shows a block diagram of an electronic device adapted to implement the image analysis method according to an embodiment of the invention.
As shown in fig. 6, an electronic device 600 according to an embodiment of the present invention includes a processor 601 which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. Processor 601 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 601 may also include onboard memory for caching purposes. Processor 601 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the present invention.
In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are stored. The processor 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. The processor 601 performs various operations of the method flow according to the embodiments of the present invention by executing programs in the ROM 602 and/or RAM 603. It is to be noted that the programs may also be stored in one or more memories other than the ROM 602 and RAM 603. The processor 601 may also perform various operations of method flows according to embodiments of the present invention by executing programs stored in the one or more memories.
According to an embodiment of the invention, the electronic device 600 may also include an input/output (I/O) interface 605, which is likewise connected to the bus 604. The electronic device 600 may also include one or more of the following components connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
The present invention also provides a computer-readable storage medium, which may be embodied in the device/apparatus/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the present invention.
According to embodiments of the present invention, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to an embodiment of the present invention, the computer-readable storage medium may include the above-described ROM 602 and/or RAM 603 and/or one or more memories other than the ROM 602 and RAM 603.
Embodiments of the invention also include a computer program product comprising a computer program that contains program code for performing the method illustrated in the flowchart. When the computer program product runs on a computer system, the program code causes the computer system to implement the image analysis method provided by the embodiments of the invention.
The computer program, when executed by the processor 601, performs the above-described functions defined in the system/apparatus of the embodiments of the invention. According to embodiments of the invention, the systems, devices, modules, units, and the like described above may be implemented by computer program modules.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, downloaded and installed through the communication section 609, and/or installed from the removable medium 611. The computer program containing the program code may be transmitted using any suitable network medium, including but not limited to wireless and wired media, or any suitable combination of the foregoing.
According to embodiments of the present invention, the program code of the computer program provided by the embodiments may be written in any combination of one or more programming languages; in particular, the computer program may be implemented using a high-level procedural and/or object-oriented programming language, and/or an assembly/machine language. Programming languages include, but are not limited to, Java, C++, Python, and C. The program code may execute entirely on the user's computing device, partly on the user's device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be appreciated by those skilled in the art that the features described in the various embodiments and/or the claims of the invention may be combined and/or sub-combined in various ways, even if such combinations or sub-combinations are not explicitly described in the invention. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present invention may be made without departing from the spirit and teachings of the invention. All such combinations and/or sub-combinations fall within the scope of the present invention.
The embodiments of the present invention have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the invention is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the invention, and these alternatives and modifications are intended to fall within the scope of the invention.

Claims (8)

1. An image analysis method, comprising:
acquiring a video sequence image of a target video, wherein the video sequence image comprises n frames of video images, and n is more than or equal to 1;
determining a background frame image corresponding to the video sequence image according to the n frames of video images;
calculating an ith difference image corresponding to the ith video image by using a background difference method according to the background frame image and the ith video image;
filtering the ith difference image to obtain an ith processed image;
marking a position area of the target animal in the ith processing image to obtain an ith mask image of the target animal, wherein i = 1;
performing an opening operation on the ith processing image to obtain an ith intermediate image, wherein i is more than or equal to 2 and less than or equal to n;
determining an overlapping area of the ith intermediate image and the (i-1) th mask image of the target animal to obtain an ith overlapping area;
calculating the maximum connected domain of the ith overlapping region on the ith difference image, wherein the maximum connected domain on the ith difference image is the ith mask image of the target animal;
determining the motion trail of the target animal according to the n mask images of the target animal;
and analyzing the behavior of the target animal according to the motion trail to obtain a behavior analysis result.
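As a non-limiting reading aid for claim 1, the following Python sketch (OpenCV and NumPy) traces the claimed steps on grayscale frames. The median-based background, median filter, Otsu threshold and structuring element size are illustrative choices that the claim leaves open, and the externally supplied first mask stands in for the marking of the position area at i = 1.

```python
import cv2
import numpy as np

def track(frames, first_mask):
    """Sketch of claim 1 on a list of grayscale uint8 frames.

    first_mask stands in for the manually marked 1st mask image
    (a 0/255 uint8 array); kernel sizes and the thresholding scheme
    are illustrative, not prescribed by the claim.
    """
    background = np.median(np.stack(frames), axis=0).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    masks = [first_mask]
    for frame in frames[1:]:
        diff = cv2.absdiff(frame, background)           # i-th difference image
        proc = cv2.medianBlur(diff, 5)                  # i-th processed image
        _, binary = cv2.threshold(proc, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        inter = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # i-th intermediate image
        overlap = cv2.bitwise_and(inter, masks[-1])     # i-th overlapping area
        # largest connected domain of the overlap on the (binarized) difference image
        _, labels = cv2.connectedComponents(binary)
        seeds = np.unique(labels[overlap > 0])
        seeds = seeds[seeds != 0]
        if seeds.size == 0:
            masks.append(masks[-1])    # no overlap; see the handling of claim 2
            continue
        best = max(seeds, key=lambda s: int((labels == s).sum()))
        masks.append(((labels == best) * 255).astype(np.uint8))
    return masks
```

Seeding the connected-component search with the overlapping area is what keeps the tracker locked onto the same animal from frame to frame, which is the role the overlapping area plays in the claim.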
2. The method of claim 1, further comprising:
before determining the overlapping area of the ith intermediate image and the (i-1)th mask image of the target animal, judging whether an overlapping area exists between the ith intermediate image and the (i-1)th mask image;
recording abnormal information under the condition that no overlapping region exists between the ith intermediate image and the (i-1) th mask image;
and under the condition that the recording frequency of the abnormal information exceeds a preset threshold value, initializing the current frame video image again.
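A non-limiting sketch of this anomaly handling follows; the preset threshold value of 5 is purely illustrative.

```python
import numpy as np

def check_overlap(intermediate, prev_mask, anomaly_count, threshold=5):
    """Record an anomaly when the i-th intermediate image does not
    overlap the (i-1)-th mask image; signal re-initialization once
    the count exceeds the (illustrative) preset threshold."""
    overlapped = np.any((intermediate > 0) & (prev_mask > 0))
    if overlapped:
        return 0, False                 # reset the count, no re-init needed
    anomaly_count += 1
    return anomaly_count, anomaly_count > threshold
```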
3. The method of claim 1, wherein said determining from the n-frame video images a background frame image corresponding to the video sequence image comprises:
obtaining a median of the n frames of video images along a time dimension to obtain a median image;
and determining the median image as the background frame image.
4. The method of claim 1, wherein said determining a motion trajectory of the target animal from the n mask images of the target animal comprises:
determining n central positions corresponding to the target animal according to the n mask images;
and determining the motion trail of the target animal according to the n central positions.
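As a non-limiting sketch, the central position can be taken as the centroid of each mask's foreground pixels:

```python
import numpy as np

def trajectory(masks):
    """Centroid of each mask's foreground pixels, strung into a track."""
    centers = []
    for mask in masks:
        ys, xs = np.nonzero(mask)
        if xs.size == 0:               # empty mask: repeat the last position
            centers.append(centers[-1] if centers else (np.nan, np.nan))
        else:
            centers.append((float(xs.mean()), float(ys.mean())))
    return np.asarray(centers)         # shape (n, 2), one point per frame
```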
5. The method of claim 1, wherein said analyzing the behavior of the target animal according to the motion trajectory comprises:
determining the total path of the motion track by using the Euclidean distance of the central position corresponding to the target animal in the adjacent mask image and the frame number of the video image;
and determining target parameters according to the total distance of the motion track, the video frame rate and the frame number of the video image.
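As a non-limiting illustration, the total path and an average speed derived from it could be computed as follows; the pixel-to-centimetre scale is assumed to come from the user-entered scale (for example, the arm length in the cross field experiment).

```python
import numpy as np

def total_path(centers, cm_per_px=1.0):
    """Sum of Euclidean distances between centres of adjacent masks."""
    steps = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    return float(steps.sum()) * cm_per_px

def average_speed(centers, fps, cm_per_px=1.0):
    """Total path divided by elapsed time (number of frames / frame rate)."""
    elapsed = (len(centers) - 1) / fps
    return total_path(centers, cm_per_px) / elapsed
```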
6. The method of claim 5, wherein the determining target parameters from the total path of the motion trajectory, a video frame rate, and a number of frames of the video image comprises:
determining a central movement distance, a first central area latency time, a rest time, an edge movement distance, an edge movement time, a first central area to edge area residence time ratio, and an edge area to first central area shuttle frequency of the target animal according to the total path of the motion track, the video frame rate, the frame number of the video images, a preset first central area and a preset edge area, wherein the central movement distance comprises the movement path of the target animal while its central position is in the first central area, the first central area latency time comprises the residence time of the central position of the target animal in the first central area, the rest time comprises the duration for which the instantaneous speed of the target animal is less than a preset value, the edge movement distance comprises the movement path of the target animal while its central position is in the edge area, the edge movement time comprises the residence time of the central position of the target animal in the edge area, the first central area to edge area residence time ratio comprises the ratio of the first central area latency time to the edge movement time, and the edge area to first central area shuttle frequency comprises the number of times the central position of the target animal enters or exits the first central area.
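A non-limiting sketch of these parameters follows; in_center is a hypothetical predicate testing membership of the preset first central area, and the rest-speed threshold is an illustrative value.

```python
import numpy as np

def open_field_parameters(centers, fps, in_center, rest_speed=1.0):
    """Sketch of claim 6. in_center(point) -> bool is assumed to test
    the preset first central area; rest_speed is an illustrative
    instantaneous-speed threshold (distance units per second)."""
    flags = np.array([bool(in_center(p)) for p in centers])
    steps = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    central_distance = float(steps[flags[1:]].sum())   # path while in the centre
    edge_distance = float(steps[~flags[1:]].sum())
    central_time = flags.sum() / fps                   # first central area latency
    edge_time = (~flags).sum() / fps                   # edge movement time
    rest_time = (steps * fps < rest_speed).sum() / fps
    # shuttle frequency: every transition into or out of the central area
    shuttles = int(np.count_nonzero(np.diff(flags.astype(int))))
    ratio = central_time / edge_time if edge_time > 0 else float("inf")
    return {
        "central_distance": central_distance, "edge_distance": edge_distance,
        "central_time": central_time, "edge_time": edge_time,
        "rest_time": rest_time, "residence_ratio": ratio, "shuttles": shuttles,
    }
```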
7. The method of claim 5, wherein the determining target parameters from the total path of the motion trajectory, a video frame rate, and a number of frames of the video image comprises:
determining the times of the target animal entering the arm opening area, the arm opening activity, the times of entering the arm closing area, the arm closing activity and the second central area activity according to the total path of the motion track, the video frame rate, the frame number of the video images and the preset second central area, the arm opening area and the arm closing area,
wherein the number of times of entering the open arm area comprises the number of times of entering the open arm area by the central position of the target animal, the open arm activity comprises the duration of entering the open arm area by the central position of the target animal, the number of times of entering the closed arm area comprises the number of times of entering the closed arm area by the target animal from the second central area, the closed arm activity comprises the duration of entering the closed arm area by the central position of the target animal, and the second central area activity comprises the activity path and the residence time of the central position of the target animal in the second central area.
8. An image analysis apparatus comprising:
the acquisition module is used for acquiring a video sequence image of a target video, wherein the video sequence image comprises n frames of video images, and n is more than or equal to 1;
a first determining module, configured to determine, according to the n frames of video images, a background frame image corresponding to the video sequence image;
the computing module is used for computing an ith difference image corresponding to the ith video image by utilizing a background difference method according to the background frame image and the ith video image;
the processing module is used for carrying out filtering processing on the ith difference image to obtain an ith processing image; marking a position area of the target animal in the ith processing image to obtain an ith mask image of the target animal, wherein i = 1; performing an opening operation on the ith processing image to obtain an ith intermediate image, wherein i is more than or equal to 2 and less than or equal to n; determining an overlapping area of the ith intermediate image and the (i-1) th mask image of the target animal to obtain an ith overlapping area; calculating the maximum connected domain of the ith overlapping region on the ith difference image, wherein the maximum connected domain on the ith difference image is the ith mask image of the target animal;
the second determination module is used for determining the motion trail of the target animal according to the n mask images of the target animal;
and the analysis module is used for analyzing the behavior of the target animal according to the motion trail to obtain a behavior analysis result.
CN202210442535.9A 2022-04-26 2022-04-26 Image analysis method and device Active CN114549371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210442535.9A CN114549371B (en) 2022-04-26 2022-04-26 Image analysis method and device

Publications (2)

Publication Number Publication Date
CN114549371A CN114549371A (en) 2022-05-27
CN114549371B true CN114549371B (en) 2022-09-09

Family

ID=81667139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210442535.9A Active CN114549371B (en) 2022-04-26 2022-04-26 Image analysis method and device

Country Status (1)

Country Link
CN (1) CN114549371B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115024244B (en) * 2022-06-17 2023-02-24 Qufu Normal University Black hamster sleep-wake detection system and method based on infrared open field and Python analysis and application
CN115103120A (en) * 2022-06-30 2022-09-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Shooting scene detection method and device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102222346A (en) * 2011-05-23 2011-10-19 Beijing Cloud Acceleration Information Technology Co., Ltd. Vehicle detecting and tracking method
CN104700430A (en) * 2014-10-05 2015-06-10 Anhui Polytechnic University Method for detecting movement of airborne displays
CN110599520A (en) * 2019-08-30 2019-12-20 Shenzhen Institutes of Advanced Technology Open field experiment data analysis method, system and terminal equipment
WO2021227704A1 (en) * 2020-05-11 2021-11-18 Tencent Technology (Shenzhen) Co., Ltd. Image recognition method, video playback method, related device, and medium
CN113793366A (en) * 2021-08-31 2021-12-14 Beijing Dajia Internet Information Technology Co., Ltd. Image processing method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Detection and extraction of moving targets in sequence images; Zou Ceqian et al.; Journal of Inner Mongolia Agricultural University (Natural Science Edition); 2010-04-15 (No. 02); full text *

Also Published As

Publication number Publication date
CN114549371A (en) 2022-05-27

Similar Documents

Publication Publication Date Title
JP7130368B2 (en) Information processing device and information processing system
CN114549371B (en) Image analysis method and device
Maddalena et al. Towards benchmarking scene background initialization
US6678413B1 (en) System and method for object identification and behavior characterization using video analysis
CN111539273B (en) Traffic video background modeling method and system
US20080137956A1 (en) Fast Human Pose Estimation Using Appearance And Motion Via Multi-Dimensional Boosting Regression
Datcu et al. Noncontact automatic heart rate analysis in visible spectrum by specific face regions
CN109886195B (en) Skin identification method based on near-infrared monochromatic gray-scale image of depth camera
CN113379789B (en) Moving target tracking method in complex environment
CN113689412A (en) Thyroid image processing method and device, electronic equipment and storage medium
CN110738149A (en) Target tracking method, terminal and storage medium
KR101438451B1 (en) Method of providing fast detection of moving objects from non-stationary camera video by dual-mode SGM, and computer-readable recording medium for the same
JP6893812B2 (en) Object detector
CN113420667B (en) Face living body detection method, device, equipment and medium
US20210287051A1 (en) Methods and systems for recognizing object using machine learning model
CN115331162A (en) Cross-scale infrared pedestrian detection method, system, medium, equipment and terminal
US20160364604A1 (en) Subject tracking apparatus, control method, image processing apparatus, and image pickup apparatus
JP6851246B2 (en) Object detector
Davies et al. Using CART to segment road images
CN117238039B (en) Multitasking human behavior analysis method and system based on top view angle
US11257238B2 (en) Unsupervised object sizing method for single camera viewing
Linares-Sánchez et al. Follow-me: A new start-and-stop method for visual animal tracking in biology research
JP4890495B2 (en) Gaze position estimation method, gaze position estimation apparatus, computer program, and recording medium
Cai et al. Driver Fatigue Detection System Based on DM3730
Yu et al. An enhancement algorithm for head characteristics of caged chickens detection based on cyclic consistent migration neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant