WO2020078114A1 - Animal delivery identification method, device, and equipment - Google Patents

Animal delivery identification method, device, and equipment

Info

Publication number
WO2020078114A1
Authority
WO
WIPO (PCT)
Prior art keywords
animal
image data
delivered
delivery
frame image
Prior art date
Application number
PCT/CN2019/103333
Other languages
English (en)
French (fr)
Inventor
陈奕名
Original Assignee
京东数字科技控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东数字科技控股有限公司
Publication of WO2020078114A1 publication Critical patent/WO2020078114A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • the embodiments of the present application relate to the field of computer vision technology, and in particular, to an animal delivery identification method, device, and equipment.
  • Embodiments of the present application provide an animal delivery identification method, device, and equipment, which are used to realize automatic identification of animal delivery and reduce labor costs.
  • an embodiment of the present application provides an identification method for animal delivery, including:
  • acquiring a preset number of consecutive frames of image data, where the image data is image data for monitoring the animal to be delivered;
  • locating position information of the animal to be delivered according to the preset number of consecutive frames of image data;
  • acquiring, according to the position information, image data of the delivery site area of the animal to be delivered in each of the preset number of consecutive frames of image data;
  • determining, according to the consecutive frame image data of the delivery site area of the animal to be delivered, whether the animal to be delivered is giving birth.
  • an animal delivery identification device including:
  • an acquisition module, configured to acquire a preset number of consecutive frames of image data, the image data being image data for monitoring the animal to be delivered;
  • a positioning module, configured to locate position information of the animal to be delivered according to the preset number of consecutive frames of image data;
  • a processing module, configured to acquire, according to the position information, image data of the delivery site area of the animal to be delivered in each of the preset number of consecutive frames of image data;
  • an identification module, configured to determine, according to the consecutive frame image data of the delivery site area of the animal to be delivered, whether the animal to be delivered is giving birth.
  • an electronic device including:
  • at least one processor and a memory;
  • the memory stores computer-executable instructions;
  • the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the animal delivery identification method according to any one of the first aspects.
  • an embodiment of the present application provides a computer-readable storage medium in which computer-executable instructions are stored; when the computer-executable instructions are executed by a processor, they are used to implement the animal delivery identification method according to any one of the first aspects.
  • In the animal delivery identification method, device, and equipment provided by the embodiments of the present application, a preset number of consecutive frames of image data is acquired, the consecutive frame image data of the delivery site area of the animal to be delivered is obtained from those frames, and whether the animal is giving birth is determined accordingly, thereby realizing automatic identification of animal delivery and effectively reducing labor costs. Using consecutive frame image data better reflects the delivery process and improves identification accuracy; using only the delivery site area for identification not only reduces the amount of data processing and increases identification speed, but also avoids interference from other body parts, further improving the accuracy of animal delivery identification.
  • FIG. 1 is a schematic diagram of an application scenario according to an embodiment of this application.
  • FIG. 2 is a flowchart of an embodiment of an animal delivery identification method provided by the present application.
  • FIG. 3 is a schematic diagram of a segmented nursery room according to an embodiment of the application.
  • FIG. 4 is a schematic diagram of applying a mask to an image according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of position information of an animal to be delivered in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a cropped delivery site area of an animal to be delivered according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a sample to be identified in an embodiment of the animal delivery identification method provided by this application.
  • FIGS. 8A-8C are schematic diagrams of training samples in an embodiment of the animal delivery identification method provided by the present application.
  • FIG. 9 is a schematic structural diagram of an embodiment of an animal delivery identification device provided by the present application.
  • FIG. 10 is a schematic structural diagram of an embodiment of an electronic device provided by this application.
  • first and second in this application are only for identification purposes, and cannot be understood as indicating or implying a sequence relationship, relative importance, or implicitly indicating the number of technical features indicated.
  • Multiple means two or more.
  • "And/or" describes an association between related objects and indicates that three relationships may exist; for example, "A and/or B" can mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • The character "/" generally indicates an "or" relationship between the associated objects.
  • the method for identifying the delivery of animals provided in the embodiments of the present application can be used to identify the delivery of various animals.
  • sow delivery is used as an example for illustration, which does not mean that the present application is limited to this.
  • FIG. 1 is a schematic diagram of an application scenario according to an embodiment of the present application.
  • each camera simultaneously monitors the production of two sows to be delivered.
  • Each sow to be delivered is confined to a fixed crate in the farrowing pen, and the camera monitors the two sows and the fence dividing them from directly above.
  • the white frame in the middle of the picture is the nursery room. In this scene, the lid of the nursery room is red.
  • FIG. 2 is a flowchart of an embodiment of an identification method for animal delivery provided by the present application. As shown in FIG. 2, the method of this embodiment may include:
  • the preset number of continuous frame image data in this embodiment may be obtained from video data for real-time monitoring of the animal to be produced.
  • a camera can be added to the field to be produced for monitoring to obtain real-time monitoring video data of the animal to be produced.
  • the specific value of the preset number can be set according to actual needs. For example, it can be determined according to the recognition accuracy requirements and the frame rate of the image data to be monitored for the animals to be born. For example, if it is desired to perform identification once per second, and the frame rate of the image data of the animal to be monitored is 16 frames per second, the specific value of the preset number may be set to 16. When identifying at the current time, the current frame and 15 frames before the current frame can be selected to form 16 frames of continuous frame image data.
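  • The frame-selection scheme above can be sketched as follows; this is an illustrative sketch only, and the buffer layout and names (`select_window`, `frames`) are assumptions, not part of the application:

```python
def select_window(frames, current_idx, preset_number=16):
    """Return the preset_number consecutive frames ending at current_idx.

    With a 16 fps stream and preset_number=16, calling this once per second
    yields one 16-frame window per identification pass, as in the example.
    """
    start = current_idx - preset_number + 1
    if start < 0:
        raise ValueError("not enough frames buffered yet")
    return frames[start:current_idx + 1]

# Identify at frame 31 of a 32-frame buffer: the window is frames 16..31,
# i.e. the current frame plus the 15 frames before it.
frames = list(range(32))
window = select_window(frames, 31)
assert len(window) == 16 and window[0] == 16 and window[-1] == 31
```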
  • the animal to be delivered can be detected from the image data to locate the position information of the animal to be delivered.
  • The target detection network may be pre-trained according to the category of the animal to be delivered and used to detect the target animal from the image data. If sow delivery is to be identified, a pre-trained sow detection network can be used to determine the position information of the sow to be delivered in the image data; if cow delivery is to be identified, a pre-trained cow detection network can be used to locate the cow's position information in the image data.
  • the position information in this embodiment is used to indicate the specific position of the animal to be delivered in continuous frame image data. For example, if it is represented by a rectangular frame, the coordinate values of the diagonal points of the rectangular frame can be used. For example, the coordinates of the upper left corner and the lower right corner can be used to indicate the position of the animal to be delivered in the image. Each coordinate value is in pixels.
  • The position of the animal to be delivered changes little before birth, so, to increase recognition speed, its position information can be located once and reused across the consecutive frames.
  • S203 Acquire the image data of the delivery site area of the animal to be delivered in each frame of the preset number of consecutive frame image data according to the position information.
  • the region of the delivery site of the animal to be delivered can be segmented from the image data according to the location information.
  • the delivery area of the animal to be delivered needs to be determined according to the category of the animal to be delivered. Taking sow delivery as an example, the delivery area is the hip area.
  • Specifically, the position information of the delivery site area of the animal to be delivered can be located in any one frame of the preset number of consecutive frames according to the animal's position information; then, according to the position information of the delivery site area, the image data of the delivery site area is acquired in each frame of the preset number of consecutive frames.
  • S204 Determine, according to the continuous frame image data of the delivery site area of the animal to be delivered, whether the animal to be delivered is delivering.
  • Animal delivery is a process.
  • the continuous frame image data of the delivery part area of the animal to be delivered can better reflect the delivery process and help to improve the accuracy of animal delivery identification.
  • Moreover, the image data of the delivery site area is used instead of the global image data of the animal to be delivered, which not only reduces the data-processing workload and improves recognition speed, but also avoids interference from other body parts with delivery identification, improving accuracy.
  • In this embodiment, whether the animal to be delivered is giving birth is determined from the consecutive frame image data of its delivery site area, realizing automatic identification of animal delivery and effectively reducing labor costs.
  • By using consecutive frame image data, the delivery process is better reflected and identification accuracy is improved; by using only the delivery site area for identification, the amount of data processing is reduced and identification speed is increased, while interference from other parts is avoided, further improving the accuracy of animal delivery identification.
  • an implementation manner of positioning the position information of the animal to be delivered may be:
  • a mask is added to the preset number of consecutive frame image data according to the position of the nursery room.
  • The centroid of the nursery room can be determined based on its color characteristics; the position and width of the mask can be determined based on the distances from the centroid to the two sides of the image data; the mask is then applied to each frame of the preset number of consecutive frames.
  • Each camera is used to monitor two sows due to give birth. Owing to factors in the installation and use of the camera, adjacent pens may appear in the camera's surveillance area; for example, a sow from an adjacent pen is visible at the far right of FIG. 1.
  • the interference can be eliminated by covering the mask to improve the accuracy of identification.
  • the nursery room usually has a specific color or a specific shape, and is located in the center of the target monitoring area. Therefore, the position and width of the mask to be covered can be determined according to the position of the nursery room.
  • FIG. 3 is a schematic diagram of a nursery room divided according to an embodiment of the present application. As shown in Figure 3, the white part of the figure is the divided nursery room area.
  • the area of the nursery room can also be divided according to the shape of the nursery room.
  • The position of the centroid of the nursery room area segmented in FIG. 3 can be determined, and a vertical line through the abscissa of the centroid then serves as the ideal dividing line between the two monitored pens.
  • the position and width of the mask are determined by judging the distance from the dividing line to the left and right ends of FIG. 3, that is, the distances from the centroid to the two sides of the image data. For example, if the dividing line is 500 pixels from the left and 600 pixels from the right, a mask is added to the area 100 pixels wide on the far right. Further, in order to remove as much interference as possible, the width of the mask can also be increased.
  • FIG. 4 is a schematic diagram of an image stamping mask according to an embodiment of the present application. As shown in Figure 4, by covering the right area with a mask, the images of the sows waiting to be delivered in the adjacent fields are eliminated, and interference is avoided.
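  • The mask computation described above (centroid from color features, dividing line at the centroid's abscissa, mask over the excess width on the wider side) might be sketched as follows; the function name and the boolean nursery-pixel input are assumptions for illustration:

```python
import numpy as np

def mask_adjacent_pen(image, nursery_pixels):
    """Black out the excess strip on the wider side of the dividing line.

    nursery_pixels is a boolean array marking nursery-room pixels (e.g. from
    thresholding the red lid colour). The abscissa of their centroid is the
    ideal dividing line; if it is 500 px from the left and 600 px from the
    right, the rightmost 100 px strip is masked, as in the example above.
    """
    ys, xs = np.nonzero(nursery_pixels)
    cx = int(xs.mean())                 # abscissa of the centroid
    h, w = image.shape[:2]
    left, right = cx, w - cx            # distances to the two sides
    out = image.copy()
    if left < right:
        out[:, 2 * left:] = 0           # mask the excess width on the right
    elif right < left:
        out[:, :w - 2 * right] = 0      # mask the excess width on the left
    return out

img = np.ones((10, 1100, 3), dtype=np.uint8)
nursery = np.zeros((10, 1100), dtype=bool)
nursery[:, 500] = True                  # centroid at x = 500
masked = mask_adjacent_pen(img, nursery)
assert masked[:, 1000:].sum() == 0      # rightmost 100 px masked
assert masked[:, :1000].all()           # the rest is untouched
```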
  • a pre-trained animal detection network is used to locate the position information of the animal to be born according to the image data after being masked.
  • The masked image data is used as the input of the pre-trained animal detection network to locate the position information of the animal to be delivered in the image, avoiding interference from adjacent pens and improving the accuracy of the position information.
  • A pre-trained animal detection network is used, for example a YOLOv3-tiny network with pre-trained weights (to speed up subsequent training).
  • YOLOv3-tiny is a simplified version of the YOLOv3 network.
  • the reorg layer and the route layer are removed.
  • the main functions are not reduced.
  • the model size and training cost are greatly reduced, which can improve the efficiency of animal detection.
  • Because the YOLO network trains each set of data at multiple scales, it has better scale invariance and thus a higher recall rate.
  • the pre-trained YOLOv3-tiny network is used to quickly and accurately determine the location information of the animal to be delivered.
  • The output vector of the YOLOv3-tiny network includes the target category information and the coordinate information of the target frame. If the network is used for identification of sow farrowing, the target-frame coordinates corresponding to the "pig" category in the output vector are used as the position information of the sow to be delivered.
  • FIG. 5 is a schematic diagram of location information of an animal to be delivered in an embodiment of the present application. As shown in Fig. 5, the white rectangular frame shows the specific positions of the two sows in Fig. 5 to be delivered.
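  • Extracting the position information from the detector output might look like the following sketch; the tuple format for detections is an assumption, since the actual YOLOv3-tiny output layout depends on the framework used:

```python
def locate_animals(detections, target_class="pig", min_conf=0.5):
    """Keep the target-frame coordinates of the target category only.

    Each detection is assumed to be (class_name, confidence, (x1, y1, x2, y2)),
    with (x1, y1) the top-left and (x2, y2) the bottom-right corner in pixels,
    i.e. the diagonal-point representation described above.
    """
    return [box for cls, conf, box in detections
            if cls == target_class and conf >= min_conf]

detections = [("pig", 0.92, (40, 60, 300, 420)),
              ("pig", 0.88, (520, 55, 790, 430)),   # two sows per camera
              ("feeder", 0.70, (400, 0, 450, 80))]  # other category ignored
boxes = locate_animals(detections)
assert boxes == [(40, 60, 300, 420), (520, 55, 790, 430)]
```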
  • an implementation manner of acquiring the image data of the delivery site area of the animal to be delivered in each frame of the preset number of consecutive frame image data may be:
  • according to the position information of the animal to be delivered, locate the position information of its delivery site area;
  • according to the position information of the delivery site area, intercept image data of a preset size in each frame of the preset number of consecutive frames of image data.
  • That is, the position information of the delivery site area can be located according to the position information of the animal to be delivered; for a sow, the delivery site area is the area of its buttocks.
  • After detection, the delivery site area of the sow can be located in the lower portion of FIG. 5. If the size of the delivery site area is preset to 175×350, then, taking the coordinates (x, y) of the lower-left point of the white rectangular frame of the sow's position information, the region (x-25:x+150, y:y+350) is defined as the delivery site area.
  • FIG. 6 is a schematic diagram of a delivery site region of an animal to be delivered taken according to an embodiment of the present application.
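  • The cropping rule above can be written out as a short sketch; the axis convention (rows indexed by x here, to mirror the (x-25:x+150, y:y+350) notation in the text) is an assumption about how the image is indexed:

```python
import numpy as np

def crop_delivery_site(frame, x, y):
    """Crop the 175x350 delivery site area anchored at the lower-left point
    (x, y) of the animal's bounding box, per the rule (x-25:x+150, y:y+350)."""
    return frame[x - 25:x + 150, y:y + 350]

frame = np.zeros((600, 800, 3), dtype=np.uint8)
patch = crop_delivery_site(frame, 100, 200)
assert patch.shape == (175, 350, 3)   # the preset 175x350 region, 3 channels
```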
  • an implementation manner for determining whether the animal to be delivered is being delivered may be:
  • the continuous frame image data of the delivery site area of the animal to be delivered is used as a sample to be identified, and the category of the sample to be identified is determined according to the pre-trained behavior recognition network.
  • the behavior recognition network may be trained based on a 3D convolutional neural network 3DCNN.
  • For example, if the specific value of the preset number is set to 16, the size of the delivery site area is set to 175×350, and the image data are RGB three-channel color pictures, then the 16 consecutive frames of image data of the delivery site area can be superimposed in chronological order to form an input tensor of shape (16, 175, 350, 3) as the sample to be identified.
  • FIG. 7 is a schematic diagram of a sample to be identified in an embodiment of the animal delivery identification method provided by the present application.
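  • Assembling the sample to be identified from the 16 cropped frames might be sketched as below (a sketch only; real frames would come from the cropping step rather than being zero arrays):

```python
import numpy as np

# 16 consecutive RGB crops of the delivery site area, each 175x350x3.
crops = [np.zeros((175, 350, 3), dtype=np.uint8) for _ in range(16)]

# Superimpose them in chronological order along a new leading axis.
sample = np.stack(crops, axis=0)
assert sample.shape == (16, 175, 350, 3)   # the input tensor described above
```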
  • the behavior recognition network in this embodiment can be obtained based on 3DCNN training.
  • 3DCNN incorporates time-dimension information, which allows consecutive frame image data to be identified with higher accuracy; the added time dimension makes it better suited to recognizing the delivery process.
  • Both the convolutional layer and the pooling layer in the 3DCNN in this embodiment need to use three-dimensional calculation, that is, three-dimensional convolution and three-dimensional pooling are used.
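  • The distinguishing point, that both convolution and pooling slide along the time axis as well as the two spatial axes, can be illustrated with a naive single-channel 3D convolution (an illustrative sketch, not the network used in the application):

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 3-D convolution (cross-correlation), 'valid' padding.

    The kernel moves along time, height, and width, so the time axis of the
    output shrinks just as the spatial axes do -- unlike a 2-D convolution
    applied frame by frame.
    """
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = (volume[i:i + t, j:j + h, k:k + w] * kernel).sum()
    return out

volume = np.ones((16, 8, 8))       # 16 frames of 8x8 single-channel data
kernel = np.ones((3, 3, 3))        # a 3x3x3 kernel spans 3 frames at a time
features = conv3d_valid(volume, kernel)
assert features.shape == (14, 6, 6)    # time axis shrinks from 16 to 14
assert features[0, 0, 0] == 27.0       # 3*3*3 ones summed
```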
  • the behavior recognition network needs to be trained using training data, that is, the behavior recognition network needs to be trained using labeled training samples.
  • the training sample and the sample to be recognized need to use the same data format, that is, if the sample to be recognized is a tensor of (16,175,350,3), the training sample must also use the tensor of (16,175,350,3).
  • FIGS. 8A-8C are schematic diagrams of training samples in an embodiment of the animal delivery identification method provided by the present application. For ease of presentation, single-frame images are used for display in FIGS. 8A-8C.
  • the training sample is a preset number of continuous-frame image data, such as continuous 16-frame image data.
  • the training samples are divided into three types: undelivered (FIG. 8A), being delivered (FIG. 8B), and completed delivery (FIG. 8C).
  • Sow delivery is a process: a sow goes through the three stages of not delivered, delivering, and delivery completed, as shown in FIGS. 8A-8C. Particular care must be taken not to mistake the delivering stage of FIG. 8B for the completed delivery of FIG. 8C, because after delivery the newborn piglets move around in the detection area, which may interfere with the identification of delivery.
  • Multiple types of samples are used for training: the detection scenes are divided into the three categories of not delivered, delivering, and delivery completed, which enables delivery to be detected more effectively across multiple scenarios.
  • Dividing the training samples into these three categories effectively reduces the interference caused by piglet activity, improves recognition accuracy, and reduces the probability of false alarms.
  • This embodiment may further include: if it is determined that the animal to be delivered is giving birth, issuing an early warning.
  • the method provided in this embodiment can effectively reduce the economic loss of the farm and improve the economic benefit.
  • Various methods can be used for the early warning. For example, when it is determined that an animal to be delivered is giving birth, an early-warning voice message can be played or an early-warning indicator flashed in the monitoring room. Further, the early-warning information can be sent to the relevant staff through an instant-messaging tool, where the early-warning information can include the identification information of the animal to be delivered and its current status.
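  • The warning step might be dispatched as in the sketch below; the category names and the notify callback are hypothetical stand-ins for whichever channel (indicator, voice message, instant-messaging push) is actually used:

```python
def maybe_warn(animal_id, category, notify):
    """Trigger a warning only when the recognized category is 'delivering'.

    notify is a stand-in for the real alert channel; here it just receives
    the early-warning information (animal identification and current status).
    """
    if category == "delivering":
        notify({"animal": animal_id, "status": category})
        return True
    return False

sent = []
assert maybe_warn("sow-07", "delivering", sent.append) is True
assert maybe_warn("sow-07", "not_delivered", sent.append) is False
assert sent == [{"animal": "sow-07", "status": "delivering"}]
```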
  • The animal delivery identification method realizes automatic identification of animal delivery through data analysis of the monitoring image data of the animals to be delivered, replacing the original manual-watch method and greatly saving operation and labor costs.
  • The delivery of animals is recognized from the consecutive frame image data of the delivery site area, giving high recognition accuracy; training on data of multiple types and multiple scenes effectively reduces the impact of various interference factors and lowers the system's false-alarm probability.
  • The animal delivery identification device 90 provided in this embodiment may include: an acquisition module 901, a positioning module 902, a processing module 903, and an identification module 904.
  • the obtaining module 901 is used to obtain a preset number of consecutive frame image data, and the image data is image data for monitoring the animal to be delivered.
  • the positioning module 902 locates the position information of the animal to be born according to a preset number of consecutive frame image data.
  • the processing module 903 obtains the image data of the delivery site area of the animal to be delivered in each frame of the preset number of consecutive frame image data according to the position information.
  • the identification module 904 determines whether the animal to be delivered is delivering according to the continuous frame image data of the delivery site area of the animal to be delivered.
  • the device of this embodiment may be used to execute the technical solution of the method embodiment shown in FIG. 2, and its implementation principles and technical effects are similar, and are not described here again.
  • the positioning module 902 may be specifically used to: add a mask to a preset number of continuous frame image data according to the position of the nursery room; use a pre-trained animal detection network, based on the image data after the mask is added Locate the location information of the animal to be delivered.
  • The positioning module 902 may be specifically used to: determine the centroid of the nursery room according to its color characteristics; determine the position and width of the mask according to the distances from the centroid to the two sides of the image data; and apply the mask to each frame of the consecutive frame image data.
  • the processing module 903 may be specifically used to: locate the position information of the delivery site area of the animal to be delivered according to the position information; according to the position information of the delivery site area of the animal to be delivered, each of the preset number of consecutive frame image data Image data of a preset size is intercepted in one frame.
  • the recognition module 904 may be specifically used to: use continuous frame image data of the delivery site area of the animal to be delivered as a sample to be recognized, and determine the category of the sample to be recognized according to a pre-trained behavior recognition network.
  • The categories include: not delivered, delivering, and delivery completed.
  • the behavior recognition network is trained based on a three-dimensional convolutional neural network.
  • If it is determined that the animal to be delivered is giving birth, an early warning is performed.
  • As shown in FIG. 10, an embodiment of the present application further provides an electronic device 10.
  • the electronic device provided in this embodiment includes but is not limited to a computer, a single server, a server group composed of multiple servers, or a cloud based on cloud computing composed of a large number of computers or servers.
  • Cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
  • the electronic device 10 may include:
  • at least one processor 102 and a memory 101;
  • the memory 101 stores computer-executable instructions;
  • the at least one processor 102 executes the computer-executable instructions stored in the memory 101, so that the at least one processor 102 performs the animal delivery identification method described above.
  • For the specific implementation process of the processor 102, reference may be made to the method embodiments of the animal delivery identification method described above; the implementation principles and technical effects are similar and are not repeated here. The processor 102 and the memory 101 may be connected through a bus 103.
  • Embodiments of the present application also provide a computer-readable storage medium in which computer-executable instructions are stored; when the computer-executable instructions are executed by a processor, they are used to implement the animal delivery identification method of any of the above embodiments.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • The division of the modules is only a division of logical functions; in actual implementation there may be other divisions. For example, multiple modules may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or modules, and may be in electrical, mechanical, or other forms.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • the functional modules in the embodiments of the present application may be integrated into one processing unit, or each module may exist alone physically, or two or more modules may be integrated into one unit.
  • the unit formed by the above modules can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the above-mentioned integrated modules implemented in the form of software function modules may be stored in a computer-readable storage medium.
  • The above software functional modules are stored in a storage medium and include several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform part of the steps of the methods described in the embodiments of this application.
  • The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in conjunction with the invention may be directly embodied and completed by a hardware processor, or may be implemented and completed by a combination of hardware and software modules in the processor.
  • The memory may include high-speed RAM, and may also include non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, or an optical disk.
  • The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, etc.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the bus in the drawings of this application does not limit to only one bus or one type of bus.
  • The above storage medium may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk, or any other available medium that can be accessed by a general-purpose or special-purpose computer.
  • an exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be an integral part of the processor.
  • the processor and the storage medium may be located in an application-specific integrated circuit (ASIC).
  • the processor and the storage medium may also exist as discrete components in the terminal or server.


Abstract

Embodiments of the present application provide a method, an apparatus and a device for recognizing animal delivery. The method includes: acquiring a preset number of consecutive frames of image data, the image data being monitoring image data of an animal about to give birth; locating position information of the animal according to the preset number of consecutive frames of image data; according to the position information, obtaining image data of the animal's delivery-site region from each of the preset number of consecutive frames; and determining, according to the consecutive frames of image data of the animal's delivery-site region, whether the animal is giving birth. The method provided by the embodiments of the present application realizes automatic recognition of animal delivery and reduces labor costs.

Description

Method, apparatus and device for recognizing animal delivery
This application claims priority to Chinese patent application No. 201811200395.4, entitled "Method, apparatus and device for recognizing animal delivery" and filed with the Chinese Patent Office on October 16, 2018, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present application relate to the field of computer vision, and in particular to a method, an apparatus and a device for recognizing animal delivery.
Background
With the continuous development of modern science and technology, China's livestock industry is gradually moving toward large-scale, automated production. In the breeding of mammals such as pigs, horses, cattle and sheep, animal delivery is of great significance to the development of the industry.
Taking pig farming, which has a great influence on people's living standards, as an example: the number of piglets a sow delivers is the most upstream link in pig production and a key determinant of a farm's profit. If a sow's delivery is not detected and handled in time, the sow may crush the piglets, or the piglets may die of cold, causing irrecoverable losses. At present, the only way to avoid this is to have staff keep watch day and night, which consumes considerable labor.
Summary
Embodiments of the present application provide a method, an apparatus and a device for recognizing animal delivery, so as to realize automatic recognition of animal delivery and reduce labor costs.
In a first aspect, an embodiment of the present application provides a method for recognizing animal delivery, including:
acquiring a preset number of consecutive frames of image data, the image data being monitoring image data of an animal about to give birth;
locating position information of the animal according to the preset number of consecutive frames of image data;
according to the position information, obtaining image data of the animal's delivery-site region from each of the preset number of consecutive frames;
determining, according to the consecutive frames of image data of the animal's delivery-site region, whether the animal is giving birth.
In a second aspect, an embodiment of the present application provides an apparatus for recognizing animal delivery, including:
an acquisition module, configured to acquire a preset number of consecutive frames of image data, the image data being monitoring image data of an animal about to give birth;
a locating module, configured to locate position information of the animal according to the preset number of consecutive frames of image data;
a processing module, configured to obtain, according to the position information, image data of the animal's delivery-site region from each of the preset number of consecutive frames;
a recognition module, configured to determine, according to the consecutive frames of image data of the animal's delivery-site region, whether the animal is giving birth.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor and a memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the method for recognizing animal delivery according to any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the method for recognizing animal delivery according to any implementation of the first aspect.
With the method, apparatus and device for recognizing animal delivery provided by the embodiments of the present application, a preset number of consecutive frames of image data are acquired, consecutive frames of the animal's delivery-site region are obtained from them, and whether the animal is giving birth is determined, realizing automatic recognition of animal delivery and effectively reducing labor costs. Using consecutive frames better captures the delivery process and improves recognition accuracy; recognizing only the delivery-site region not only reduces the amount of data to process and speeds up recognition, but also avoids interference from other body parts, further improving accuracy.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain its principles.
FIG. 1 is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 2 is a flowchart of an embodiment of the method for recognizing animal delivery provided by the present application;
FIG. 3 is a schematic diagram of a piglet nursery segmented out according to an embodiment of the present application;
FIG. 4 is a schematic diagram of applying a mask to an image according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the position information of an animal about to give birth according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a cropped delivery-site region according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a sample to be recognized in an embodiment of the method for recognizing animal delivery provided by the present application;
FIGS. 8A-8C are schematic diagrams of training samples in an embodiment of the method for recognizing animal delivery provided by the present application;
FIG. 9 is a schematic structural diagram of an embodiment of the apparatus for recognizing animal delivery provided by the present application;
FIG. 10 is a schematic structural diagram of an embodiment of the electronic device provided by the present application.
The above drawings show specific embodiments of the present application, which are described in more detail below. These drawings and the written description are not intended to limit the scope of the application's concepts in any way, but to illustrate them to those skilled in the art by reference to specific embodiments.
Detailed description
Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the application as detailed in the appended claims.
The terms "comprise" and "have" and any variations thereof in the specification and claims are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units; in some examples it further includes unlisted steps or units, or steps or units inherent to the process, method, product or device.
"First" and "second" in this application are used only for identification and should not be understood as indicating or implying order, relative importance, or the number of the indicated technical features. "A plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects.
References throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present application; such phrases appearing in various places throughout the specification do not necessarily refer to the same embodiment. It should be noted that, where no conflict arises, the embodiments and the features in the embodiments may be combined with each other.
It should be noted that the recognition method provided by the embodiments of the present application can be used to recognize delivery in a variety of animals; sow farrowing is used as an example in this application, which does not mean the application is limited thereto.
FIG. 1 is a schematic diagram of an application scenario according to an embodiment of the present application. As shown in FIG. 1, in this scenario each camera simultaneously monitors the delivery of two sows. Each sow is confined in a fixed crate in its farrowing pen, and the camera looks straight down at the partition between the two sows. The white box in the middle of the picture is the piglet nursery; in this scenario its cover is red.
FIG. 2 is a flowchart of an embodiment of the method for recognizing animal delivery provided by the present application. As shown in FIG. 2, the method of this embodiment may include:
S201: acquire a preset number of consecutive frames of image data, the image data being monitoring image data of an animal about to give birth.
In this embodiment, the preset number of consecutive frames can be obtained from real-time monitoring video of the animal. In a modern farm, for example, cameras can be installed at the farrowing pens to obtain real-time monitoring video.
The specific value of the preset number can be set according to actual needs, for example according to the required recognition precision and the frame rate of the monitoring video. For instance, if one recognition per second is desired and the frame rate is 16 frames per second, the preset number can be set to 16: at the current moment, the current frame and the 15 preceding frames form 16 consecutive frames of image data.
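The frame-windowing step just described can be sketched as a rolling buffer that always holds the latest 16 frames: every new frame is pushed in, and once the buffer is full, the current frame plus the 15 preceding ones form a clip. A minimal sketch, assuming the 16-frame window and 175×350 RGB frames from the example above (the helper names are illustrative, not from the patent):

```python
from collections import deque

import numpy as np

WINDOW = 16  # preset number of consecutive frames (one second at 16 fps)

def make_clip_buffer():
    """A fixed-size buffer that always keeps the latest WINDOW frames."""
    return deque(maxlen=WINDOW)

def push_frame(buf, frame):
    """Append a frame; return the 16-frame clip once full, else None."""
    buf.append(frame)
    if len(buf) == WINDOW:
        return list(buf)  # current frame plus the 15 preceding frames
    return None

# Example: feed 20 synthetic frames; a full clip is available from frame 16 on.
buf = make_clip_buffer()
clip = None
for i in range(20):
    clip = push_frame(buf, np.full((175, 350, 3), i, dtype=np.uint8))
assert clip is not None and len(clip) == WINDOW
```

The `deque(maxlen=...)` automatically discards the oldest frame, so the clip always ends at the newest frame and starts 15 frames earlier.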
S202: locate the position information of the animal according to the preset number of consecutive frames of image data.
In this embodiment, after the preset number of consecutive frames is acquired, the animal can be detected in the image data to locate its position information. For example, an object-detection network can be pre-trained according to the animal category and used to detect the target animal in the image data. For recognizing sow farrowing, a pre-trained sow-detection network determines the sow's position in the image data; for recognizing cow delivery, a pre-trained cow-detection network locates the cow's position in the image data.
The position information in this embodiment indicates where the animal is located in the consecutive frames. For example, if represented by a rectangular box, it can be expressed by the coordinates of the box's diagonal corners, e.g. the top-left and bottom-right coordinates, each coordinate value being in pixels.
It will be understood that an animal about to give birth moves only slightly; this can be exploited to improve recognition speed, for example by locating the position in one frame and reusing it across the other frames, as described below.
S203: according to the position information, obtain image data of the animal's delivery-site region from each of the preset number of consecutive frames.
In this embodiment, after the animal's position information is determined, the delivery-site region can be segmented from the image data according to that position information. The delivery-site region depends on the animal category; taking sow farrowing as an example, the delivery-site region is the rump region.
In one example, to improve recognition speed, the position of the delivery-site region can be located in any one of the preset number of consecutive frames according to the position information, and the delivery-site region image data is then extracted from every frame according to that located position.
S204: determine, according to the consecutive frames of image data of the animal's delivery-site region, whether the animal is giving birth.
Animal delivery is a process; using consecutive frames of the delivery-site region in this embodiment better captures this process and helps improve recognition accuracy.
This embodiment uses image data of the delivery-site region rather than the animal's full image data, which not only reduces the data-processing workload and speeds up recognition, but also avoids interference from other body parts, improving accuracy.
With the recognition method provided by this embodiment, a preset number of consecutive frames are acquired, consecutive frames of the animal's delivery-site region are obtained from them, and whether the animal is giving birth is determined, realizing automatic recognition of animal delivery and effectively reducing labor costs. Consecutive frames better capture the delivery process and improve accuracy; recognizing only the delivery-site region reduces the amount of data, speeds up recognition, and avoids interference from other body parts, further improving accuracy.
In some embodiments, locating the animal's position information from the preset number of consecutive frames of image data may be implemented as follows.
A mask is applied to the preset number of consecutive frames of image data according to the position of the piglet nursery.
In one example, the centroid of the nursery can be determined from its color features; the position and width of the mask are determined from the distances between the centroid and the two sides of the image data; and the mask is applied to each of the preset number of consecutive frames.
Taking the scenario of FIG. 1 as an example, each camera monitors two sows. Owing to factors in camera installation and use, a sow in a neighboring farrowing pen may appear in the monitored area, as on the far right of FIG. 1. To avoid interference from neighboring pens, a mask can be applied to eliminate it and improve recognition accuracy.
The nursery usually has a specific color or shape and sits in the center of the target monitored area, so the position and width of the required mask can be determined from the nursery's position.
In the scenario of FIG. 1, the nursery is red, so the Cr channel of the YCbCr color space, i.e. the red-difference chroma, can be used for discrimination. The threshold can be set to 170, for example: in the Cr channel of the image in FIG. 1, pixels whose value exceeds 170 are segmented as the nursery region, as shown in FIG. 3. FIG. 3 is a schematic diagram of the segmented nursery according to an embodiment of the present application; the white part is the segmented nursery region. If the nursery were blue, the Cb channel, i.e. the blue-difference chroma, could be used; for other colors, a combination of the Cr and Cb channels can be used. In one example, the nursery region can also be segmented by its shape.
The centroid of the nursery region segmented in FIG. 3 is then determined, and a vertical line through the centroid's horizontal coordinate serves as the ideal dividing line between the two monitored farrowing pens. The position and width of the mask are determined from the distances between this dividing line and the left and right edges of FIG. 3, i.e. the distances between the centroid and the two sides of the image data. For example, if the dividing line is 500 pixels from the left edge and 600 pixels from the right edge, a mask is applied to the rightmost 100-pixel-wide strip. Further, to remove as much extraneous interference as possible, the mask width can be increased, e.g. to 1.1 times the difference between the two distances, i.e. 110 pixels wide. The masked image is shown in FIG. 4. FIG. 4 is a schematic diagram of applying a mask to an image according to an embodiment of the present application: masking the right-hand strip removes the image of the sow in the neighboring pen and eliminates the interference.
A pre-trained animal-detection network is then used to locate the animal's position information from the masked image data.
In this embodiment, the masked image data is used as the input of the pre-trained animal-detection network to locate the animal in the image; avoiding interference from neighboring pens improves the accuracy of the located position information.
The pre-trained animal-detection network of this embodiment may be, for example, a YOLOv3-tiny network with pre-trained weights (for faster subsequent training). YOLOv3-tiny is a simplified version of the YOLOv3 network that removes the reorg and route layers; its main functionality is preserved while model size and training cost are much lower, which improves animal-detection efficiency. Moreover, because the YOLO network trains each batch of data at different scales, it has good scale invariance, i.e. a high recall rate. In summary, the pre-trained YOLOv3-tiny network used in this embodiment can determine the animal's position information quickly and accurately. The output vectors of the YOLOv3-tiny network include object-class information and bounding-box coordinate information. For recognizing sow farrowing, the bounding-box coordinates of the output vectors whose object class is "pig" are taken as the sow's position information.
With the masked image data of FIG. 4 as input to the pre-trained YOLOv3-tiny network, the determined sow positions are shown in FIG. 5. FIG. 5 is a schematic diagram of the position information of an animal about to give birth according to an embodiment of the present application; the white rectangles mark the specific positions of the two sows in FIG. 5.
It should be noted that although this embodiment is described with a single frame as an example, it will be understood that the same or similar operations can be performed on each of the preset number of consecutive frames.
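Selecting the sow boxes from the detector output described above can be sketched as a simple filter over (class, confidence, box) tuples. The tuple layout, the class name `'pig'`, and the confidence threshold are illustrative assumptions — the patent does not specify the trained network's exact output format:

```python
def sow_boxes(detections, target_class='pig', min_conf=0.5):
    """Keep bounding boxes whose object class is the target animal.

    detections: iterable of (class_name, confidence, (x1, y1, x2, y2)),
    with the box given by its top-left and bottom-right corners in pixels,
    as described in the text above.
    """
    return [box for cls, conf, box in detections
            if cls == target_class and conf >= min_conf]

# Illustrative detector output for one frame monitoring two pens:
dets = [
    ('pig', 0.92, (30, 60, 350, 400)),
    ('pig', 0.88, (560, 55, 880, 390)),
    ('feeder', 0.70, (420, 10, 480, 90)),    # wrong class, dropped
    ('pig', 0.30, (900, 300, 940, 350)),     # low confidence, dropped
]
boxes = sow_boxes(dets)
assert len(boxes) == 2
```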
In some embodiments, obtaining the delivery-site region image data from each of the preset number of consecutive frames according to the position information may be implemented as follows.
The position information of the animal's delivery-site region is located according to the position information.
Image data of a preset size is then cropped from each of the preset number of consecutive frames according to the position information of the animal's delivery-site region.
In this embodiment, after the animal's position information is located, to avoid interference from non-delivery body parts, the position information of the animal's delivery-site region can be further located according to the animal's position information.
Taking a sow about to farrow as an example, its delivery-site region, i.e. the rump region, can be determined by a blob-sequence-based detection method or by arc matching. From the sow position located in FIG. 5, detection places the delivery-site region in the lower part of FIG. 5. If the size of the delivery-site region is preset to 175×350, then with (x, y) being the bottom-left corner of the white rectangle marking the sow's position in FIG. 5, the region (x-25:x+150, y:y+350) can be defined as the delivery-site region. Cropping the region (x-25:x+150, y:y+350) from each of the preset number of consecutive frames yields the consecutive frames of the delivery-site region. The delivery-site region cropped for the right-hand sow in FIG. 5 is shown in FIG. 6. FIG. 6 is a schematic diagram of a cropped delivery-site region according to an embodiment of the present application.
Cropping the delivery-site region not only avoids interference from non-delivery parts and improves accuracy, but also greatly reduces the data volume and speeds up processing.
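The crop just described — taking (x, y) as the bottom-left corner of the sow's bounding box and extracting the region (x-25:x+150, y:y+350) from every frame — can be sketched with NumPy slicing. The code follows the patent's index notation literally; whether the first index is the row or the column is an assumption here, since the text does not fix the axis convention:

```python
import numpy as np

def crop_delivery_region(frame, x, y):
    """Crop the preset 175x350 region (x-25:x+150, y:y+350) from one frame."""
    return frame[x - 25:x + 150, y:y + 350]

def crop_clip(frames, x, y):
    """Apply the same crop to every frame of the clip, as the text describes."""
    return [crop_delivery_region(f, x, y) for f in frames]

# 16 synthetic monitoring frames, box bottom-left point assumed at (200, 100):
frames = [np.zeros((600, 800, 3), dtype=np.uint8) for _ in range(16)]
clip = crop_clip(frames, x=200, y=100)
assert len(clip) == 16 and clip[0].shape == (175, 350, 3)
```

Since the sow barely moves while farrowing, the same (x, y) can be reused for all 16 frames, which is what makes this per-clip (rather than per-frame) localization cheap.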
In some embodiments, determining whether the animal is giving birth from the consecutive frames of image data of its delivery-site region may be implemented as follows.
The consecutive frames of image data of the animal's delivery-site region are taken as one sample to be recognized, and a pre-trained action-recognition network determines the class of the sample, the classes including: not yet delivering, delivering, and delivery completed. In one example, the action-recognition network can be trained based on a three-dimensional convolutional neural network (3D CNN).
For example, if the preset number is set to 16, the delivery-site region size to 175×350, and the image data are three-channel RGB color pictures, the 16 consecutive frames of the delivery-site region can be stacked in temporal order into an input tensor of shape (16, 175, 350, 3) as one sample to be recognized. FIG. 7 is a schematic diagram of a sample to be recognized in an embodiment of the method for recognizing animal delivery provided by the present application.
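Stacking the 16 cropped frames in temporal order into the (16, 175, 350, 3) input tensor described above is a one-liner with NumPy:

```python
import numpy as np

# 16 cropped delivery-site frames, each 175x350 RGB (synthetic here):
frames = [np.random.randint(0, 256, (175, 350, 3), dtype=np.uint8)
          for _ in range(16)]

# Stack along a new leading time axis, oldest frame first.
sample = np.stack(frames, axis=0)
assert sample.shape == (16, 175, 350, 3)
```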
The action-recognition network of this embodiment can be trained based on a 3D CNN. A 3D CNN incorporates time-dimension information and is used to recognize consecutive frames of image data with higher accuracy; the added time dimension makes it better suited to recognizing the delivery process. In the 3D CNN of this embodiment, both the convolutional layers and the pooling layers use three-dimensional computation, i.e. 3D convolution and 3D pooling.
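What distinguishes the 3D operations is that the kernel also slides along the time axis. A small output-shape helper illustrates this for a (16, 175, 350) clip; valid padding and the 3×3×3 / 2×2×2 layer sizes are illustrative assumptions, since the patent does not disclose the network's actual layer configuration:

```python
def conv3d_out_shape(in_shape, kernel, stride=(1, 1, 1)):
    """Output (T, H, W) of a 3D convolution or pooling layer, valid padding."""
    return tuple((i - k) // s + 1 for i, k, s in zip(in_shape, kernel, stride))

# A (16, 175, 350) clip through a 3x3x3 convolution, then 2x2x2 max pooling:
after_conv = conv3d_out_shape((16, 175, 350), kernel=(3, 3, 3))
after_pool = conv3d_out_shape(after_conv, kernel=(2, 2, 2), stride=(2, 2, 2))
assert after_conv == (14, 173, 348)
assert after_pool == (7, 86, 174)
```

Note how the leading (temporal) dimension shrinks just like the spatial ones: this is exactly the extra dimension that lets the network model motion across the 16 frames rather than classify each frame independently.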
It should be noted that before the action-recognition network is used to recognize animal delivery, it must first be trained with training data, i.e. with labeled training samples. To ensure recognition accuracy, the training samples and the samples to be recognized must use the same data format: if a sample to be recognized is a (16, 175, 350, 3) tensor, each training sample must also be a (16, 175, 350, 3) tensor. FIGS. 8A-8C are schematic diagrams of training samples in an embodiment of the method for recognizing animal delivery provided by the present application. For ease of presentation, FIGS. 8A-8C show single frames, but it will be understood that each training sample consists of the preset number of consecutive frames, e.g. 16 consecutive frames. As shown in FIGS. 8A-8C, the training samples of this embodiment are divided into three classes: not yet delivering (FIG. 8A), delivering (FIG. 8B), and delivery completed (FIG. 8C). Sow farrowing is a process, and a sow passes through these three stages in turn. A sow that is delivering, as in FIG. 8B, requires extra attention to avoid accidents. After delivery is completed, as in FIG. 8C, newborn piglets walk around in the detection area and interfere with delivery recognition. This embodiment therefore trains on multiple sample classes, dividing the detection scenario into the three classes of not yet delivering, delivering, and delivery completed, which detects delivery more effectively across scenarios. Compared with dividing the training samples into only two classes (not yet delivering and delivering), the three-class division effectively reduces interference from piglet activity, improves recognition accuracy, and lowers the false-alarm probability.
In some embodiments, on the basis of any of the above embodiments, an alert may further be issued if it is determined that the animal is giving birth.
In this embodiment, when it is determined that the animal is giving birth, an alert is issued to prompt the relevant staff to handle it in time, avoiding piglet deaths from various causes and the resulting economic losses. The method provided by this embodiment can therefore effectively reduce a farm's economic losses and improve its returns.
The alert in this embodiment can take many forms. For example, when an animal is determined to be giving birth, an alert message can be broadcast in the monitoring room, a warning light can flash, and so on. Further, alert information can be sent to the relevant staff through an instant-messaging tool, where the alert information may include the animal's identifier, its current state, and the like.
In summary, the method for recognizing animal delivery provided by the embodiments of the present application analyzes the monitoring image data of animals about to give birth, realizing automatic recognition of animal delivery; it can replace the existing manual watch-keeping and greatly saves operating and labor costs. Using a deep action-recognition network to recognize delivery from consecutive frames of the delivery-site region yields high recognition accuracy; training on multi-class, multi-scenario data effectively reduces the influence of various interference factors and lowers the system's false-alarm probability.
FIG. 9 is a schematic structural diagram of an embodiment of the apparatus for recognizing animal delivery provided by the present application. As shown in FIG. 9, the apparatus 90 of this embodiment may include: an acquisition module 901, a locating module 902, a processing module 903 and a recognition module 904.
The acquisition module 901 is configured to acquire a preset number of consecutive frames of image data, the image data being monitoring image data of an animal about to give birth.
The locating module 902 is configured to locate the animal's position information according to the preset number of consecutive frames of image data.
The processing module 903 is configured to obtain, according to the position information, image data of the animal's delivery-site region from each of the preset number of consecutive frames.
The recognition module 904 is configured to determine, according to the consecutive frames of image data of the animal's delivery-site region, whether the animal is giving birth.
The apparatus of this embodiment can be used to carry out the technical solution of the method embodiment shown in FIG. 2; its implementation principle and technical effects are similar and are not repeated here.
In one example, the locating module 902 may specifically be configured to: apply a mask to the preset number of consecutive frames of image data according to the position of the piglet nursery; and locate the animal's position information from the masked image data using a pre-trained animal-detection network.
In one example, the locating module 902 may specifically be configured to: determine the centroid of the nursery according to its color features; determine the position and width of the mask according to the distances between the centroid and the two sides of the image data; and apply the mask to each of the preset number of consecutive frames.
In one example, the processing module 903 may specifically be configured to: locate the position information of the animal's delivery-site region according to the position information; and crop image data of a preset size from each of the preset number of consecutive frames according to the position information of the delivery-site region.
In one example, the recognition module 904 may specifically be configured to: take the consecutive frames of image data of the animal's delivery-site region as one sample to be recognized, and determine the class of the sample according to a pre-trained action-recognition network, the classes including: not yet delivering, delivering, and delivery completed.
In one example, the action-recognition network is trained based on a three-dimensional convolutional neural network.
In one example, an alert is issued if it is determined that the animal is giving birth.
FIG. 10 is a schematic structural diagram of an embodiment of the electronic device provided by the present application. The electronic device of this embodiment includes, but is not limited to, a computer, a single server, a server group composed of multiple servers, or a cloud composed of a large number of computers or servers based on cloud computing, where cloud computing is a form of distributed computing: a super virtual computer composed of a group of loosely coupled computers. As shown in FIG. 10, the electronic device 10 may include:
at least one processor 102 and a memory 101;
the memory 101 stores computer-executable instructions;
the at least one processor 102 executes the computer-executable instructions stored in the memory 101, causing the at least one processor 102 to perform the method for recognizing animal delivery described above.
For the specific implementation of the processor 102, reference may be made to the method embodiments of the above method for recognizing animal delivery; the implementation principle and technical effects are similar and are not repeated here. The processor 102 and the memory 101 may be connected by a bus 103.
An embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement any of the above methods for recognizing animal delivery.
In the above embodiments, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into modules is only a logical functional division, and other divisions are possible in actual implementation: multiple modules may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the shown or discussed mutual couplings, direct couplings or communication connections may be indirect couplings or communication connections through interfaces, devices or modules, and may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing unit, each module may exist physically alone, or two or more modules may be integrated into one unit. The unit composed of the above modules may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above integrated modules implemented in the form of software functional modules may be stored in a computer-readable storage medium. The software functional modules are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform some of the steps of the methods described in the embodiments of the present application.
It should be understood that the above processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the invention may be embodied directly as being performed by a hardware processor, or performed by a combination of hardware and software modules in the processor.
The memory may include high-speed RAM and may also include non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk or an optical disc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, etc. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the buses in the drawings of this application are not limited to only one bus or one type of bus.
The above storage medium may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc. The storage medium may be any available medium accessible by a general-purpose or special-purpose computer.
An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an application-specific integrated circuit (ASIC). Of course, the processor and the storage medium may also exist as discrete components in a terminal or server.
Those of ordinary skill in the art will understand that all or some of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some or all of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. A method for recognizing animal delivery, comprising:
    acquiring a preset number of consecutive frames of image data, the image data being monitoring image data of an animal about to give birth;
    locating position information of the animal according to the preset number of consecutive frames of image data;
    according to the position information, obtaining image data of a delivery-site region of the animal from each of the preset number of consecutive frames of image data;
    determining, according to the consecutive frames of image data of the delivery-site region of the animal, whether the animal is giving birth.
  2. The method according to claim 1, wherein the locating position information of the animal according to the preset number of consecutive frames of image data comprises:
    applying a mask to the preset number of consecutive frames of image data according to a position of a piglet nursery;
    locating the position information of the animal from the masked image data by using a pre-trained animal-detection network.
  3. The method according to claim 2, wherein the applying a mask to the preset number of consecutive frames of image data according to a position of a piglet nursery comprises:
    determining a centroid of the piglet nursery according to color features of the piglet nursery;
    determining a position and a width of the mask according to distances between the centroid and two sides of the image data;
    applying the mask to each of the preset number of consecutive frames of image data.
  4. The method according to any one of claims 1-3, wherein the obtaining, according to the position information, image data of the delivery-site region of the animal from each of the preset number of consecutive frames of image data comprises:
    determining position information of the delivery-site region of the animal according to the position information;
    cropping image data of a preset size from each of the preset number of consecutive frames of image data according to the position information of the delivery-site region of the animal.
  5. The method according to any one of claims 1-4, wherein the determining, according to the consecutive frames of image data of the delivery-site region of the animal, whether the animal is giving birth comprises:
    taking the consecutive frames of image data of the delivery-site region of the animal as one sample to be recognized, and determining a class of the sample according to a pre-trained action-recognition network, the classes comprising: not yet delivering, delivering, and delivery completed.
  6. The method according to claim 5, wherein the action-recognition network is trained based on a three-dimensional convolutional neural network.
  7. The method according to any one of claims 1-6, further comprising:
    issuing an alert if it is determined that the animal is giving birth.
  8. An apparatus for recognizing animal delivery, comprising:
    an acquisition module, configured to acquire a preset number of consecutive frames of image data, the image data being monitoring image data of an animal about to give birth;
    a locating module, configured to locate position information of the animal according to the preset number of consecutive frames of image data;
    a processing module, configured to obtain, according to the position information, image data of a delivery-site region of the animal from each of the preset number of consecutive frames of image data;
    a recognition module, configured to determine, according to the consecutive frames of image data of the delivery-site region of the animal, whether the animal is giving birth.
  9. An electronic device, comprising: at least one processor and a memory;
    the memory stores computer-executable instructions;
    the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the method for recognizing animal delivery according to any one of claims 1-7.
  10. A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions which, when executed by a processor, implement the method for recognizing animal delivery according to any one of claims 1-7.
PCT/CN2019/103333 2018-10-16 2019-08-29 Method, apparatus and device for recognizing animal delivery WO2020078114A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811200395.4 2018-10-16
CN201811200395.4A CN109460713B (zh) 2018-10-16 2018-10-16 Method, apparatus and device for recognizing animal delivery

Publications (1)

Publication Number Publication Date
WO2020078114A1 true WO2020078114A1 (zh) 2020-04-23

Family

ID=65607627

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103333 WO2020078114A1 (zh) 2018-10-16 2019-08-29 Method, apparatus and device for recognizing animal delivery

Country Status (2)

Country Link
CN (1) CN109460713B (zh)
WO (1) WO2020078114A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460713B (zh) * 2018-10-16 2021-03-30 JD Digital Technology Holdings Co., Ltd. Method, apparatus and device for recognizing animal delivery
CN111904653A (zh) * 2019-05-08 2020-11-10 Alibaba Group Holding Ltd. Method, apparatus and system for monitoring delivery of a target object, and monitoring system and method
CN110895694A (zh) * 2019-11-20 2020-03-20 Beijing Haiyi Tongzhan Information Technology Co., Ltd. Farrowing monitoring method and apparatus, electronic device and computer-readable storage medium
CN111249029A (zh) * 2020-02-28 2020-06-09 Shanghai Mininglamp Artificial Intelligence (Group) Co., Ltd. Object detection method and apparatus, storage medium and electronic apparatus
CN116935439A (zh) * 2023-07-18 2023-10-24 Hebei Agricultural University Automatic monitoring and early-warning method and system for the delivery of pregnant ewes

Citations (4)

Publication number Priority date Publication date Assignee Title
US20150320010A1 (en) * 2012-12-20 2015-11-12 Schippers Europe B.V. Method and barn for keeping livestock
CN106296738A * 2016-08-09 2017-01-04 Nanjing Agricultural University FPGA-based intelligent sow farrowing detection system and method
CN106791592A * 2015-11-23 2017-05-31 Zhuxi County Ta'erwan Breeding Pig Farm Sow farrowing-crate delivery monitoring system
CN109460713A * 2018-10-16 2019-03-12 Beijing JD Finance Technology Holding Co., Ltd. Method, apparatus and device for recognizing animal delivery

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN102184016B (zh) * 2011-05-13 2012-10-03 Dalian Minzu University Contactless mouse control method based on video-sequence recognition
CN104021599B (zh) * 2014-05-19 2016-08-17 Shenyang University of Technology Campus-WiFi-based classroom attendance management system and headcount method
CN105160310A (zh) * 2015-08-25 2015-12-16 Xidian University Human action recognition method based on 3D convolutional neural network
CN106919891B (zh) * 2015-12-26 2019-08-23 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and apparatus


Non-Patent Citations (2)

Title
LIU, LONGSHEN ET AL.: "Sows Parturition Detection Method Based on Machine Vision", TRANSACTION OF THE CHINESE SOCIETY FOR AGRICULTURAL MACHINERY, 31 March 2014 (2014-03-31) *
ZHANG, CHI ET AL.: "Newborn Piglets Recognition Method Based on Machine Vision", JOURNAL OF NANJING AGRICULTURAL UNIVERSITY, 3 January 2017 (2017-01-03), pages 169 - 175 *

Also Published As

Publication number Publication date
CN109460713B (zh) 2021-03-30
CN109460713A (zh) 2019-03-12


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19872308

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19872308

Country of ref document: EP

Kind code of ref document: A1