KR20170072491A - Apparatus and method for detecting entity in pen - Google Patents
Apparatus and method for detecting entity in pen
- Publication number
- KR20170072491A
- Authority
- KR
- South Korea
- Prior art keywords
- depth
- threshold value
- entities
- detecting
- area
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K29/00—Other apparatus for animal husbandry
- A01K29/005—Monitoring or measuring activity, e.g. detecting heat or mating
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K1/00—Housing animals; Equipment therefor
- A01K1/02—Pigsties; Dog-kennels; Rabbit-hutches or the like
- A01K1/035—Devices for use in keeping domestic animals, e.g. fittings in housings or dog beds
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K29/00—Other apparatus for animal husbandry
-
- H04N13/02—
-
- H04N13/0271—
Abstract
A method for detecting objects in a pen comprises: detecting a plurality of objects based on depth information from an image of the pen interior captured through a depth camera and generating a binarized image; detecting an overlap region in which two or more objects are recognized as one; changing a predetermined depth threshold value to separate the overlapping objects; and tracking the plurality of objects, including the separated objects, through an indexing process.
Description
The present invention relates to an apparatus and method for detecting an object in a pen, and more particularly, to an apparatus and method for detecting and tracking overlapping objects in a pen based on depth information.
Because a small number of managers must care for many animals, it is very difficult to grasp livestock behavior characteristics in real time in a barn or pig farm. Recently, research on object detection systems that can be built at low cost based on ICT (Information and Communication Technology) has been actively conducted for small-scale farms. Among these, object detection systems that continuously track and analyze livestock behavior through camera sensors are being developed.
Previously, object detection techniques using a difference image (frame difference) method have been proposed to detect abnormal conditions in a pen. The difference image method calculates the pixel-wise change between a previous frame (or background frame) and the current frame to detect motion. Using this technique, the detected pixels can be grouped into moving objects, numbered, and their motion tracked.
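The difference image method can be sketched in a few lines of Python. This toy example is illustrative only (the function and variable names are not from the patent), and a real system would operate on camera frames rather than hand-written arrays.

```python
def frame_difference_mask(prev_frame, curr_frame, diff_threshold):
    """Binary motion mask: 1 where the pixel change between two
    grayscale frames meets the threshold, 0 elsewhere."""
    return [
        [1 if abs(c - p) >= diff_threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

# Toy 3x3 frames: a single pixel brightens by 50 between frames.
prev = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
curr = [[0, 0, 0], [0, 50, 0], [0, 0, 0]]
mask = frame_difference_mask(prev, curr, diff_threshold=30)
# only the changed pixel is marked as motion
```

The marked pixels would then be grouped into connected regions and numbered for tracking, which is exactly where the overlap problem discussed below arises.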
In this regard, Korean Patent Laid-Open No. 10-1998-031927, entitled "Surveillance camera and monitoring method for alarming the position of a moving object," discloses a surveillance camera that includes a lens state adjusting unit, a storage unit for storing first and second image data of a surveillance region captured at a predetermined time interval, and a moving object detection unit that computes the difference image between the first and second image data received from the storage unit, detects a moving object according to its size, and generates a moving object detection signal for the detected object.
However, object detection using this difference image technique, while effective for detecting the behavior of each object on a frame-by-frame basis, cannot solve the overlap problem between objects during continuous tracking. In addition, because an ordinary video camera is affected by lighting, weather, and time of day, the feature information extracted during continuous monitoring of objects may be distorted.
Therefore, there is a need for an object detection technology that overcomes these problems and can accurately detect and track the movement of individual objects.
An embodiment of the present invention provides an apparatus and method for detecting overlapping objects in a pen based on depth information.
It should be understood, however, that the technical scope of the present invention is not limited to the above-described technical problems, and other technical problems may exist.
According to an aspect of the present invention, there is provided an apparatus for detecting objects in a pen, comprising: an object detector that detects a plurality of objects based on depth information from an image of the pen interior captured through a depth camera and generates a binarized image; an object overlap analysis unit that detects an object overlap region in which two or more objects are recognized as one, based on the area values of the plurality of objects, and separates the overlapping objects by applying a predetermined depth threshold value to the object overlap region; and an object tracking unit that tracks the plurality of objects, including the separated objects, through an indexing process.
According to another aspect of the present invention, there is provided a method for detecting objects in a pen, comprising: detecting a plurality of objects based on depth information and generating a binarized image; detecting an object overlap region in which two or more objects are recognized as one, based on the area values of the plurality of objects; separating the overlapping objects by applying a predetermined depth threshold value to the object overlap region; and tracking the plurality of objects, including the separated objects, through an indexing process.
According to the above-described problem-solving means of the present invention, overlap between individuals in a barn or pig house, which is a densely populated breeding environment, can be accurately detected in real time, so that individual objects can be distinguished and tracked.
According to an embodiment of the present invention, moving objects are detected in real time using the in-pen depth information obtained through the depth camera, so no pre-processing such as attaching sensors to individual animals is required for livestock behavior analysis. In addition, by using depth images instead of RGB images, false detection due to object color is prevented, and the method is robust against distortion caused by lighting.
According to an embodiment of the present invention, individual identification information and location information can be provided, so that behavior characteristics of individual entities can be easily analyzed.
According to an embodiment of the present invention, even when the camera must be installed in a constrained location, such as at the side of the barn or pig house ceiling, due to the structural characteristics of the breeding environment, accurate depth-based object detection and tracking are still possible.
1 is a block diagram of an apparatus for detecting objects in a pen according to an embodiment of the present invention.
2 is an example of a binarized image showing objects in a pen according to an embodiment of the present invention.
3 is an example of an object overlap analysis processing algorithm according to an embodiment of the present invention.
4 is an example of a result image of the overlap analysis process according to an embodiment of the present invention.
5 is a flowchart illustrating a method for detecting objects in a pen according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the invention. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In order to clearly describe the present invention, parts not related to the description are omitted from the drawings, and like parts are denoted by like reference numerals throughout the specification. The reference numerals are merely for convenience of description and do not limit the interpretation of the concepts, features, functions, or effects of the invention.
Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another element in between. Also, when a part is said to "comprise" a component, this means that it may further include other components; it does not preclude the presence or addition of other features, numbers, steps, operations, elements, components, or combinations thereof.
In this specification, the term "unit" means a unit realized by hardware, by software, or by both; one unit may be realized using two or more pieces of hardware, and two or more units may be realized by one piece of hardware.
1 is a block diagram of an apparatus for detecting objects in a pen according to an embodiment of the present invention.
Referring to FIG. 1, the object detection apparatus 100 includes a depth camera 110, an object detector 120, an object overlap analysis unit 130, and an object tracking unit 140.
In one embodiment of the present invention, the depth camera 110 may be a Kinect sensor. The depth camera 110 photographs the interior of the pen and provides the captured depth information to the object detector 120.
The Kinect sensor is a structured light projection type depth camera that projects a defined pattern image using a projector or a laser and obtains a projected image through a camera to acquire three-dimensional information of a scene.
Such a Kinect sensor includes an infrared emitter that projects a pattern using an infrared laser and an infrared camera that captures the infrared image, with an RGB camera functioning as an ordinary webcam disposed between the infrared emitter and the infrared camera. In addition, the Kinect sensor may further include a microphone array and a tilt motor that adjusts the camera angle.
The basic principle of a Kinect sensor is that when a laser pattern irradiated from an infrared radiator is projected and reflected on an object, the distance to the object surface is obtained using the position and size of the pattern at the reflection point.
According to this principle, the depth camera 110 can acquire depth information about the interior of the pen. The object detector 120 detects a plurality of objects based on the depth information of the captured image. The object detector 120 then generates a binarized image in which the detected objects are distinguished from the background.
2 is an example of a binarized image showing objects in a pen according to an embodiment of the present invention.
In FIG. 2, the gray, numbered areas represent individual objects (i.e., pigs) and are distinguished from the dark areas, which represent the background (for example, the pen floor). Here, the parts shown as objects are regions whose depth value is equal to or greater than a predetermined depth threshold value, and the background parts are regions whose depth value is less than the depth threshold value.
That is, the object detector 120 detects an area whose depth value is equal to or greater than the depth threshold value as an object, and processes an area whose depth value is less than the depth threshold value as the background.
For example, when the depth threshold value is set appropriately, only the moving objects (i.e., standing pigs) are detected as objects, while the pen floor is processed as the background.
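The binarization rule described above (depth at or above the threshold is an object, below it is background) can be sketched as follows; the depth values and threshold are made-up illustrative numbers, not values from the patent.

```python
def binarize_depth(depth_image, depth_threshold):
    """1 (object) where the depth value meets the threshold,
    0 (background) otherwise."""
    return [[1 if d >= depth_threshold else 0 for d in row] for row in depth_image]

# Toy depth map: floor pixels near 0, a standing pig around 80.
depth = [[0, 0, 0],
         [0, 80, 80],
         [0, 0, 0]]
binary = binarize_depth(depth, depth_threshold=50)
# binary == [[0, 0, 0], [0, 1, 1], [0, 0, 0]]
```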
Referring back to FIG. 1, the object overlap analysis unit 130 detects an object overlap region in which two or more objects are recognized as one, based on the area values of the plurality of detected objects. In this case, the object overlap analysis unit 130 determines that an object overlap region exists when the area value of an object is greater than a predetermined area threshold value.
FIG. 3 shows, in pseudocode, the object overlap analysis processing algorithm according to an embodiment of the present invention.
For the object overlap analysis, the input data is the set of position information of the moving objects, {(X1, Y1), (X2, Y2), (X3, Y3), ..., (Xn, Yn)}. For reference, a moving object (i.e., a standing pig) is detected as a region whose depth value is equal to or greater than the predetermined depth threshold value. The output data is the position information of the objects detected from the overlap region of the input data, for example {(Xa, Ya), (Xb, Yb)}.
FIG. 3 shows that the object overlap analysis unit 130 performs the following algorithm on the input data described above.
Specifically, the object overlap analysis processing algorithm can be implemented in the following steps.
In the first step (Step 1), the object overlap analysis unit 130 receives the position information of the moving objects detected from the binarized image. In the second step (Step 2), the object overlap analysis unit 130 designates a region of interest (ROI) for each input moving object and calculates the area S of each region of interest.
In the final step (Step 3), the following object overlap analysis and processing procedure is performed.
First, if the area of the region of interest of an input moving object is equal to or greater than a predetermined area threshold value, it is determined that an object overlap problem has occurred (i.e., an object overlap region exists). For reference, the area threshold value can be set based on the area values of the extracted regions of interest when all individual objects are separately distinguished.
Next, for the region where the object overlap occurs, the predetermined depth threshold value (Depth_threshold) is sequentially increased by a predetermined unit value. That is, the object overlap analysis unit 130 raises the depth threshold value step by step and re-acquires the binarized image at each step.
Next, in the re-acquired binarized image, if the value obtained by subtracting the floor depth value "(x, y).floor_depth" from the object depth value "(x, y).pig_depth" is greater than the depth threshold value, the corresponding position is re-detected as an object.
Then, a region of interest is designated for each re-detected object, and the area S' of the region of interest is recalculated. This process is repeated until the area S' of each re-detected region of interest becomes equal to or smaller than the area threshold value.
If the area S' of a re-detected region of interest is at or below the area threshold value, it means that the two or more objects that were displayed as one region of interest have been separated. That is, the object overlap analysis unit 130 separates overlapping objects by changing the depth threshold value.
More specifically, the above process exploits the curved shape of an animal's back. Two or more objects that are detected as overlapping at some depth threshold value become distinguishable as the depth threshold value increases (i.e., as the detection height rises from the floor), because the positions whose depth values fall below the growing threshold (i.e., the lower edges of the curved back) are progressively processed as background.
For example, referring to the XYZ axes shown in FIG. 1, as the depth threshold value increases, the detected cross-section (in the XY plane) of each object rises along the vertical axis Z above the pen floor, and the area of the detected region (i.e., the region of interest) of the curved back changes accordingly. That is, the back area detected for each of the two or more objects in the overlap region gradually narrows, so the overlapping object areas separate and their boundary becomes clear.
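The separation procedure of Step 3 can be illustrated on a one-dimensional depth profile, where connected components reduce to simple runs: two pigs whose curved backs touch appear as one run at a low threshold, and raising the threshold drops the lower "saddle" between the backs into the background, splitting the run in two. All names and numbers below are illustrative sketches, not values from the patent.

```python
def detect_runs(profile, threshold):
    """Connected runs (start, end) where profile >= threshold:
    a 1-D stand-in for connected-component detection."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v >= threshold and start is None:
            start = i
        elif v < threshold and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(profile) - 1))
    return runs

def separate_overlap(profile, depth_threshold, area_threshold, step=1):
    """Raise the depth threshold until no run exceeds the area threshold."""
    runs = detect_runs(profile, depth_threshold)
    while any(end - start + 1 > area_threshold for start, end in runs):
        depth_threshold += step
        runs = detect_runs(profile, depth_threshold)
    return depth_threshold, runs

# Two curved backs (peaks at 70) joined by a lower saddle (55):
# one oversized run at threshold 50, two runs once the threshold
# passes the saddle height.
profile = [0, 60, 70, 60, 55, 60, 70, 60, 0]
threshold, runs = separate_overlap(profile, depth_threshold=50, area_threshold=5)
```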
As described above, the object overlap analysis unit 130 can separate overlapping objects so that individual objects can be distinguished.
The result of the above process is shown in FIG. 4.
4 is an example of a result image of the overlapping analysis process according to an embodiment of the present invention.
The image on the left side of FIG. 4 is the binarized image detected before the object overlap analysis is performed. At the top, an area where two pigs overlap can be seen. Because the area of this region is larger than the predetermined area threshold value, it is determined that object overlap has occurred. Accordingly, when the depth threshold value is gradually increased, the objects at that position are separated into two, as shown in the image on the right side of FIG. 4.
When the object overlap is resolved through the object overlap analysis as described above, the object tracking unit 140 tracks the plurality of objects, including the separated objects, through an indexing process.
That is, when the area of the region of interest of an input moving object is smaller than the predetermined area threshold value in the object overlap analysis process described with reference to FIG. 3, the object is determined not to be overlapping, and its degree of association with the previous position information can be calculated to track the corresponding object.
Referring again to FIG. 1, the object tracking unit 140 calculates the degree of association of the plurality of objects with their previous positions and simultaneously tracks the individual objects in one image.
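Calculating the degree of association with previous positions is commonly realized as nearest-neighbour matching of object centroids; the greedy sketch below is one plausible, simplified reading of the indexing step (the patent does not specify the exact association measure, and all names here are illustrative).

```python
import math

def associate(prev_positions, curr_positions):
    """Greedily assign each current centroid the index of the nearest
    unused previous centroid, preserving object identities across frames."""
    assignments, used = {}, set()
    for j, curr in enumerate(curr_positions):
        best_i, best_d = None, float("inf")
        for i, prev in enumerate(prev_positions):
            if i in used:
                continue
            d = math.dist(prev, curr)
            if d < best_d:
                best_i, best_d = i, d
        assignments[j] = best_i
        used.add(best_i)
    return assignments

# Two pigs appear in a different order in the new detection list,
# but keep their indices through the association step.
prev = [(10, 10), (50, 50)]
curr = [(52, 49), (11, 12)]
# associate(prev, curr) == {0: 1, 1: 0}
```

A production tracker would typically add a maximum-distance gate and handle objects entering or leaving the pen, which this sketch omits.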
Meanwhile, the object detector 120 may correct the depth information by compensating the depth value according to the position of each object, based on the image capturing angle of the depth camera 110.
Hereinafter, referring to FIG. 5, an object detection method that performs object overlap processing in a pen based on depth information through the object detection apparatus 100 will be described.
FIG. 5 is a flowchart illustrating a method for detecting objects in a pen according to an embodiment of the present invention.
First, a plurality of objects are detected based on the depth information and converted into a binarized image (S510).
Here, the depth threshold value is the reference depth value set to distinguish objects from the background. Accordingly, an area whose depth value is equal to or greater than the depth threshold value is detected as an object, and an area whose depth value is smaller than the depth threshold value is processed as the background.
Next, an object overlap region in which two or more objects are recognized as one is detected based on the detected plurality of object area values (S520).
At this time, if the area value per object is larger than the preset area threshold value, it can be determined as the object overlap region.
For reference, a step of correcting the depth information by compensating the depth value according to the position of each object, based on the image capturing angle, may be performed first, and then the object overlap region may be detected.
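The patent does not spell out the compensation formula, but one plausible sketch under a pinhole-camera assumption converts the depth measured along the viewing ray into the vertical distance below a ceiling-mounted camera; all parameter names here are illustrative assumptions.

```python
import math

def compensate_depth(raw_depth, px, py, cx, cy, focal_px):
    """Vertical distance below the camera for a depth measured along the
    ray through pixel (px, py); (cx, cy) is the principal point and
    focal_px the focal length in pixels (pinhole model, illustrative)."""
    radius = math.hypot(px - cx, py - cy)  # pixel offset from the optical axis
    angle = math.atan2(radius, focal_px)   # ray angle from the vertical axis
    return raw_depth * math.cos(angle)

# At the image centre the ray is vertical, so the depth is unchanged;
# towards the image edge the same raw depth maps to a smaller height.
```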
Then, the overlapped objects are separated by applying a predetermined depth threshold value to the detected object overlap region (S530).
Specifically, the depth threshold value is changed until two or more objects are separately detected at the position of the object overlap region; the binarized image is regenerated with the changed depth threshold value, the two or more separated objects are detected from it, and position information is generated for each of these objects.
At this time, the depth threshold value is gradually increased until the area value of the object overlapping area becomes smaller than the predetermined area threshold value.
Next, a plurality of entities including the separated entities are tracked through an indexing process (S540).
At this time, the degree of association between the plurality of objects and their previous positions can be calculated, and the individual objects can be tracked simultaneously in one image.
The method for detecting objects in a pen based on depth information according to an embodiment of the present invention described above may be implemented in the form of a recording medium including instructions executable by a computer, such as a program module executed by a computer. Computer-readable media can be any available media that can be accessed by a computer, and include both volatile and nonvolatile media and removable and non-removable media. Computer-readable media may also include both computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Communication media typically include computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transport mechanism, and include any information delivery media.
It will be understood by those skilled in the art that the foregoing description of the present invention is for illustrative purposes only, and that various changes and modifications may be made without departing from the spirit or essential characteristics of the present invention. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. For example, each component described as a single unit may be implemented in a distributed manner, and components described as distributed may be implemented in a combined form.
The scope of the present invention is defined by the appended claims rather than by the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents are to be construed as being included within the scope of the present invention.
100: Object detection apparatus in a pen
110: Depth camera
120: Object detector
130: Object overlap analysis unit
140: Object tracking unit
Claims (13)
An object detector for detecting a plurality of entities based on depth information from a pen interior image photographed through a depth camera and generating a binarized image;
An object overlapping analyzing unit for detecting an object overlapping region in which two or more objects are recognized as one based on the area values of the plurality of objects and applying a predetermined depth threshold value to the object overlapping region to separate the overlapping objects; And
And an entity tracking unit for processing tracking through the indexing process on the plurality of entities including the separated entities.
Wherein the object overlap analyzing unit comprises:
Determining an object overlapping region when the area value of each object is greater than a predetermined area threshold value and changing the depth threshold value until two or more objects are separated and detected at the position of the object overlapping region,
Wherein the entity detecting unit comprises:
Wherein the binarized image is generated by re-detecting a plurality of entities including the two or more entities separated according to the change of the depth threshold value.
Wherein the entity tracking unit:
And calculates the degree of association of the plurality of entities with respect to the previous location, and simultaneously tracks individual entities in one image.
Wherein the object overlap analyzing unit comprises:
And changing the depth threshold value until the area value of the object overlap region becomes smaller than a predetermined area threshold value.
Wherein the entity detecting unit comprises:
And corrects the depth information by compensating a depth value according to a position of each object based on an image photographing angle of the depth camera.
Wherein the entity detecting unit comprises:
Detecting an area having a depth value equal to or greater than the depth threshold value as an object and processing an area having a depth value less than the depth threshold value as a background.
Detecting a plurality of entities based on depth information from a pen interior image photographed through a depth camera and generating a binarized image;
Detecting an object overlap region in which two or more objects are recognized as one based on the plurality of object area values;
Separating the overlapping entities by applying a predetermined depth threshold value to the object overlapping region; And
And tracking the plurality of entities including the separated entities through an indexing process.
Wherein the step of detecting the object overlap region comprises:
And if the area value of each object is greater than a preset area threshold value, it is determined to be an object overlap region.
The step of separating the overlapping entity comprises:
Changing the depth threshold value until at least two entities are separately detected at the position of the object overlapping region; And
And re-detecting a plurality of objects including two or more separated objects by applying the depth threshold value to the object overlapping area.
Wherein the step of tracking the plurality of entities through an indexing process comprises:
Calculating a degree of association between the plurality of entities and a previous location, and tracking individual entities in a single image at the same time.
The method of claim 1,
And changing the depth threshold value until the area value of the object overlapping area becomes smaller than a predetermined area threshold value.
Wherein the step of detecting the object overlap region comprises:
And correcting the depth information by compensating the depth value according to the position of each object based on the image photographing angle.
Wherein the detecting the plurality of entities comprises:
Detecting an area having a depth value greater than or equal to the depth threshold value as an entity and processing an area having a depth value less than the depth threshold value as a background.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150180702A KR101793790B1 (en) | 2015-12-17 | 2015-12-17 | Apparatus and method for detecting entity in pen |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150180702A KR101793790B1 (en) | 2015-12-17 | 2015-12-17 | Apparatus and method for detecting entity in pen |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170072491A true KR20170072491A (en) | 2017-06-27 |
KR101793790B1 KR101793790B1 (en) | 2017-11-20 |
Family
ID=59514733
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150180702A KR101793790B1 (en) | 2015-12-17 | 2015-12-17 | Apparatus and method for detecting entity in pen |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101793790B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102172347B1 (en) * | 2019-11-11 | 2020-10-30 | (주)이지팜 | Method and system for determining health status of farm livestock |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101336139B1 (en) | 2012-06-11 | 2013-12-05 | 동의대학교 산학협력단 | System and method for motion estimating using depth camera |
KR101568979B1 (en) | 2014-05-22 | 2015-11-13 | 고려대학교 산학협력단 | Cattle monitoring system and method using depth information |
-
2015
- 2015-12-17 KR KR1020150180702A patent/KR101793790B1/en active IP Right Grant
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190030082A (en) * | 2017-09-13 | 2019-03-21 | (주)제이케이데이터시스템즈 | System and Method for measuring weight of poultry based on pattern recognition |
CN113449638A (en) * | 2021-06-29 | 2021-09-28 | 西藏新好科技有限公司 | Pig image ideal frame screening method based on machine vision technology |
CN113449638B (en) * | 2021-06-29 | 2023-04-21 | 北京新希望六和生物科技产业集团有限公司 | Pig image ideal frame screening method based on machine vision technology |
Also Published As
Publication number | Publication date |
---|---|
KR101793790B1 (en) | 2017-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6825569B2 (en) | Signal processor, signal processing method, and program | |
CN111160302B (en) | Obstacle information identification method and device based on automatic driving environment | |
US11195038B2 (en) | Device and a method for extracting dynamic information on a scene using a convolutional neural network | |
US8867790B2 (en) | Object detection device, object detection method, and program | |
US6956469B2 (en) | Method and apparatus for pedestrian detection | |
US8611591B2 (en) | System and method for visually tracking with occlusions | |
US7684590B2 (en) | Method of recognizing and/or tracking objects | |
JP6125188B2 (en) | Video processing method and apparatus | |
KR101766603B1 (en) | Image processing apparatus, image processing system, image processing method, and computer program | |
CA3066502A1 (en) | Determining positions and orientations of objects | |
KR100879623B1 (en) | Automated wide area surveillance system using ptz camera and method therefor | |
US9619895B2 (en) | Image processing method of vehicle camera and image processing apparatus using the same | |
CN103123687A (en) | Fast obstacle detection | |
CN106228570B (en) | A kind of Truth data determines method and apparatus | |
US10692225B2 (en) | System and method for detecting moving object in an image | |
KR101793790B1 (en) | Apparatus and method for detecting entity in pen | |
US20150178573A1 (en) | Ground plane detection | |
JP2016152027A (en) | Image processing device, image processing method and program | |
Naser et al. | Shadowcam: Real-time detection of moving obstacles behind a corner for autonomous vehicles | |
US20190005344A1 (en) | Part recognition method, information processing apparatus, and imaging control system | |
KR101827113B1 (en) | Apparatus and method for detecting proximal entity in pen | |
KR101595334B1 (en) | Method and apparatus for movement trajectory tracking of moving object on animal farm | |
Hadi et al. | Fusion of thermal and depth images for occlusion handling for human detection from mobile robot | |
KR101827114B1 (en) | Apparatus and method for detecting proximal entity in pen | |
CN111325073A (en) | Monitoring video abnormal behavior detection method based on motion information clustering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
N231 | Notification of change of applicant | ||
E701 | Decision to grant or registration of patent right |