KR20170072491A - Apparatus and method for detecting entity in pen - Google Patents

Apparatus and method for detecting entity in pen

Info

Publication number
KR20170072491A
Authority
KR
South Korea
Prior art keywords
depth
threshold value
entities
detecting
area
Prior art date
Application number
KR1020150180702A
Other languages
Korean (ko)
Other versions
KR101793790B1 (en)
Inventor
박대희
정용화
이종욱
최장민
노병준
Original Assignee
Korea University Research and Business Foundation (고려대학교 산학협력단)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Korea University Research and Business Foundation (고려대학교 산학협력단)
Priority to KR1020150180702A priority Critical patent/KR101793790B1/en
Publication of KR20170072491A publication Critical patent/KR20170072491A/en
Application granted granted Critical
Publication of KR101793790B1 publication Critical patent/KR101793790B1/en

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K29/00 Other apparatus for animal husbandry
    • A01K29/005 Monitoring or measuring activity, e.g. detecting heat or mating
    • A01K1/00 Housing animals; Equipment therefor
    • A01K1/02 Pigsties; Dog-kennels; Rabbit-hutches or the like
    • A01K1/035 Devices for use in keeping domestic animals, e.g. fittings in housings or dog beds
    • H04N13/02
    • H04N13/0271

Abstract

A method for detecting objects in a pen comprises: detecting a plurality of objects based on depth information from an image of the pen interior captured through a depth camera and generating a binarized image; detecting an object overlap region in which two or more objects are recognized as one; changing a predetermined depth threshold value to separate the overlapping objects; and tracking the plurality of objects, including the separated objects, through an indexing process.

Description

APPARATUS AND METHOD FOR DETECTING ENTITY IN PEN

The present invention relates to an apparatus and method for detecting objects in a pen, and more particularly, to an apparatus and method for detecting and tracking individual objects in a pen based on depth information.

It is very difficult to grasp the behavior of livestock in real time because, in a typical barn or pig farm, a few administrators must manage many animals. Recently, research on object detection systems that can be built at low cost based on ICT (Information and Communication Technology) has been actively carried out for small-scale farms. Among these, object detection systems that continuously track and analyze livestock behavior through camera sensors are being developed.

Previously, object detection techniques using a frame difference (difference image) technique have been proposed to detect abnormal conditions in a pen. The difference image technique calculates the pixel change between a previous frame (or background frame) and the current frame and thereby detects motion. Using this technique, the detected pixels can be grouped into moving objects, numbered, and tracked.

In this regard, Korean Patent Laid-Open No. 10-1998-031927, entitled "Surveillance camera and monitoring method for alarming the position of a moving object", discloses a surveillance camera comprising a lens state adjusting unit, a storage unit for storing first and second image data of a surveillance region captured at a predetermined time interval, and a moving object detection unit that computes a difference image between the first and second image data read from the storage unit, detects a moving object depending on the size of the difference, and generates a moving object detection signal according to the detected moving object.

However, object detection techniques based on the difference image technique, while effective for detecting the behavior of each object on a frame-by-frame basis, cannot solve the overlap problem between objects during continuous object tracking. In addition, when an ordinary video camera is used, which is affected by lighting, weather, and time of day, the characteristic information extracted during continuous monitoring of an object may be distorted.

There is therefore a need for an object detection technology that can overcome these problems and accurately detect and track the movement of individual objects.

An embodiment of the present invention provides an apparatus and method for detecting objects that overlap in a pen based on depth information.

It should be understood, however, that the technical scope of the present invention is not limited to the above-described technical problems, and other technical problems may exist.

According to an aspect of the present invention, there is provided an apparatus for detecting objects in a pen, comprising: an object detection unit that detects a plurality of objects based on depth information from an image of the pen interior captured through a depth camera and generates a binarized image; an object overlap analysis unit that detects an object overlap region in which two or more objects are recognized as one, based on the area values of the plurality of objects, and applies a predetermined depth threshold value to the object overlap region to separate the overlapping objects; and an object tracking unit that tracks the plurality of objects, including the separated objects, through an indexing process.

According to another aspect of the present invention, there is provided a method for detecting objects in a pen, comprising: detecting a plurality of objects based on depth information and generating a binarized image; detecting an object overlap region in which two or more objects are recognized as one, based on the area values of the plurality of objects; separating the overlapping objects by applying a depth threshold value to the object overlap region; and tracking the plurality of objects, including the separated objects, through an indexing process.

According to the above-described solution, overlaps between individuals in a pen or pig house, which is a dense breeding environment, can be accurately detected in real time so that individual objects can be distinguished and tracked.

According to an embodiment of the present invention, moving objects are detected in real time using in-pen depth information obtained through the depth camera, so pre-processing such as attaching sensors to individual animals for behavior analysis is unnecessary. In addition, by using depth images instead of RGB images, false detections due to object color can be prevented, and the method is robust to distortion caused by lighting.

According to an embodiment of the present invention, individual identification information and location information can be provided, so that behavior characteristics of individual entities can be easily analyzed.

According to an embodiment of the present invention, even if the camera must be installed in a particular area, such as at the side of the ceiling of a barn or pig house, due to the structural characteristics of the breeding environment, the depth information can be corrected so that accurate depth-based object detection and tracking remain possible.

FIG. 1 is a block diagram of an in-pen object detection apparatus according to an embodiment of the present invention.
FIG. 2 is an example of a binarized image showing objects in a pen according to an embodiment of the present invention.
FIG. 3 is an example of an object overlap analysis processing algorithm according to an embodiment of the present invention.
FIG. 4 is an example of a resulting image of the overlap analysis process according to an embodiment of the present invention.
FIG. 5 is a flowchart illustrating a method for detecting objects in a pen according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that they can be readily carried out by those skilled in the art. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In order to clearly describe the present invention, parts not related to the description are omitted from the drawings, and like parts are denoted by like reference numerals throughout the specification. The reference numerals are merely for convenience of description; the concepts, features, functions, and effects of the invention are not limited by them.

Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another element in between. Also, when a part is said to "comprise" a component, this means that it may further include other components; it does not preclude the presence or addition of other features, numbers, steps, operations, elements, components, or combinations thereof, unless specifically stated otherwise.

In this specification, the term "unit" means a unit realized by hardware, by software, or by both; one unit may be realized using two or more pieces of hardware, and two or more units may be realized by one piece of hardware.

FIG. 1 is a block diagram of an in-pen object detection apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the object detection apparatus 100 includes a depth camera 110, an object detection unit 120, an object overlap analysis unit 130, and an object tracking unit 140.

In an embodiment of the present invention, the detection and tracking of pigs densely raised in a pig pen will be described as an example of the operation of the object detection apparatus 100.

The depth camera 110 is installed at a position and angle from which the entire interior of the pen can be photographed, and transmits images captured inside the pen to the object detection unit 120 in real time. For example, as shown in FIG. 1, the depth camera 110 may be installed in one area (e.g., the ceiling) of the pen.

The depth camera 110 may use any of various depth sensing methods, and depth information is included in the images captured through the depth camera 110. In an embodiment of the present invention, it is assumed that the depth camera 110 is a Kinect sensor.

The Kinect sensor is a structured-light projection type depth camera that projects a defined pattern image using a projector or a laser and captures the projected image through a camera to acquire three-dimensional information of the scene.

The Kinect sensor includes an infrared emitter that projects a pattern using an infrared laser and an infrared camera that captures infrared images; an RGB camera functioning as a general webcam is disposed between the infrared emitter and the infrared camera. In addition, the Kinect sensor may further include a microphone array and a tilt motor that adjusts the angle of the camera.

The basic principle of the Kinect sensor is that when the laser pattern emitted from the infrared emitter is projected onto and reflected from an object, the distance to the object surface is obtained using the position and size of the pattern at the reflection point.
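
As an illustration of this principle (not part of the patent text), structured-light depth recovery is commonly modeled as triangulation: the observed shift (disparity) of the pattern against a reference image is inversely proportional to the distance of the reflecting surface. A minimal sketch with hypothetical parameter values:

```python
def depth_from_disparity(disparity_px: float,
                         baseline_m: float = 0.075,
                         focal_px: float = 580.0) -> float:
    """Triangulation model for a structured-light sensor (illustrative).

    baseline_m (emitter-to-camera distance) and focal_px are assumed,
    order-of-magnitude values, not specifications from the patent.
    """
    if disparity_px <= 0:
        return float("inf")  # no measurable pattern shift -> out of range
    return baseline_m * focal_px / disparity_px
```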

According to this principle, the depth camera 110 projects the laser pattern into the space inside the pen and senses the laser pattern reflected from objects, thereby generating an image that includes per-object depth information.

The object detection unit 120 converts the image including the per-object depth information received from the depth camera 110 into a binary image. At this time, the object detection unit 120 detects regions whose reflected magnitude of the laser pattern (i.e., depth value) is equal to or greater than a predetermined depth threshold value as objects and displays those regions in the binary image. For reference, the depth threshold is a threshold on the magnitude of the infrared pattern projected through the depth camera 110 and reflected from an object. In an embodiment of the present invention, the greater the height of a surface above the floor of the pen, the greater its depth value (i.e., the reflected infrared value increases), so a larger depth threshold corresponds to a greater height above the floor.
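
A minimal sketch of this binarization step (assuming, per the convention above, that the per-pixel depth value grows with height above the pen floor; the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def binarize_depth(depth_map: np.ndarray, depth_threshold: float) -> np.ndarray:
    """Binarize a depth frame: 1 = object (pig), 0 = background (floor).

    depth_map: 2-D array of per-pixel depth values, where larger values
    correspond to surfaces higher above the pen floor.
    """
    return (depth_map >= depth_threshold).astype(np.uint8)
```

Connected sets of 1-pixels in the resulting image then correspond to the individual objects described below.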

Then, the object detection unit 120 sets the detected objects as regions of interest (ROIs), extracts the position and area of each ROI so that the behavior pattern of each object can be analyzed, and stores them as attribute information.

FIG. 2 is an example of a binarized image showing objects in a pen according to an embodiment of the present invention.

In FIG. 2, the areas marked in gray are distinguished from the shaded areas, and the numbered areas represent individual objects (i.e., pigs). Areas other than the objects are expressed as background (e.g., the pen floor). Here, a portion indicated as an object is a region whose depth value is equal to or greater than the predetermined depth threshold value, and the background is a region whose depth value is less than the depth threshold value.

That is, the object detection unit 120 detects pixels whose reflected magnitude is equal to or greater than the depth threshold value in the image data received from the depth camera 110, assigns a predetermined value to the detected pixels, and displays them so as to be distinguished from the background region. For reference, a set of consecutive or adjacent detected pixels may be represented as one object.

For example, when the depth camera 110 is installed at a position other than the center of the pen ceiling (e.g., at the side) so that images are taken at an oblique angle, the object detection unit 120 may correct the depth information before detecting objects, to resolve the resulting depth distortion. At this time, the object detection unit 120 corrects the depth information by compensating the depth value according to the position of each object based on the image capturing angle.
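
The patent does not give the compensation formula. One minimal sketch, under the assumption that a per-pixel reference frame of the empty pen floor is available (e.g., captured once before the animals enter), is to subtract that reference so the oblique viewing angle no longer biases the values used for thresholding:

```python
import numpy as np

def correct_depth(raw_map: np.ndarray, floor_map: np.ndarray) -> np.ndarray:
    """Position-dependent depth compensation (illustrative, assumed approach).

    raw_map:   depth-value frame from the obliquely mounted camera.
    floor_map: per-pixel values of the empty pen floor, captured once;
               it encodes how the capture angle varies across the image.
    Returns values proportional to height above the floor, comparable
    across the whole image regardless of pixel position.
    """
    return np.clip(raw_map - floor_map, 0.0, None)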

Referring back to FIG. 1, the object overlap analysis unit 130 detects an "object overlap" region, i.e., a region in which two or more objects cannot be distinguished from each other and are indicated as one region of interest. That is, "object overlap" refers to the phenomenon in which, because objects are very close to each other, their contours or the boundary between them become unclear and they are recognized as one object in the binarized black-and-white image.

In this case, the object overlap analysis unit 130 performs the adaptive threshold algorithm shown in FIG. 3, sequentially applying increased depth threshold values to the positions where object overlap is detected, in order to separate the overlapping objects.

FIG. 3 shows, in pseudocode, the algorithm that processes the overlap analysis according to an embodiment of the present invention.

In the object overlap analysis, the input data is the position information of the motion objects, {(X1, Y1), (X2, Y2), (X3, Y3), ..., (Xn, Yn)}. For reference, a motion object (i.e., a standing pig) is detected as a region having a depth value equal to or greater than the predetermined depth threshold value. The output data is the position information {(Xa, Ya), (Xb, Yb)} of the objects re-detected in the region where the overlap problem occurred. For example, if object overlap occurs at the nth motion object (Xn, Yn) of the input data, the output for that object can be set to {(Xa, Ya), (Xb, Yb)}. Here, the position information of a motion object represents coordinates on the x and y axes shown in FIG. 1. For reference, to indicate the ROI of an object as a rectangle, the position information {(Xa, Ya), (Xb, Yb)} is expressed simply by two diagonally opposite corner points of the rectangle rather than by all four corners.

FIG. 3 shows the object overlap analysis unit 130 processing the overlap analysis for motion objects (i.e., standing objects). However, the input data {(X1, Y1), (X2, Y2), ..., (Xn, Yn)} may further include the position information of other objects in the pen (i.e., lying objects having a depth value at or below the depth threshold value). That is, although the depth threshold value in FIG. 3 is set large enough to detect only standing objects, it may instead be set so as to detect all objects.

Specifically, the object overlap analysis processing algorithm can be implemented in the following steps.

In the first step (Step 1), the object overlap analysis unit 130 sets the position information data of the motion objects as the input data.

In the second step (Step 2), the object overlap analysis unit 130 calculates the area S (ROI size) of the ROI based on the position information of each motion object.

In the final step (Step 3), the following object overlap analysis and processing procedure is performed.

First, if the ROI area of an input motion object is equal to or greater than a predetermined area threshold, it is determined that an object overlap problem has occurred (i.e., the ROI is an object overlap region). For reference, the area threshold value can be set based on the area values of the extracted regions of interest when all individual objects are distinguished.

Next, the predetermined depth threshold value (Depth_threshold) is sequentially increased by a predetermined unit value for the region where the object overlap occurs. That is, the object overlap analysis unit 130 obtains from the object detection unit 120 the binarized image regenerated by applying a depth threshold value higher than the previous one, and re-detects the objects (and their positions) in the overlap region.

For reference, under the control of the object overlap analysis unit 130, the object detection unit 120 may apply the changed depth threshold value only to the position where the object overlap occurs. That is, when object overlap is determined by the object overlap analysis unit 130, the object detection unit 120 re-acquires an image of the pen interior (e.g., a nursery pen) taken by the depth camera 110. Then, the object detection unit 120 applies the changed depth threshold value to the position determined to be the object overlap region while maintaining the original depth threshold value for the remaining regions. The object detection unit 120 may generate a binarized image by re-detecting objects in the overlap region using the changed depth threshold and provide the result to the object overlap analysis unit 130.

Next, in the re-acquired binarized image, a pixel is re-detected as belonging to an object if the value obtained by subtracting the floor depth value "(x, y).floor_depth" from the object depth value "(x, y).pig_depth" is larger than the depth threshold value.

Then, a region of interest is designated for each re-detected object, and its area S' is recalculated. The above process is repeated until the area S' of each re-detected region of interest becomes equal to or smaller than the area threshold value.

If the area S' of a re-detected region of interest is at or below the area threshold value, it means that the two or more individuals that had been displayed as one region of interest have been separated. That is, the object detection unit 120 detects the plurality of objects, including the two or more objects separated by the change of the depth threshold value, and generates the binarized image. Accordingly, the object overlap analysis unit 130 obtains from the object detection unit 120 the coordinates {(Xa, Ya), (Xb, Yb)} of each object in the former overlap region whose area is now below the area threshold, and stores their position information.
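
A minimal sketch of this adaptive-threshold loop (illustrative: `scipy.ndimage.label` stands in for whatever connected-component step the implementation uses, and the step size and upper bound are assumed, not taken from the patent):

```python
import numpy as np
from scipy import ndimage

def split_overlap(depth_map, roi, depth_threshold, area_threshold,
                  step=10.0, max_threshold=400.0):
    """Raise the depth threshold inside an overlap ROI until the blob splits.

    roi: (x0, y0, x1, y1), the two diagonal corners of the overlap region.
    Returns one bounding box per separated object, or the original ROI
    if no split is achieved before max_threshold is reached.
    """
    x0, y0, x1, y1 = roi
    patch = depth_map[y0:y1, x0:x1]
    t = depth_threshold
    while t <= max_threshold:
        labels, n = ndimage.label(patch >= t)  # connected components
        slices = ndimage.find_objects(labels)
        boxes = [(x0 + s[1].start, y0 + s[0].start,
                  x0 + s[1].stop,  y0 + s[0].stop) for s in slices]
        areas = [(b[2] - b[0]) * (b[3] - b[1]) for b in boxes]
        # stop once two or more objects appear and each ROI area S' is
        # at or below the area threshold
        if n >= 2 and all(a <= area_threshold for a in areas):
            return boxes
        t += step  # sequentially increase the depth threshold
    return [roi]
```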

More specifically, the above process exploits the curved shape of each animal's back. That is, two or more objects that are detected as overlapping at an arbitrary depth threshold value become separated as the depth threshold value increases (i.e., as the detection height above the floor rises). This is because, as the depth threshold value grows, the positions with values below it (i.e., the relatively lower edge parts of each animal's back) are processed as background.

For example, referring to the XYZ axes shown in FIG. 1, as the depth threshold increases, the height of the cross-section (XY plane) of the object rises along the vertical Z axis of the pen. Thus, the area of the detected region (i.e., the region of interest) on the curved back of each object changes. That is, the back area detected for each of the two or more objects included in the overlap region gradually narrows, so that the overlapping object regions separate and the boundary between them becomes clear.

As described above, the object overlap analysis unit 130 detects object overlap based on the ROI area of each motion object, sequentially changes the depth threshold value at the position where the overlap is detected until the objects can be separated, and then detects the objects again.

The result of the above process is shown in FIG. 4.

FIG. 4 is an example of a resulting image of the overlap analysis process according to an embodiment of the present invention.

The image on the left side of FIG. 4 is a binary image detected before the object overlap analysis is performed. An area where two pigs are superimposed can be seen at the top. In this case, since the area of that region is larger than the predetermined area threshold value, it is determined that object overlap has occurred. Accordingly, as the depth threshold value is gradually increased, the objects at that position are separated into two, as shown in the image on the right side of FIG. 4.

When the object overlap is resolved through the overlap analysis as described above, the object tracking unit 140 performs an indexing process on the individual objects.

That is, when the ROI area of an input motion object is smaller than the predetermined area threshold in the overlap analysis process described with reference to FIG. 3, it is determined that the ROI is not an overlapping object, and the degree of association with the previous position information can be calculated to track the corresponding object.

Referring again to FIG. 1, the object tracking unit 140 tracks the plurality of objects, including the objects separated by the object overlap analysis unit 130, through the indexing process. At this time, the object tracking unit 140 may calculate the Euclidean distance between the current position information and the previous position information of each object and perform the indexing process so that each object keeps the identification information assigned at its previous position. Through this, object overlaps can be analyzed and resolved even when several objects move at the same time, and the tracking of individual objects can be processed simultaneously in one image.
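
A minimal sketch of such Euclidean-distance indexing (illustrative: greedy nearest-neighbor matching is one simple way to realize the association step described above, not necessarily the patented one):

```python
import math

def assign_indices(prev: dict, curr: list) -> dict:
    """Carry object IDs from the previous frame to the nearest detection.

    prev: {object_id: (x, y)} centroids from the previous frame.
    curr: [(x, y), ...] centroids detected in the current frame.
    Returns {object_id: (x, y)} for the current frame.
    """
    assigned, used = {}, set()
    for oid, (px, py) in prev.items():
        best_i, best_d = None, float("inf")
        for i, (cx, cy) in enumerate(curr):
            if i in used:
                continue
            d = math.hypot(cx - px, cy - py)  # Euclidean distance
            if d < best_d:
                best_i, best_d = i, d
        if best_i is not None:
            assigned[oid] = curr[best_i]
            used.add(best_i)
    return assigned
```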

Meanwhile, the object detection apparatus 100 described with reference to FIGS. 1 to 4 may be implemented by a memory and a processor interworking with the depth camera 110. That is, each algorithm or operation processed by the object detection unit 120, the object overlap analysis unit 130, and the object tracking unit 140 shown in FIG. 1 may be stored in the memory (not shown) as one program, or as two or more programs linked to each other. The one or more programs stored in the memory are executed by the processor (not shown), which performs the predetermined processes as each program runs.

Hereinafter, referring to FIG. 5, the object detection method that performs object overlap processing in a pen based on depth information through the object detection apparatus 100 will be described in detail.

FIG. 5 is a flowchart illustrating a method for detecting objects in a pen according to an embodiment of the present invention.

First, a plurality of objects are detected based on the depth information and a binarized image is generated (S510).

Here, the depth threshold value is the criterion depth value set to distinguish objects from the background. Accordingly, an area whose reflected magnitude (i.e., depth value) is greater than or equal to the depth threshold value is detected as an object, and an area whose reflected magnitude is smaller than the depth threshold value is processed as a background region.

Next, an object overlap region in which two or more objects are recognized as one is detected based on the area values of the detected objects (S520).

At this time, if the area value of an object is larger than the preset area threshold value, the corresponding region can be determined to be an object overlap region.

For reference, the step of correcting the depth information by compensating the depth value according to the position of each object based on the image capturing angle may be performed first, and then the object overlapping region may be detected.

Then, the overlapped objects are separated by applying a predetermined depth threshold value to the detected object overlap region (S530).

Specifically, the depth threshold value is changed until two or more objects are detected separately at the position of the object overlap region; the two or more separated objects are detected from the regenerated binarized image, and position information can be generated for each of them.

At this time, the depth threshold value is gradually increased until the area value of the object overlapping area becomes smaller than the predetermined area threshold value.

Next, a plurality of entities including the separated entities are tracked through an indexing process (S540).

At this time, the degree of association of each of the plurality of objects with its previous position can be calculated, and individual objects can be tracked in one image at the same time.

The method for detecting objects in a pen based on depth information according to an embodiment of the present invention described above may be implemented in the form of a recording medium including instructions executable by a computer, such as a program module executed by a computer. Computer readable media can be any available media that can be accessed by a computer and includes both volatile and nonvolatile media, removable and non-removable media. The computer readable medium may also include both computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Communication media typically includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transport mechanism, and includes any information delivery media.

It will be understood by those skilled in the art that the foregoing description of the present invention is for illustrative purposes only and that various changes and modifications may be made without departing from the spirit or essential characteristics of the present invention. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. For example, each component described as a single entity may be implemented in a distributed manner, and components described as distributed may be implemented in a combined form.

The scope of the present invention is defined by the appended claims rather than by the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents are to be construed as being included within the scope of the present invention.

100: in-pen object detection apparatus
110: depth camera
120: object detection unit
130: object overlap analysis unit
140: object tracking unit

Claims (13)

1. An apparatus for detecting objects in a pen, the apparatus comprising:
an object detection unit that detects a plurality of objects based on depth information from an image of the pen interior photographed through a depth camera and generates a binarized image;
an object overlap analysis unit that detects an object overlap region in which two or more objects are recognized as one, based on the area values of the plurality of objects, and applies a predetermined depth threshold value to the object overlap region to separate the overlapping objects; and
an object tracking unit that tracks the plurality of objects, including the separated objects, through an indexing process.
2. The apparatus of claim 1,
wherein the object overlap analysis unit determines a region to be an object overlap region when the area value of an object is greater than a predetermined area threshold value, and changes the depth threshold value until two or more objects are detected separately at the position of the object overlap region, and
wherein the object detection unit generates the binarized image by re-detecting the plurality of objects, including the two or more objects separated according to the change of the depth threshold value.
3. The apparatus of claim 1,
wherein the object tracking unit calculates the degree of association of each of the plurality of objects with its previous position and tracks individual objects in one image at the same time.
4. The apparatus of claim 1,
wherein the object overlap analysis unit changes the depth threshold value until the area value of the object overlap region becomes smaller than a predetermined area threshold value.
5. The apparatus of claim 1,
wherein the object detection unit corrects the depth information by compensating the depth value according to the position of each object based on the image capturing angle of the depth camera.
6. The apparatus of claim 1,
wherein the object detection unit detects an area having a depth value equal to or greater than the depth threshold value as an object and processes an area having a depth value less than the depth threshold value as background.
7. A method for detecting objects in a pen via an object detection apparatus, the method comprising:
detecting a plurality of objects based on depth information from an image of the pen interior photographed through a depth camera and generating a binarized image;
detecting an object overlap region in which two or more objects are recognized as one, based on the area values of the plurality of objects;
separating the overlapping objects by applying a depth threshold value to the object overlap region; and
tracking the plurality of objects, including the separated objects, through an indexing process.
8. The method of claim 7,
wherein, in the detecting of the object overlap region, a region is determined to be an object overlap region when the area value of an object is greater than a preset area threshold value.
9. The method of claim 7,
wherein the separating of the overlapping objects comprises:
changing the depth threshold value until two or more objects are detected separately at the position of the object overlap region; and
re-detecting the plurality of objects, including the two or more separated objects, by applying the changed depth threshold value to the object overlap region.
10. The method of claim 7,
wherein the tracking of the plurality of objects through the indexing process comprises calculating the degree of association of each of the plurality of objects with its previous position and tracking individual objects in one image at the same time.
11. The method of claim 7,
wherein the separating of the overlapping objects comprises changing the depth threshold value until the area value of the object overlap region becomes smaller than a predetermined area threshold value.
12. The method of claim 7,
wherein the detecting of the object overlap region comprises correcting the depth information by compensating the depth value according to the position of each object based on the image capturing angle.
13. The method of claim 7,
wherein the detecting of the plurality of objects comprises detecting an area having a depth value greater than or equal to the depth threshold value as an object and processing an area having a depth value less than the depth threshold value as background.
KR1020150180702A 2015-12-17 2015-12-17 Apparatus and method for detecting entity in pen KR101793790B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150180702A KR101793790B1 (en) 2015-12-17 2015-12-17 Apparatus and method for detecting entity in pen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150180702A KR101793790B1 (en) 2015-12-17 2015-12-17 Apparatus and method for detecting entity in pen

Publications (2)

Publication Number Publication Date
KR20170072491A (en) 2017-06-27
KR101793790B1 KR101793790B1 (en) 2017-11-20

Family

ID=59514733

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150180702A KR101793790B1 (en) 2015-12-17 2015-12-17 Apparatus and method for detecting entity in pen

Country Status (1)

Country Link
KR (1) KR101793790B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102172347B1 (en) * 2019-11-11 2020-10-30 (주)이지팜 Method and system for determining health status of farm livestock

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101336139B1 (en) 2012-06-11 2013-12-05 동의대학교 산학협력단 System and method for motion estimating using depth camera
KR101568979B1 (en) 2014-05-22 2015-11-13 고려대학교 산학협력단 Cattle monitoring system and method using depth information

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190030082A (en) * 2017-09-13 2019-03-21 (주)제이케이데이터시스템즈 System and Method for measuring weight of poultry based on pattern recognition
CN113449638A (en) * 2021-06-29 2021-09-28 西藏新好科技有限公司 Pig image ideal frame screening method based on machine vision technology
CN113449638B (en) * 2021-06-29 2023-04-21 北京新希望六和生物科技产业集团有限公司 Pig image ideal frame screening method based on machine vision technology

Also Published As

Publication number Publication date
KR101793790B1 (en) 2017-11-20

Similar Documents

Publication Publication Date Title
JP6825569B2 (en) Signal processor, signal processing method, and program
CN111160302B (en) Obstacle information identification method and device based on automatic driving environment
US11195038B2 (en) Device and a method for extracting dynamic information on a scene using a convolutional neural network
US8867790B2 (en) Object detection device, object detection method, and program
US6956469B2 (en) Method and apparatus for pedestrian detection
US8611591B2 (en) System and method for visually tracking with occlusions
US7684590B2 (en) Method of recognizing and/or tracking objects
JP6125188B2 (en) Video processing method and apparatus
KR101766603B1 (en) Image processing apparatus, image processing system, image processing method, and computer program
CA3066502A1 (en) Determining positions and orientations of objects
KR100879623B1 (en) Automated wide area surveillance system using ptz camera and method therefor
US9619895B2 (en) Image processing method of vehicle camera and image processing apparatus using the same
CN103123687A (en) Fast obstacle detection
CN106228570B (en) A kind of Truth data determines method and apparatus
US10692225B2 (en) System and method for detecting moving object in an image
KR101793790B1 (en) Apparatus and method for detecting entity in pen
US20150178573A1 (en) Ground plane detection
JP2016152027A (en) Image processing device, image processing method and program
Naser et al. Shadowcam: Real-time detection of moving obstacles behind a corner for autonomous vehicles
US20190005344A1 (en) Part recognition method, information processing apparatus, and imaging control system
KR101827113B1 (en) Apparatus and method for detecting proximal entity in pen
KR101595334B1 (en) Method and apparatus for movement trajectory tracking of moving object on animal farm
Hadi et al. Fusion of thermal and depth images for occlusion handling for human detection from mobile robot
KR101827114B1 (en) Apparatus and method for detecting proximal entity in pen
CN111325073A (en) Monitoring video abnormal behavior detection method based on motion information clustering

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
N231 Notification of change of applicant
E701 Decision to grant or registration of patent right