WO2019212237A1 - Apparatus and method for detecting an abnormal entity - Google Patents

Apparatus and method for detecting an abnormal entity

Info

Publication number
WO2019212237A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
data
abnormal
abnormal object
Prior art date
Application number
PCT/KR2019/005226
Other languages
English (en)
Korean (ko)
Inventor
김민규
류승훈
신제용
Original Assignee
엘지이노텍 주식회사
Priority date
Filing date
Publication date
Application filed by 엘지이노텍 주식회사
Priority to KR1020197016267A (published as KR20200139616A)
Publication of WO2019212237A1

Classifications

    • G06T 7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06N 20/00 Machine learning
    • G06T 7/00 Image analysis
    • G06T 7/11 Region-based segmentation (G06T 7/10 Segmentation; edge detection)
    • G06T 7/20 Analysis of motion
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G06T 2207/20081 Training; learning (indexing scheme for image analysis or image enhancement; special algorithmic details)

Definitions

  • Embodiments of the present invention relate to an apparatus and a method for detecting abnormal objects.
  • Livestock raised in groups within a small kennel are very vulnerable to the spread of communicable disease.
  • Notifiable epidemics such as foot-and-mouth disease and avian influenza spread through the air, so once they occur, the social cost of containment and infection prevention is very high, and public anxiety about food spreads rapidly. If abnormal signs are found in the kennel, it is important to isolate diseased livestock as soon as possible to prevent the spread of disease.
  • the local machine collects the image data photographed in the kennel, and transmits the collected image data to the learning server.
  • the learning server may learn the image data received from the local machines and extract a parameter to be applied to the anomaly detection algorithm.
  • the local machine includes a camera, and all image data photographed by the camera is transmitted to the learning server. In this case, excessive communication traffic is generated when the local machine transmits the data, and the learning server must perform an excessive amount of computation to learn from all of it.
  • the present invention has been made in an effort to provide an abnormal object detecting apparatus and method capable of detecting an object having a high probability of disease in image data photographed inside a kennel.
  • An apparatus for detecting an abnormal object includes a communication processor communicating with a learning server, and a controller configured to receive a captured image and to output an image in which the pixel regions of objects with no detected motion are masked, wherein the communication processor transmits the captured image and a bitmap of the object density of the captured image to the learning server.
  • the bitmap may contain information in which the object density of the captured image has been corrected.
  • the communication processor communicates with an administrator terminal, and the correction may be made by the administrator terminal.
  • the apparatus for detecting an abnormal object includes a controller that extracts abnormal object information from a captured image in which a plurality of objects are photographed together, a communication unit that transmits the abnormal object information to a manager terminal and receives feedback information about it, and a learning preprocessor that generates learning data based on the feedback information.
  • the learning data includes image data extracted based on the feedback information.
  • the controller extracts the abnormal entity information using update information obtained by the learning server from learning the learning data.
  • the feedback information may indicate that there is an error in the abnormal entity information.
  • the feedback information may include information about the time region of the error and information about the spatial region of the abnormal entity information.
  • the image data based on the feedback information may be the image data of the time region and spatial region extracted from the captured image according to the feedback information.
  • the control unit may include a first feature extraction unit that extracts the position distribution of the plurality of objects in the captured image, a second feature extraction unit that extracts the movements of the plurality of objects in the captured image, and an abnormal entity information generating unit that generates abnormal entity data based on the position distribution extracted by the first feature extraction unit and the movement extracted by the second feature extraction unit.
  • the update information may include at least one of first update information applied to the first feature extraction unit, second update information applied to the second feature extraction unit, and third update information applied to the abnormal entity information generating unit.
  • An abnormal object detection method comprises the steps of: receiving an n-th image; outputting a first density map of the objects in the n-th image; outputting a motion map for the n-th image; outputting an abnormal object density map for the n-th image using the first density map, the motion map, and a previously stored abnormal object density map; and masking the area where an abnormal object is located in the n-th image using the abnormal object density map, wherein the n-th image and a second density map are transmitted to a learning server, and the second density map is information in which the object density of the n-th image has been corrected.
  • the second density map may be modified by the administrator terminal.
  • the method may further include outputting.
  • the corrected information includes at least one of error time region information and spatial region information of the abnormal object density map, and the second density map can be extracted using at least one of the time region information and the spatial region information.
  • the outputting of the abnormal object density map for a given image may include extracting the position distribution of the plurality of objects in the image using the image and first update information received from the learning server, extracting the movements of the plurality of objects in the image using the image and second update information received from the learning server, and generating the abnormal object density map of the image using the position distribution, the movements, and third update information received from the learning server.
  • An abnormal object detection system includes a learning server and an abnormal object detecting apparatus communicating with the learning server, wherein the abnormal object detecting apparatus comprises a communication processor communicating with the learning server, and a controller configured to receive a captured image and to output an image in which the pixel regions of objects with no detected motion are masked, and wherein the communication processor transmits the captured image and a bitmap of the object density of the captured image to the learning server.
  • the apparatus and method for detecting an abnormal object may detect an abnormal object having a high probability of disease on image data photographed inside a kennel.
  • since the amount of data transmitted to the learning server is reduced, excessive communication traffic can be prevented and the amount of computation in the learning server can be reduced.
  • FIG. 1 is a block diagram of an anomaly detection system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of a learning system for detecting abnormal objects according to an embodiment of the present invention.
  • FIG. 3 is a conceptual diagram of an apparatus for detecting an anomaly according to an embodiment of the present invention.
  • FIG. 4 is a block diagram of an anomaly detection apparatus according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a method for detecting an abnormal object of the apparatus for detecting an abnormal object according to an embodiment of the present invention.
  • FIG. 6 is a view for explaining the principle of the object density prediction network.
  • FIG. 7 is a diagram illustrating an example of displaying abnormal entity information on a manager terminal.
  • FIG. 8 is a diagram for explaining an example in which abnormal entity information is displayed for each block on a manager terminal.
  • FIG. 9 is a block diagram of a controller included in an anomaly detecting apparatus according to an embodiment of the present invention.
  • FIG. 10 is a view for explaining an abnormal object detection algorithm of the abnormal object detection apparatus according to the embodiment of the present invention.
  • FIG. 11 is a view for explaining a method in which the abnormal object detecting apparatus according to an embodiment of the present invention detects an abnormal object using the result of re-learning by the learning server.
  • terms including ordinal numbers such as "first" and "second" may be used to describe various components, but the components are not limited by these terms; the terms are used only to distinguish one component from another. For example, a second component may be referred to as the first component and, similarly, the first component may be referred to as the second component.
  • FIG. 1 is a block diagram of an abnormal object detection system according to an embodiment of the present invention
  • FIG. 2 is a block diagram of a learning system for detecting an abnormal object according to an embodiment of the present invention
  • FIG. 3 is a conceptual diagram of the abnormal object detecting apparatus according to an embodiment of the present invention.
  • an abnormal object detection system 1000 includes an abnormal object detecting apparatus 100, an administrator terminal 200, an air conditioning apparatus 300, and a learning server 400.
  • the learning system 2000 for detecting an abnormal object includes a plurality of abnormal object detecting devices 100 and a learning server 400.
  • the plurality of abnormal object detecting apparatuses 100 may be a plurality of abnormal object detecting apparatuses installed in one breeding ground, or may be a plurality of abnormal object detecting apparatuses installed in a plurality of breeding grounds.
  • the abnormality object detecting apparatus 100 may detect an environment in the kennel 10 and transmit it to at least one of the manager terminal 200 and the air conditioning apparatus 300.
  • the kennel 10 means a livestock breeding barn.
  • the livestock may be not only poultry such as chickens and ducks, but also various kinds of animals bred in groups such as cattle and pigs.
  • the abnormal object detection apparatus 100 extracts abnormal object information in the kennel 10.
  • the abnormal entity information may include at least one of information on the presence or absence of the abnormal entity, information on a spatial area in which the abnormal entity exists, and information on a time domain in which the abnormal entity exists.
  • the abnormal subject may refer to an individual who is not in a normal state due to a disease, pregnancy, or the like.
  • the abnormal object detecting apparatus 100 may include a photographing unit, and may acquire image data of an image in which a plurality of objects are photographed together using the photographing unit.
  • the term photographing unit may be used interchangeably with camera.
  • the abnormal object detecting apparatus 100 may extract abnormal object information from the image data by running a pre-stored algorithm.
  • the prestored algorithm may include a trained model.
  • the learned model may be a computer readable program and may be stored in a recording medium or a storage device executable by the computer.
  • a processor in a computer may read a program stored in a recording medium or a storage device, execute a program, that is, a trained model, calculate input information, and output a calculation result.
  • the input information may be image data
  • the operation result may be abnormal object information.
  • the abnormal object detecting apparatus 100 may be arranged for each kennel 10.
  • the abnormal object detecting apparatus 100 may include a plurality of photographing units 111, and the plurality of photographing units 111 may be disposed at various places in the kennel 10.
  • the plurality of photographing units 111 may be disposed at the upper and side portions of the kennel 10.
  • the abnormal object detection apparatus 100 may extract the abnormal object information by collecting a plurality of image data acquired by the plurality of photographing units 111.
  • a plurality of abnormal object detection apparatuses 100 may be disposed in one kennel 10.
  • the plurality of abnormal object detecting apparatuses 100 may be disposed at various places in the kennel 10, and each abnormal object detecting apparatus 100 may also extract abnormal object information using the individual image data obtained by its own photographing unit 111.
  • the abnormality object detecting apparatus 100 may communicate with the manager terminal 200 and the air conditioning apparatus 300 by wire or wirelessly.
  • the abnormal object detecting apparatus 100 is illustrated as communicating directly with each of the manager terminal 200 and the air conditioning apparatus 300, but is not limited thereto; the abnormal object detecting apparatus 100 may communicate only with the manager terminal 200, and the manager terminal 200 may communicate with the air conditioning apparatus 300.
  • the manager terminal 200 may be a personal computer (PC), a tablet PC, a mobile terminal, or the like, and the term may be used interchangeably with management server.
  • when the abnormal object detecting apparatus 100 transmits at least one of the environment information and the abnormal object information of the breeding ground 10 to the manager terminal 200, the manager can recognize at least one of the environment information and the abnormal object information of the breeding ground 10 through the screen output on the manager terminal 200.
  • when the abnormal object detecting apparatus 100 captures an abnormal situation in the kennel 10 and transmits it to the manager terminal 200, the manager can recognize through the screen output on the manager terminal 200 that an abnormal situation has occurred in the kennel 10 and respond to it at an early stage.
  • the abnormal situation may be, for example, the appearance of diseased livestock, pregnancy or growth of the livestock, or the humidity, temperature, or concentration of a specific molecule in the kennel 10 exceeding a threshold.
  • the air conditioning apparatus 300 is a device for controlling the temperature of the kennel 10.
  • when the abnormal object detecting apparatus 100 captures a temperature abnormality in the kennel 10 and transmits it to the manager terminal 200, the manager can recognize through the screen output on the manager terminal 200 that a temperature abnormality has occurred in the kennel 10, and can normalize the temperature in the kennel 10 by controlling the air conditioning apparatus 300.
  • the air conditioner 300 may directly normalize the temperature in the kennel 10.
  • the abnormality object detecting apparatus 100, the manager terminal 200, or the air conditioning apparatus 300 may detect a temperature abnormality in the kennel 10 and normalize the temperature in the kennel 10.
  • the air conditioning apparatus 300 may adjust the humidity of the kennel 10. When the humidity in the kennel 10 is abnormal, the air conditioning apparatus 300 may be controlled to normalize the humidity in the kennel 10.
  • the abnormal object detection apparatus 100 extracts abnormal object information by driving a previously stored algorithm.
  • the abnormal object detecting apparatus 100 may transmit the training data to the remote learning server 400 and extract the abnormal object information by applying the parameters received from the learning server 400 to the algorithm for detecting the abnormal object.
  • the learning server 400 receives the training data from the plurality of abnormal object detecting apparatuses 100 and re-trains on the training data to extract parameters.
  • the learning server 400 may learn the learning data using a deep learning technique, but is not limited thereto.
  • the learning server 400 may learn the learning data using various techniques and extract parameters.
  • the term abnormal object detecting apparatus 100 may be used interchangeably with local machine, and the learning server 400 may collect the training data from the plurality of abnormal object detecting apparatuses installed in the plurality of kennels.
  • the abnormality object detecting apparatus 100 pre-processes the image data, selects the training data, and transmits only the selected training data to the training server 400. Accordingly, communication traffic between the abnormality object detecting apparatus 100 and the learning server 400 can be reduced, and the amount of computation of the learning server 400 can be reduced.
  • FIG. 4 is a block diagram of an apparatus for detecting an abnormal object according to an embodiment of the present invention
  • FIG. 5 is a flowchart of a method for detecting an abnormal object in an apparatus for detecting an abnormal object according to an embodiment of the present invention.
  • the abnormal object detecting apparatus 100 may include a photographing unit 111, a control unit 112, a communication unit 113, a display unit 114, a user interface unit 115, an encoding unit 116, a database 117, a light source unit 118, a pan tilt unit 119, and a learning preprocessor 120.
  • the display unit 114, the user interface unit 115, the light source unit 118, and the pan tilt unit 119 may be omitted.
  • the photographing unit 111 may be implemented to include a lens and an image sensor, and at least one of the controller 112, the encoding unit 116, and the learning preprocessor 120 may be implemented by a computer processor or a chip.
  • the term database 117 may be used interchangeably with memory, and the term communication unit 113 with antenna or communication processor.
  • the photographing unit 111 may include one or a plurality of photographing units.
  • the photographing unit 111 may include at least one upper photographing unit disposed above the kennel 10 and at least one side photographing unit disposed at the side of the kennel 10.
  • Each of the upper photographing unit and the side photographing unit may be an IP camera that can communicate in a wired or wireless manner and transmit real-time images.
  • the photographing unit 111 may generate image data by photographing an image including a plurality of objects.
  • a plurality of individuals may mean poultry farmed in a kennel.
  • the term image data photographed by the photographing unit 111 may be used interchangeably with original data, original image, photographed image, and the like.
  • the photographing unit 111 may generate a plurality of image data using a plurality of images sequentially photographed.
  • the photographing unit 111 may generate first image data by capturing a first image including a plurality of objects, and may generate second image data by capturing a second image including a plurality of objects.
  • Each of the first image and the second image may be an image continuously photographed in time, and one image data may mean a single frame.
  • the photographing unit 111 may generate the first image data and the second image data by using the first image and the second image which are sequentially photographed.
  • the photographing unit 111 may be an image sensor photographing a subject using a complementary metal-oxide semiconductor (CMOS) module or a charge coupled device (CCD) module.
  • the photographing unit 111 may include a fisheye lens or a wide angle lens having a wide viewing angle. Accordingly, it is also possible for one photographing unit 111 to photograph the entire space inside the kennel 10.
  • the photographing unit 111 may be a depth camera.
  • the photographing unit 111 may be driven by any one of various depth recognition methods, and the depth information may be included in the image photographed by the photographing unit 111.
  • the photographing unit 111 may be, for example, a Kinect sensor.
  • the Kinect sensor is a depth camera of the structured light projection type: it projects a defined pattern image using a projector or a laser, obtains the projected pattern through its camera, and thereby acquires three-dimensional information about the scene.
  • These Kinect sensors include infrared emitters that irradiate patterns using infrared lasers, and infrared cameras that capture infrared images.
  • An RGB camera that functions like a typical webcam is disposed between the infrared emitter and the infrared camera.
  • the Kinect sensor may further comprise a pan tilt unit 119 for adjusting the angle of the microphone array and the camera.
  • the basic principle of the Kinect sensor is that when the laser pattern irradiated from the infrared emitter is projected and reflected on the object, the distance to the surface of the object is obtained using the position and size of the pattern at the reflection point.
  • the photographing unit 111 may generate the image data including the depth information for each object by irradiating the laser pattern to the space in the kennel and sensing the laser pattern reflected from the object.
  • the controller 112 controls the abnormal object detection apparatus 100 as a whole.
  • the control unit 112 extracts the abnormal object information from the image data for the image photographed with a plurality of objects by the photographing unit 111.
  • the abnormal object information may include at least one of information on the presence or absence of an abnormal object in the image data, information on the spatial region in which the abnormal object exists in the image data, and information on the time region in which the abnormal object exists in the image data.
  • the information about the spatial region in which the abnormal object exists in the image data may be coordinate information of the region in which the abnormal object exists or coordinate information of the abnormal object.
  • the abnormal entity information may be represented by region, block, or pixel, and the abnormal entity information may further include a probability that there is an abnormal entity by region, block, or pixel.
  • the term abnormal entity information may be used interchangeably with abnormal entity data.
  • the abnormal entity information may be information obtained from abnormal entity data.
  • the abnormal entity information may include abnormal entity data.
  • the control unit 112 may extract the abnormal object information by running the algorithm stored in advance in the database 117; the pre-stored algorithm and the parameters applied to it are the algorithm and parameters extracted as the result of learning by the learning server 400. Details of how the control unit 112 extracts abnormal object information are described later.
  • the controller 112 may perform an operation of the abnormality object detecting apparatus 100 by executing a command stored in the user interface 115 or the database 117, for example.
  • the controller 112 may control various operations of the abnormality object detecting apparatus 100 using a command received from the manager terminal 200.
  • the controller 112 may control the photographing unit 111 to track and capture a specific area in which an abnormal object is located.
  • the tracking shooting target may be set through the user interface unit 115 or may be set through a control command of the manager terminal 200.
  • the control unit 112 controls the photographing unit 111 to track and photograph a specific area in which an abnormal object exists to generate image data so that continuous monitoring can be performed.
  • the controller 112 may control the pan tilt unit 119 of the abnormal object detecting apparatus 100 to perform tracking shooting.
  • the pan tilt unit 119 may control a photographing area of the photographing unit 111 by driving two motors, a pan and a tilt.
  • the pan tilt unit 119 may adjust the directing direction of the photographing unit 111 to photograph a specific area under the control of the controller 112.
  • the pan tilt unit 119 may adjust the directing direction of the photographing unit 111 to track a specific object under the control of the controller 112.
  • the controller 112 receives the image photographed by the photographing unit 111, and may generate and output an image in which the pixels of objects for which no motion is detected, for example for a predetermined time, are masked.
  • an image in which a pixel with no detected motion is masked may mean an image in which that pixel is displayed so as to be distinguished from neighboring pixels, or is not displayed, in order to distinguish it from surrounding pixels. That is, in the present specification, masking may mean any general method of distinguishing a pixel from surrounding pixels, and may be interpreted to include not only the masking processing generally applied in the field of image processing, but also mosaic processing, transparent pixel processing, and the like.
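  • As an illustrative sketch only (not taken from the patent text), the masking step can be expressed in a few lines of NumPy, assuming a boolean per-pixel motion map has already been computed; the function name and the constant mask value are assumptions:

```python
import numpy as np

def mask_static_pixels(frame, motion_map, mask_value=0):
    """Return a copy of `frame` in which pixels with no detected motion
    are set to a constant value so they stand out from their neighbors.
    `motion_map` is a boolean H x W array, True where motion occurred."""
    masked = frame.copy()
    masked[~motion_map] = mask_value  # mask every pixel without motion
    return masked

# Example: a 4x4 gray frame where only the top-left 2x2 block moved.
frame = np.full((4, 4), 128, dtype=np.uint8)
motion = np.zeros((4, 4), dtype=bool)
motion[:2, :2] = True
print(mask_static_pixels(frame, motion))
```

  Swapping the constant assignment for a mosaic or transparency step would implement the other masking variants mentioned above.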
  • the communication unit 113 may perform data communication with at least one of the other abnormality object detecting apparatus, the manager terminal 200, or the learning server 400.
  • the communication unit 113 may perform data communication using long-range telecommunication technologies such as wireless LAN (WLAN), Wi-Fi, WiBro, WiMAX (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), IEEE 802.16, LTE (Long Term Evolution), and WMBS (Wireless Mobile Broadband Service).
  • the communication unit 113 may also perform data communication using short-range communication technologies such as Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, NFC (Near Field Communication), USB, Ethernet, serial communication, and optical/coaxial cable.
  • the communication unit 113 may perform data communication with another abnormality object detecting apparatus using a short range communication technology, and perform data communication with the manager terminal 200 or the learning server 400 using a long distance communication technology.
  • the present invention is not limited thereto, and various communication technologies may be used in consideration of various aspects of the kennel 10.
  • the communication unit 113 may transmit the image data photographed by the photographing unit 111 to the manager terminal 200, transmit the abnormal object information extracted by the controller 112 to the manager terminal 200, or transmit the result of matching the image data with the abnormal object information to the manager terminal 200.
  • the communication unit 113 may transmit the image photographed by the photographing unit 111 and a bitmap of the object density in the image to the learning server 400.
  • the bitmap transmitted to the learning server 400 may be a bitmap modified by the manager terminal 200.
  • the data transmitted through the communication unit 113 may be compressed data encoded through the encoding unit 116.
  • the display unit 114 may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, a 3D display, and an e-ink display.
  • the display unit 114 may display at least one of image data and distribution data in which pixel values are adjusted through the control unit 112.
  • the display unit 114 may output the image data photographed by the photographing unit 111 to the screen, or may output the result of detecting the image data and the abnormal object information on the screen.
  • the display unit 114 may output various user interfaces or graphic user interfaces on the screen.
  • the user interface 115 generates input data for controlling the operation of the abnormality object detecting apparatus 100.
  • the user interface unit 115 may include a keypad, a dome switch, a touch pad, a jog wheel, a jog switch, and the like.
  • when the display unit 114 and the touch pad form a mutual layer structure constituting a touch screen, the display unit 114 may be used as an input device in addition to an output device.
  • the user interface 115 may receive various commands for the operation of the abnormality object detecting apparatus.
  • the encoding unit 116 encodes the image data photographed by the photographing unit 111 or the processed image data processed through the control unit 112 into a digital signal.
  • the encoding unit 116 may encode image data according to H.264, H.265, Moving Picture Experts Group (MPEG), and Motion Joint Photographic Experts Group (M-JPEG) standards.
  • the database 117 may include at least one storage medium of a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), RAM (random access memory), SRAM (static random access memory), ROM (read-only memory), EEPROM (electrically erasable programmable read-only memory), and PROM (programmable read-only memory).
  • the anomaly detecting apparatus 100 may operate a web storage that performs a storage function of the database 117 on the Internet, or may operate in connection with the web storage.
  • the database 117 may store image data photographed by the photographing unit 111 and may store image data for a predetermined period of time.
  • the database 117 may store data and programs necessary for the abnormal object detection apparatus 100 to operate, and may store the algorithm and parameters applied by the controller 112 to extract the abnormal object information.
  • the database 117 may store various user interfaces (UIs) or graphical user interfaces (GUIs).
  • the light source unit 118 may irradiate light in a direction that is directed under the control of the controller 112.
  • the light source unit 118 may include at least one of a laser diode (LD) and a light emitting diode (LED).
  • the light source unit may emit light of various wavelength bands under the control of the controller.
  • the light source unit 118 may irradiate light in the infrared wavelength band for night photographing.
  • the light source unit 118 may irradiate light in the ultraviolet wavelength band for photochemotherapy of livestock in the kennel.
  • the learning preprocessor 120 extracts the learning data to be used for learning by the learning server 400 from the image data acquired by the photographing unit 111. A detailed method of extracting the training data by the learning preprocessor 120 will be described below.
  • the apparatus 100 for detecting an abnormal object acquires image data of an image in which a plurality of objects are photographed together using the photographing unit 111 (S500), and the controller 112 extracts the abnormal object information from the image data (S502).
  • one image data may mean a single frame, and the photographing unit 111 may generate a plurality of image data by using images sequentially photographed.
  • the controller 112 may extract the abnormal entity information using a pre-stored algorithm and a parameter applied thereto.
  • the pre-stored algorithm may detect the objects without motion by combining a first algorithm trained to indicate the area where an object is determined to be located in the image data with a second algorithm trained to detect motion using optical flow.
  • the first algorithm may use a real-time object detection method based on an algorithm indicating the area in which an object exists, or an algorithm indicating distribution information of objects, that is, density information of the areas where objects are located, may also be used.
  • the first algorithm may be all or part of the learned model in this specification.
  • the learned model may be a computer readable program and may be stored in a recording medium or a storage device that can be executed by the computer.
  • a processor in a computer may read a program stored in a recording medium or a storage device, execute a program, that is, a trained model, calculate input information, and output a calculation result.
  • the input of the trained model may be image data photographing one or a plurality of objects
  • the output of the trained model may be a density map of an object in the image data.
  • the term density map may be used interchangeably with position distribution data, density information, density image, and the like.
  • FIG. 6 is a diagram for describing the object density prediction network, which is an example of the first algorithm according to an embodiment of the present invention.
  • the object density prediction network is an example of a deep learning algorithm designed to display density information of an area where an object is located.
  • the object density prediction network may be an algorithm for inputting an original image into a convolution network-based learning machine and then outputting a density image represented by a gray scale probability map.
  • the convolutional network may be a neural network.
  • the image data photographing one or more objects becomes an input layer of the neural network
  • the neural network may perform an operation on the image data photographing one or more objects based on the learned weight coefficient.
  • the output layer of the neural network may be a density map for the objects. In this way, abnormal object information can be easily extracted even in a kennel environment, such as a chicken house, in which objects having a similar shape are accommodated at high density.
  • the object density prediction network may be learned using the original image and the density image, and may be learned by the training server of FIG. 11 to be described later.
  • the object density prediction network may include at least one convolution network (layer).
  • the convolutional network may classify the feature points of the image using at least one feature map (W).
  • the convolutional network may improve the performance of the object density prediction network through at least one pooler and / or activator.
  • the object density prediction network further includes a concatenator, which can concatenate and rearrange the output results of the at least one convolutional network and output density (distribution) information of the objects using the feature points of the original image.
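  • The patent does not disclose layer counts, channel widths, or kernel sizes, so the following PyTorch sketch is only one plausible shape for such a network: parallel convolution branches whose outputs a concatenator merges before a 1x1 projection to a single-channel gray-scale probability (density) map:

```python
import torch
import torch.nn as nn

class DensityPredictionNet(nn.Module):
    """Minimal sketch of a convolution-network-based density predictor."""
    def __init__(self):
        super().__init__()
        # Two branches with different receptive fields (illustrative).
        self.branch_small = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.branch_large = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=7, padding=3), nn.ReLU(),
        )
        # Concatenated features are projected to a density map in [0, 1].
        self.head = nn.Sequential(nn.Conv2d(32, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        # The concatenator merges branch outputs along the channel axis.
        features = torch.cat([self.branch_small(x), self.branch_large(x)], dim=1)
        return self.head(features)

net = DensityPredictionNet()
density_map = net(torch.randn(1, 3, 240, 320))  # -> shape (1, 1, 240, 320)
```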
  • At least one feature map (W) may be trained (tuned) to output density information of an object in a training server to be described later with reference to FIG. 11.
  • the input teacher data of the object density prediction network included in the learning server may be an original image and a learning label
  • the output may be a density image
  • the original image may be an image in which one or a plurality of objects are photographed by the photographing unit
  • the learning label may be an object density estimation image represented by a pseudo density 8-bit gray image or a probability map.
  • the density image is configured to have the same or a similar size (width x height) as the original image, with each pixel (or block) position corresponding between the two images, and the pixel value of the density image can represent the probability that an object (e.g., poultry) is present at the corresponding pixel.
  • the density image may be an 8-bit gray image or a probability map.
  • a density image may be generated by placing a white dot at the center of each labeled rectangular box and blurring the dot so that each object is represented as a blurred spot.
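  • A hedged sketch of that label-generation step, assuming box annotations and a Gaussian blur; the blur width `sigma` and the normalization are illustrative choices, not values from the patent:

```python
import cv2
import numpy as np

def make_density_label(shape, boxes, sigma=5.0):
    """Place a white dot at the center of each labeled rectangular box,
    then blur so every object becomes a soft spot; returns an 8-bit
    gray pseudo-density image. `boxes` holds (x1, y1, x2, y2) tuples."""
    label = np.zeros(shape, dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
        label[cy, cx] = 1.0
    label = cv2.GaussianBlur(label, (0, 0), sigma)  # kernel size from sigma
    if label.max() > 0:
        label /= label.max()                        # normalize to [0, 1]
    return (label * 255).astype(np.uint8)

label = make_density_label((240, 320), [(10, 20, 40, 60), (100, 80, 140, 130)])
```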
  • the communication unit 113 of the abnormal object detecting apparatus 100 transmits the abnormal object information acquired by the control unit 112 to the manager terminal 200 (S504), and the manager terminal 200 outputs the abnormal object information (S506). Accordingly, the manager can recognize the abnormal object information output on the manager terminal 200.
  • the term abnormal entity information may be used interchangeably with abnormal entity data.
  • the manager terminal 200 may display an image as shown in FIG. 7C. That is, the manager terminal 200 can display an image obtained by matching the original image photographed by the photographing unit 111, as shown in FIG. 7A, with the abnormal object data extracted by the controller 112, as shown in FIG. 7B.
  • the abnormal object data may be a chicken density estimation image, and may be displayed as a pseudo density 8 bit gray image, a probability map, or a bit map.
  • the abnormal object data is not data indicating an abnormality probability for each individual object, but data representing the probability that an abnormal object, trunk, or head is present in each divided region, block, or pixel of the image data.
  • the manager may transmit feedback information to the abnormal object detecting apparatus 100 through the manager terminal 200 (S508). For example, when the abnormal object detecting apparatus 100 determines that an object is abnormal even though it is not, or determines that an object is normal even though it is abnormal, the manager terminal 200 may transmit feedback information indicating that there is an error in the abnormal object information to the abnormal object detecting apparatus 100.
  • the feedback information may include at least one of the time region and the spatial region in which the abnormal object information is erroneous.
  • the error time region and spatial region may be selected or designated by the administrator, and may be input through the user interface unit of the manager terminal 200.
  • for example, when the manager terminal 200 displays abnormal object information 802, 804, 806, and 808 for four areas of the screen 800 and the manager determines that one or more pieces of abnormal object information (for example, 806) are erroneous, the manager may select and feed back the erroneous abnormal object information 806 through the user interface. Alternatively, when abnormal object information is not displayed despite an abnormal object appearing in the screen output by the manager terminal 200, the manager may designate and feed back the time region and spatial region in which the abnormal object appears.
  • the learning preprocessor 120 of the abnormal object detecting apparatus 100 extracts training data using the feedback information received from the manager terminal 200 (S510) and transmits the extracted training data to the learning server 400 (S512).
  • the training data may be part of the image data acquired in step S500.
  • the learning preprocessor 120 of the abnormal object detecting apparatus 100 may extract, from the image data acquired in step S500, the image data corresponding to the time region and spatial region included in the feedback information received from the manager terminal 200.
  • the training data may be a part of the n-th frame (for example, region 806 of FIG. 8(c)), or may be the n-th through n+3-th frames of the entire frame sequence.
  • the abnormal object detecting apparatus 100 may extract only the image data having an error in detecting the abnormal object from the entire image data acquired by the photographing unit 111 and transmit the extracted image data to the learning server 400. Accordingly, communication traffic between the abnormality object detecting apparatus 100 and the learning server 400 may be significantly reduced.
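  • A minimal sketch of that selection step, assuming the apparatus buffers recent frames and that the feedback carries a frame range and an (x1, y1, x2, y2) region; all names here are assumptions:

```python
import numpy as np

def extract_training_clip(frames, t_start, t_end, region):
    """Cut the stored frame buffer down to the erroneous time region
    (frame indices t_start..t_end, inclusive) and spatial region, so
    only this clip is sent to the learning server."""
    x1, y1, x2, y2 = region
    clip = [frames[t][y1:y2, x1:x2].copy() for t in range(t_start, t_end + 1)]
    return np.stack(clip)  # (num_frames, height, width[, channels])

# Example: frames n..n+3, cropped to the region flagged by the manager.
buffer = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(10)]
clip = extract_training_clip(buffer, t_start=4, t_end=7, region=(100, 60, 220, 180))
print(clip.shape)  # (4, 120, 120, 3)
```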
  • the training data may further include at least one of error information and correction information as well as extracted image data.
  • the error information may be information indicating that a determination by the controller is wrong, and the correction information may be information indicating a direction to be corrected.
  • the training data may include the extracted original image data and the abnormal object data after the error has been corrected. For example, if it is determined that some region (for example, 806 of FIG. 8(c)) is erroneous in the n-th through n+3-th frames, the training data may include the original image data for the n-th through n+3-th frames and the corresponding abnormal object data, where the abnormal object data may be the gray image after the error in the region determined to be erroneous (e.g., 806 of FIG. 8(c)) has been corrected.
  • the learning server 400 retrains using the training data received from the abnormal object detecting apparatus 100 (S514). That is, the learning server 400 can learn the relationship between the original image data for the n-th through n+3-th frames and the corrected gray images of the abnormal object data for the areas determined to be erroneous in those frames. The learning server 400 then transmits update information for detecting the abnormal object to the abnormal object detecting apparatus 100 (S516).
  • the update information may be a parameter applied to an algorithm for detecting an abnormal object, which may be in the form of an adjustable matrix.
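  • If the detection algorithm is a neural network like the sketch above, applying such update information could look like the following; this is a sketch assuming the update arrives as a partial mapping from parameter names to weight matrices:

```python
import torch

def apply_update_info(model, update_info):
    """Merge parameter tensors received from the learning server into
    the local detection model. `update_info` maps parameter names to
    updated weight matrices (a possibly partial state dict)."""
    state = model.state_dict()
    state.update({name: torch.as_tensor(value) for name, value in update_info.items()})
    model.load_state_dict(state)
```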
  • the apparatus 100 for detecting an abnormal object acquires image data of an image in which a plurality of objects are photographed together, as in step S500 (S518), and the controller 112 extracts the abnormal object information from the image data using the pre-stored algorithm and the update information (S520).
  • the abnormal object detecting apparatus 100 may detect the abnormal object by reflecting the feedback information of the manager terminal 200, and communication traffic between the abnormal object detecting apparatus 100 and the learning server 400 can be reduced.
  • the learning server 400 may transmit the same update information to the plurality of abnormal object detecting apparatuses 100. In this way, the result of re-learning from the error information obtained from one abnormal object detecting apparatus 100 can be reflected in the other abnormal object detecting apparatuses 100, improving the detection accuracy of the plurality of abnormal object detecting apparatuses 100 as a whole. Alternatively, since each abnormal object detecting apparatus 100 may be in a different environment, the update information may be specific to each abnormal object detecting apparatus 100.
  • the learning server 400 also stores the same algorithm, such as the object density prediction network, as the abnormal object detecting apparatus 100, and it is desirable that the learning server 400 relearn it using the original images corresponding to the error data.
  • FIG. 9 is a block diagram of a controller included in an abnormal object detecting apparatus according to an embodiment of the present invention.
  • FIG. 10 is a diagram illustrating the abnormal object detecting algorithm of the abnormal object detecting apparatus according to an embodiment of the present invention, and FIG. 11 is a diagram for explaining a method in which the abnormal object detecting apparatus according to an embodiment of the present invention detects an abnormal object using the result of re-learning by the learning server.
  • the control unit 112 includes a first feature extraction unit 600, a second feature extraction unit 602, and an abnormal entity information generation unit 604.
  • the first feature extraction unit 600 extracts the position distribution of the plurality of objects in the image data of the image photographed by the photographing unit 111, and the second feature extraction unit 602 extracts the movements of the plurality of objects in the image data.
  • the abnormal object information generating unit 604 then generates the abnormal object information, for example, by estimating the per-pixel probability of abnormality based on the position distribution extracted by the first feature extraction unit 600 and the movement extracted by the second feature extraction unit 602.
  • the first feature extracting unit 600 may generate position distribution data indicating the position distribution of the plurality of objects using the image data.
  • the location distribution of the plurality of objects may mean a density distribution of objects for each location, and the term location distribution data may be used interchangeably with density map.
  • the first feature extraction unit 600 may use a real-time object detection method based on a first algorithm trained to propose the area in which an object is located in the image data, that is, a region proposal algorithm.
  • for example, the first feature extraction unit 600 may generate first position distribution data indicating the position distribution of the plurality of objects using the first image data generated by photographing the plurality of objects, and may generate second position distribution data indicating the position distribution of the plurality of objects using the second image data.
  • the first position distribution data and the second position distribution data may be position distribution data of image data generated in time series.
  • the position distribution data does not indicate individual object positions, but is data indicating the probability that an object, trunk, or head exists in each divided region or block of the image data.
  • the position distribution data may be a heat map expressing a probability that an object exists in each pixel in a different color.
  • the first feature extraction unit 600 may detect an animal object from the image data using the object detection classifier.
  • the object detection classifier is trained by constructing a training DB from images of animal objects photographed in different postures and external environments, and builds its model of animal objects through various learning algorithms including SVM (Support Vector Machine), neural networks, and the AdaBoost algorithm.
  • the first feature extraction unit 600 may detect the edges of objects corresponding to the foreground against the previously photographed background of the kennel, and may detect animal objects by applying the object detection classifier to the areas of the image data where the foreground edges are found.
  • the second feature extraction unit 602 may generate motion data indicating the movement of the motion object among the plurality of objects using the image data.
  • the motion data is not data indicating the movement of an individual object, but data indicating whether motion exists in each divided region or block of the image data; the term motion data may be used interchangeably with motion map.
  • the motion data may be data indicating whether a motion exists in a pixel corresponding to each pixel.
  • the second feature extraction unit 602 may use a second algorithm trained to detect motion using optical flow.
  • the second feature extraction unit 602 may detect a movement at a specific point, a specific object, or a specific pixel on the distribution map using the single image data or the plurality of consecutive image data.
  • the second feature extraction unit 602 may generate first motion data indicating the movement of moving objects among the plurality of objects using the first image data, and may generate second motion data indicating the movement of moving objects among the plurality of objects using the second image data.
  • the first motion data and the second motion data may be motion data for a plurality of image data generated in time series.
  • the second feature extraction unit 602 may detect the movement of the moving object using the Dense Optical Flow method.
  • the second feature extraction unit 602 may detect a motion for each pixel by calculating a motion vector for all pixels on the image data.
  • in the Dense Optical Flow method, since motion vectors are calculated for all pixels, detection accuracy improves but the amount of computation is relatively large. Therefore, the Dense Optical Flow method can be applied where very high detection accuracy is required, such as a kennel where an abnormal situation is suspected or a kennel with a large number of individuals.
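  • For illustration, OpenCV's Farneback routine is one widely available dense optical flow implementation; the parameters and the motion threshold below are illustrative, not values from the patent:

```python
import cv2
import numpy as np

def dense_motion_map(prev_gray, next_gray, threshold=1.0):
    """Compute a motion vector for every pixel, then threshold the
    vector magnitude to obtain a per-pixel boolean motion map."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)  # (H, W) speed in pixels/frame
    return magnitude > threshold              # True where the pixel moved
```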
  • the second feature extraction unit 602 may detect the movement of the moving object using a sparse optical flow method.
  • the second feature extraction unit 602 may detect a motion by calculating a motion vector only for some of the characteristic pixels that are easy to track, such as edges in the image.
  • the Sparse Optical Flow method reduces the amount of computation but yields results only for a limited number of pixels. Therefore, the Sparse Optical Flow method may be applied to a kennel with a small number of individuals or to a specific area where objects do not overlap.
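  • A corresponding sketch using OpenCV's Lucas-Kanade tracker, which follows only corner-like points that are easy to track; the corner count and quality settings are illustrative assumptions:

```python
import cv2
import numpy as np

def sparse_motion_points(prev_gray, next_gray, max_corners=200):
    """Track only easy-to-follow feature points (e.g. corners/edges),
    trading spatial coverage for a much smaller amount of computation.
    Returns matched (old, new) point positions."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1
    return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
```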
  • the second feature extraction unit 602 may detect movement of the moving object using block matching.
  • the second feature extraction unit 602 may divide the image evenly or unequally, calculate a motion vector with respect to the divided region, and detect a motion.
  • the Block Matching method reduces the amount of computation because it calculates one motion vector per partitioned region, but detection accuracy can be relatively low because the results are produced per region rather than per pixel. Accordingly, the Block Matching method may be applied to a kennel with a small number of individuals or to a specific area where objects do not overlap.
  • the second feature extraction unit 602 may detect the movement of the moving object by using a continuous frame difference method.
  • the second feature extraction unit 602 may compare consecutive image frames pixel by pixel and detect motion by calculating the difference between them. Since the Continuous Frame Difference method detects motion using the difference between frames, the overall amount of computation is reduced, but the detection accuracy for large or overlapping objects may be relatively low. In addition, because the Continuous Frame Difference method does not distinguish the background image from moving objects, it may have relatively low accuracy. Therefore, the Continuous Frame Difference method may be applied to a kennel with a small number of objects or to a specific area where objects do not overlap.
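  • A sketch of the frame-difference variant; the binarization threshold is an assumption:

```python
import cv2

def frame_difference_mask(prev_gray, next_gray, threshold=25):
    """Per-pixel absolute difference of consecutive frames, thresholded
    into a binary motion mask; cheap, but slow-moving or overlapping
    objects may be missed."""
    diff = cv2.absdiff(prev_gray, next_gray)
    _ret, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask  # 255 where the pixel changed between frames
```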
  • the second feature extraction unit 602 may detect the movement of the moving object by using the background subtraction method.
  • the second feature extraction unit 602 may compare successive image frames for each pixel in a state where the background image is initially learned, and calculate a value corresponding to the difference to detect motion.
  • the Background Subtraction method learns the background image in advance so that the background can be distinguished from moving objects. A separate process of filtering the background image is therefore required, which increases the amount of computation but improves accuracy. Accordingly, the Background Subtraction method can be applied where very high detection accuracy is required, such as a kennel where an abnormal situation is suspected or a kennel with a large number of individuals. In the Background Subtraction method, the background image can be updated continuously.
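  • OpenCV's MOG2 subtractor is one common implementation of this idea and keeps updating its learned background as frames arrive; it is shown here purely as an example, with illustrative parameters:

```python
import cv2

# The background model is learned from recent history and updated
# continuously as new frames are applied.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

def background_subtraction_mask(frame):
    """Return a foreground (motion) mask; apply() also folds the new
    frame into the learned background model."""
    return subtractor.apply(frame)
```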
  • the second feature extraction unit 602 detects movement on the distribution map using an appropriate method according to the environment in the kennel and external settings.
  • the above motion detection methods are just examples, and any method capable of indicating the region (for example, a pixel or block) in which motion occurs in a frame may be used.
  • the process of generating position distribution data by the first feature extracting unit 600 and the process of generating motion data by the second feature extracting unit 602 may be performed simultaneously, in parallel, or sequentially. That is, the process of generating position distribution data by the first feature extracting unit 600 and the process of generating motion data by the second feature extracting unit 602 may be independently processed.
  • the abnormal object information generating unit 604 may generate the abnormal object data indicating the abnormal object by region, block, or pixel by comparing position distribution data and motion data of the image data by region, block, or pixel.
  • the abnormal object information generating unit 604 may generate, for example, first abnormal object data indicating the abnormal object by comparing the first position distribution data and the first motion data.
  • the abnormal object information generating unit 604 may generate first abnormal object data indicating information on objects for which no movement is detected on the first position distribution data by comparing the first position distribution data and the first motion data. That is, the abnormal object information generating unit 604 may estimate that an object whose motion is not detected on the first position distribution data, which indicates the positions of objects, is a diseased object, and may generate the first abnormal object data accordingly.
  • the first abnormal object data may refer to data obtained by determining whether an object is diseased by using the object's position distribution and motion detection information for single image data.
  • the abnormal object information generating unit 604 may compare the position distribution data and the motion data of a plurality of image data to calculate a cumulative motion-detection count and a cumulative motion-non-detection count for the plurality of objects, and may generate the abnormal object data according to these cumulative counts.
  • the abnormal object information generating unit 604 may generate the second abnormal object data by comparing the first abnormal object data, the second position distribution data, and the second motion data, for example.
  • the abnormal object information generating unit 604 may compare the first abnormal object data with the second position distribution data and the second motion data to calculate the cumulative motion-detection count and the cumulative motion-non-detection count of the plurality of objects, and may generate the second abnormal object data according to these cumulative counts. That is, the second abnormal object data may mean data obtained by determining whether an object is diseased by using the position information and motion detection information of the object accumulated over the plurality of image data.
  • the abnormal object information generating unit 604 may control the pixel display of the plurality of objects on the image data according to the motion detection accumulation count and the motion non-detection accumulation count of the abnormal object data.
  • the abnormal object information generating unit 604 may control pixel display of the plurality of objects on the image data according to, for example, the second abnormal object data.
  • the display of the pixel may include any scheme for visually distinguishing a pixel corresponding to an arbitrary point from other pixels, such as the saturation of the pixel, the intensity of the pixel, the color of the pixel, the outline of the pixel, and a mark display.
  • the display of the pixel may be controlled by adjusting the pixel value.
  • the pixel value may be adjusted in stages, and a pixel having a high pixel value may be displayed with greater visual emphasis than a pixel having a low pixel value.
  • the present invention is not limited thereto, and a pixel having a low pixel value may instead be set to be displayed with greater emphasis than a pixel having a high pixel value.
  • the pixel value may mean an abnormality probability of each pixel.
  • for convenience of description, one pixel is assumed here to represent one object, so that a pixel can be identified with the object in which motion is detected; in practice, a plurality of pixels represent one object. That is, in order to determine an abnormal situation even when only some body regions of poultry move, a method of controlling the display of pixels by detecting movement for each pixel is used.
  • the abnormal object information generating unit 604 may classify an object as an abnormal object when no movement of that object is detected, and classify it as a normal object when its motion is detected.
  • the update information may include at least one of first update information applied to the first feature extraction unit 600, second update information applied to the second feature extraction unit 602, and third update information applied to the abnormal object information generating unit 604. At least one of the first update information, the second update information, and the third update information may be update information extracted by the learning server 400 as a result of learning the training data transmitted by the abnormal object detection apparatus 100.
  • the training data may include a part of the image data acquired by the abnormal object detecting apparatus 100 and a part of the abnormal object data in which errors have been corrected; it may be obtained using feedback information from the manager terminal 200 on the abnormal object information that the abnormal object detecting apparatus 100 obtained by running the abnormal object detection algorithm.
  • at least one of the first update information, the second update information, and the third update information may include an adjustable matrix.
  • n-th image data is obtained.
  • the image data may be, for example, RGB data having a size of W × H (S1101).
  • the n-th image data may be used interchangeably with the terms n-th original data, n-th original image, n-th original image data, and the like.
  • the controller 112 of the abnormal object detection apparatus 100 detects an object from the n-th image data and generates position distribution data of the object with respect to the n-th image data (S1102).
  • the position distribution data may be generated for each region, for each block, or for each pixel, and the controller 112 may use the first algorithm trained to suggest a region in which the object is located in the image data.
  • the first algorithm may use a real-time object detection method based on an algorithm indicating the area where an object exists, as described above, or may use a method of representing distribution information of objects, that is, density information of the areas in which objects are located.
  • the position distribution data may be a first density map.
  • the update parameter ⁇ value is applied to the position distribution data.
  • the update parameter ⁇ value and the offset parameter ⁇ value may be applied at the same time.
  • α is a very small value, for example 0.001. That is, the position distribution data is controlled so that it is gradually displayed on the pixels only after accumulating for a long time.
  • the offset parameter ⁇ is a parameter for adjusting the accumulation of the position distribution data and may have a value of 1/10 to 1/100 of ⁇ (S1103).
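As a worked example of how slowly such a small α accumulates (using the update rule given later as Equation 1, with the motion term ignored and a stationary object assumed, so W_t = 1 and Pixel_0 = 0), the pixel value after t frames is 1 - (1 - α)^t:

```python
import math

# With alpha = 0.001, the number of frames needed for a motionless pixel to
# reach half of its maximum display value:
alpha = 0.001
target = 0.5

frames = math.log(1.0 - target) / math.log(1.0 - alpha)
print(f"about {frames:.0f} frames")  # roughly 693 frames
```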
  • the controller 112 of the abnormal object detecting apparatus 100 detects the movement of the object with respect to the n-th image data by comparing the (n-1)-th image data and the n-th image data.
  • the (n-1)-th image data may be stored in a latch circuit or a buffer circuit.
  • the control unit 112 of the abnormal object detection apparatus 100, for example the second feature extraction unit 602, generates motion data according to the detected movement (S1104).
  • the motion data may be a motion map.
  • the motion data may be generated for each region, block, or pixel, and the controller 112 may use a second algorithm trained to detect motion by using an optical flow.
  • the update parameter γ may be applied to the motion data.
  • γ is a parameter for adjusting the accumulation of motion data (S1105).
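A hedged sketch of producing such motion data with dense optical flow (OpenCV's Farneback method; the frame names are illustrative and the parameter values are common defaults, not values from the embodiment):

```python
import cv2
import numpy as np

prev_gray = cv2.cvtColor(cv2.imread("frame_n_minus_1.png"), cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(cv2.imread("frame_n.png"), cv2.COLOR_BGR2GRAY)

# Dense optical flow: one 2-D motion vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)

# The per-pixel magnitude of the motion vector serves as the motion data F_t.
motion_data = np.linalg.norm(flow, axis=2)
```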
  • the control unit 112 of the abnormal object detecting apparatus 100, for example the abnormal object information generating unit 604, adds the (n-1)-th abnormal object data to the position distribution data of the n-th image data (S1106), and generates the abnormal object data for the n-th image data by subtracting the motion data of the n-th image data (S1107).
  • the abnormal object data for the nth image data may be an nth abnormal object density map.
  • the apparatus 100 for detecting an abnormal object repeats steps S1101 to S1107, so that an object whose motion is detected is displayed lightly or close to its original color, while an object whose motion is not detected accumulates and can be controlled to be displayed darker or closer to red.
  • the n-th abnormal object data may be matched onto the n-th image data, that is, the n-th original data, and the image in which the n-th abnormal object data is matched onto the n-th original data may be displayed on the manager terminal 200.
  • the region where the abnormal object is located in the n-th image may be masked using the n-th abnormal object density map, and the masked image may be displayed on the manager terminal 200.
  • the operation of the abnormal object detecting apparatus 100 may apply the following Equation 1:
  • Pixel_t = Pixel_{t-1} × (1 - α) + α·W_t - F_t
  • in Equation 1, α is an update parameter and may be changed according to settings.
  • Pixel_t and Pixel_{t-1} are abnormal object data, and indicate the intensity of a pixel as a value for displaying the presence or absence of the abnormal object in that pixel.
  • Pixel_t and Pixel_{t-1} may be set to have a value between 0 and 1; the closer to 0, the closer to the original color (white), and the closer to 1, the closer to red.
  • Pixel_{t-1} is the abnormal object data of the previous frame, in which position distribution data and motion data are accumulated.
  • Pixel_t is the abnormal object data updated by applying the position distribution data and motion data of the current frame.
  • W_t may be the position distribution data of the current frame.
  • the position distribution data may have a value between 0 and 1 as a probability that an object exists in a corresponding pixel.
  • the update parameter ⁇ may be applied to the position distribution data.
  • α is a very small value, for example 0.001. That is, the position distribution data is controlled so that it is gradually displayed on the pixels only after accumulating for a long time.
  • F_t may be the motion data of the current frame.
  • the motion data is the absolute value of the motion vector and has a value of 0 or more; since the magnitude of the motion vector corresponds to the velocity of the object, it may also have a value of 1 or more. Because no separate parameter is applied to the motion data in Equation 1, the display of a pixel is initialized as soon as motion is detected in that pixel.
  • the operation of the abnormal object detecting apparatus may apply the following Equation 2:
  • Pixel_t = Pixel_{t-1} × (1 - α + β) + α·W_t - F_t
  • in Equation 2, an offset parameter β is added to Equation 1; description overlapping with Equation 1 is omitted.
  • the offset parameter ⁇ is a parameter for adjusting the accumulation of the position distribution data and may have a value of 1/10 to 1/100 of ⁇ .
  • the operation of the apparatus for detecting an abnormal object according to the embodiment may apply Equation 3 or Equation 4 below:
  • Pixel_t = Pixel_{t-1} × (1 - α) + α·W_t - γ·F_t   (Equation 3)
  • Pixel_t = Pixel_{t-1} × (1 - α + β) + α·W_t - γ·F_t   (Equation 4)
  • in Equation 3 and Equation 4, the motion data F_t is multiplied by the update parameter γ; content identical to Equations 1 and 2 is omitted.
  • the constant γ is a parameter that adjusts the accumulation of motion data.
  • the operation of the abnormal object detecting apparatus may apply Equation 5 below:
  • Pixel_t = max(0, Pixel_t′), where Pixel_t′ is the value given by the right-hand side of Equation 1, 2, 3, or 4   (Equation 5)
  • Equation 5 prevents the value of Equation 1, 2, 3, or 4 from falling below zero.
  • when the magnitude of the motion data F_t is greater than the sum of the other terms, the value of Equation 1, 2, 3, or 4 becomes a negative number less than 0, and Equation 5 corrects it so that it is displayed as 0.
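The sketch below combines Equations 1 through 5 into one update step (an editorial illustration under the assumption that the three inputs are same-sized arrays; with beta=0 and gamma=1 it reduces to Equation 1, beta>0 gives Equation 2, and gamma other than 1 gives Equations 3 and 4):

```python
import numpy as np

def update_abnormal_map(pixel_prev, w_t, f_t, alpha=0.001, beta=0.0, gamma=1.0):
    """One update step over H x W float arrays: pixel_prev is the accumulated
    abnormal object data, w_t the position distribution data (0..1), and f_t
    the motion data (motion-vector magnitude, 0 or more)."""
    pixel_t = pixel_prev * (1.0 - alpha + beta) + alpha * w_t - gamma * f_t
    # Equation 5: floor negative values at 0; the cap at 1 follows the
    # 0-to-1 display range described above.
    return np.clip(pixel_t, 0.0, 1.0)
```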
  • the first update information, the second update information, and the third update information received from the learning server 400 may be applied during the processes S1101 to S1107.
  • the first update information may be represented by at least one of the update parameter ⁇ value and the offset parameter ⁇ value as described above.
  • the abnormal object detecting apparatus 100 may extract the training data and transmit the training data to the learning server 400.
  • the training data may include a part of the image data acquired by the abnormal object detecting apparatus 100, and may be obtained using feedback information on the abnormal object information that the abnormal object detecting apparatus 100 extracted by running the abnormal object detection algorithm.
  • the feedback information may include information modified by the manager terminal 200.
  • the apparatus 100 for detecting an abnormal object may extract training data and transmit the training data to the learning server 400, and the learning server 400 may retrain, that is, update, the object density prediction network using the training data received from the abnormal object detection apparatus 100.
  • the training data may include image data indicated as having an error and abnormal object data in which the error has been corrected. For example, when it is detected that an abnormal object exists in the partial region 806 of the n-th frame even though no abnormal object actually exists there, the training data may include the image data of the n-th frame and the abnormal object data of the n-th frame in which the error of the partial region 806 has been corrected.
  • the abnormal object data in which the error is corrected may be information in which the density of the object has been corrected by the manager terminal 200; the density distribution of the object for each region, block, or pixel extracted by the abnormal object detection apparatus 100 may be called a first density map, and the density distribution of the object for each region, block, or pixel modified by the manager terminal 200 may be referred to as a second density map.
  • the learning server 400 compares the output image of its object density prediction network with the error-corrected image included in the training data, that is, the correct-answer image, to obtain a loss, and the variables (e.g., feature maps) of the object density prediction network can be trained (corrected) so as to minimize the loss.
  • the control unit 112 of the abnormal object detecting apparatus 100 may use the object density network retrained by the learning server 400 when generating the position distribution data, that is, in step S1102. That is, using the update information obtained when the learning server 400 retrains the object density network with the training data, for example the n-th image and the second density map, the control unit 112 of the abnormal object detecting apparatus 100 may output the abnormal object density map of a predetermined image after the n-th image.
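A hedged sketch of the learning server's retraining step (PyTorch is an assumed framework, `net` and `optimizer` are placeholders for the object density prediction network and its optimizer, and the mean-squared-error loss is an illustrative choice, as the embodiment does not name a specific loss):

```python
import torch
import torch.nn.functional as nnF

def retrain_step(net, optimizer, nth_image, second_density_map):
    """Correct the network by minimizing the loss between its predicted
    density map and the manager-corrected second density map."""
    optimizer.zero_grad()
    predicted = net(nth_image)                       # predicted density map
    loss = nnF.mse_loss(predicted, second_density_map)
    loss.backward()                                  # propagate the loss
    optimizer.step()                                 # adjust the variables
    return loss.item()
```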
  • FIG. 12(a) shows an input image. As shown in FIG. 12(a), a method that detects each object individually produces many errors when a plurality of objects are distributed in clusters, so the detection of abnormal objects also carries a large error.
  • FIG. 12(b) illustrates a result of processing the input image of FIG. 12(a) using the first algorithm according to an exemplary embodiment of the present invention.
  • to detect each object, a learning model is used that is trained on a probability map created by placing a white dot at the center of each labeled rectangular box and blurring the dot.
  • the trained model may output a probability map (for example, an 8-bit gray image representing the position distribution of the object) according to the possibility of the existence of the object in the input image.
  • the learning model may be used to detect an individual in consideration of the density of the individual, and may determine whether there is an abnormality for each individual.
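An illustrative sketch of constructing the dot-and-blur training target described above (the box coordinates, image size, and blur parameters are assumptions; only the dot-then-blur idea comes from the description):

```python
import cv2
import numpy as np

# Hypothetical labeled boxes as (x1, y1, x2, y2).
boxes = [(10, 20, 50, 60), (80, 40, 120, 90)]
target = np.zeros((240, 320), dtype=np.float32)

for x1, y1, x2, y2 in boxes:
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    target[cy, cx] = 1.0                       # white dot at each box center

target = cv2.GaussianBlur(target, (15, 15), sigmaX=3)   # blur into a density
gray8 = cv2.normalize(target, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```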
  • because clustered individuals are numerous, determining whether each individual is abnormal (a per-individual tracking method) involves a large amount of overhead. Therefore, using the following second algorithm, the presence or absence of abnormality in clustered individuals can be determined more simply and accurately.
  • FIG. 12(c) illustrates a result of processing the input image of FIG. 12(a) by the second algorithm according to the exemplary embodiment.
  • moving pixels may be found by extracting the optical flow from the input image using the dense optical flow method. The moving pixels are then removed from the 8-bit gray image of FIG. 12(b) to obtain an image representing pixels without motion, as shown in FIG. 12(c).
  • FIG. 12(d) is an image obtained by matching the image of FIG. 12(b) and the image of FIG. 12(c) with the input image of FIG. 12(a).
  • an image showing the areas without motion within the areas where a plurality of objects in the input image are clustered and distributed may thus be obtained. Accordingly, pixels in which motion is detected can be extracted from the input image, and from this it is possible to distinguish, within a cluster, the plurality of objects that are moving from the plurality of objects that are not moving. A sketch of this masking step follows below.
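A short sketch of that masking step, reusing `gray8` (the 8-bit probability map) and `motion_data` (the per-pixel flow magnitude) from the earlier sketches; the motion threshold of 1.0 is an assumption:

```python
import numpy as np

# Suppress moving pixels so that only motionless (possibly abnormal) pixels
# remain in the density image; both arrays are assumed to have the same shape.
moving = motion_data > 1.0
motionless_map = gray8.copy()
motionless_map[moving] = 0
```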
  • the term '~unit' used in the present embodiment refers to software or a hardware component such as a field-programmable gate array (FPGA) or an ASIC, and a '~unit' performs certain roles.
  • '~unit' is not meant to be limited to software or hardware.
  • a '~unit' may be configured to reside in an addressable storage medium or may be configured to run on one or more processors.
  • '~unit' thus includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • the functionality provided within the components and '~units' may be combined into a smaller number of components and '~units' or further separated into additional components and '~units'.
  • the components and '~units' may be implemented to run one or more CPUs in a device or a secure multimedia card.


Abstract

Disclosed is an abnormal entity detection device which, according to one embodiment, comprises: a communication processor for communicating with a learning server; and a control unit for receiving a captured image and outputting an image in which, among the entities present in the captured image, the pixel area where no motion is detected is masked, the communication processor transmitting to the learning server the captured image and a bit table of the entity density of the captured image.
PCT/KR2019/005226 2018-05-03 2019-04-30 Device and method for detecting an abnormal entity WO2019212237A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020197016267A KR20200139616A (ko) 2018-05-03 2019-04-30 이상 개체 검출 장치 및 방법

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20180051258 2018-05-03
KR10-2018-0051258 2018-05-03

Publications (1)

Publication Number Publication Date
WO2019212237A1 true WO2019212237A1 (fr) 2019-11-07

Family

ID=68386335

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/005226 WO2019212237A1 (fr) Device and method for detecting an abnormal entity

Country Status (2)

Country Link
KR (1) KR20200139616A (fr)
WO (1) WO2019212237A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102519715B1 (ko) * 2021-03-24 2023-04-17 주식회사 에어플 Road information providing system and road information providing method
KR102372508B1 (ko) * 2021-05-26 2022-03-08 한국교통대학교산학협력단 Abnormal object detection apparatus and operating method thereof
KR102347811B1 (ko) * 2021-05-31 2022-01-06 한국교통대학교산학협력단 Apparatus and method for detecting an abnormal-behavior object

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160171311A1 (en) * 2014-12-16 2016-06-16 Sighthound Inc. Computer Vision Pipeline and Methods for Detection of Specified Moving Objects
WO2016208875A1 (fr) * 2015-06-25 2016-12-29 에스케이텔레콤 주식회사 Method and apparatus for detecting a moving object using an image difference
US20170098126A1 (en) * 2014-07-07 2017-04-06 Google Inc. Method and system for detecting and presenting video feed
KR101944374B1 (ko) * 2018-02-12 2019-01-31 엘지이노텍 주식회사 Apparatus and method for detecting an abnormal object, and imaging device comprising the same


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FANG, ZHIJUN ET AL.: "Abnormal event detection in crowded scenes based on deep learning", MULTIMEDIA TOOLS AND APPLICATIONS, vol. 75, no. 22, 30 November 2016 (2016-11-30), pages 14617 - 14639, XP036082895, Retrieved from the Internet <URL:https://www.link.springer.com/article/10.1007/s11042-016-3316-3> [retrieved on 20190730], DOI: 10.1007/s11042-016-3316-3 *

Also Published As

Publication number Publication date
KR20200139616A (ko) 2020-12-14

Similar Documents

Publication Publication Date Title
WO2019235776A1 (fr) Device and method for determining an abnormal object
WO2019212237A1 (fr) Device and method for detecting an abnormal entity
WO2019083299A1 (fr) Device and method for managing a breeding place
WO2021091021A1 (fr) Fire detection system
WO2021221249A1 (fr) Intelligent livestock management system and method therefor
WO2020141888A1 (fr) Device for managing the environment of a breeding farm
WO2021095916A1 (fr) Tracking system capable of tracking the movement path of an object
WO2020017814A1 (fr) System and method for detecting an abnormal entity
US20190122383A1 (en) Image processing apparatus and method
WO2012005387A1 (fr) Method and system for tracking a moving object in a wide area using multiple cameras and an object-tracking algorithm
WO2020046038A1 (fr) Robot and associated control method
WO2019168323A1 (fr) Apparatus and method for detecting an abnormal object, and photographing device comprising same
WO2019182355A1 (fr) Smartphone, vehicle, and camera having a thermal image sensor, and display and detection method using same
WO2022080844A1 (fr) Apparatus and method for tracking an object using skeleton analysis
KR101944374B1 (ko) Apparatus and method for detecting an abnormal object, and imaging device comprising same
WO2017111257A1 (fr) Image processing apparatus and image processing method
WO2020256179A1 (fr) Marker for spatial recognition, method for aligning and moving a cart robot by spatial recognition, and cart robot
KR20190103510A (ko) Imaging device, and poultry management system and method comprising same
WO2022035054A1 (fr) Robot and control method therefor
WO2020005038A1 (fr) System and terminal for managing the environment of a breeding place, and associated method
WO2021049730A1 (fr) Electronic device for training an image recognition model and operation method thereof
WO2023158205A1 (fr) Noise removal from a surveillance camera image by means of AI-based object recognition
WO2020096192A1 (fr) Electronic device and control method therefor
WO2021066275A1 (fr) Electronic device and control method thereof
WO2020017799A1 (fr) Device and method for detecting an abnormal object, and imaging device comprising same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19795817

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19795817

Country of ref document: EP

Kind code of ref document: A1