WO2020017814A1 - System and method for detecting an abnormal entity - Google Patents


Info

Publication number
WO2020017814A1
Authority
WO
WIPO (PCT)
Prior art keywords
abnormal object
image
object detection
error
error event
Prior art date
Application number
PCT/KR2019/008473
Other languages
English (en)
Korean (ko)
Inventor
김근웅
신제용
Original Assignee
엘지이노텍 주식회사
Priority date
Filing date
Publication date
Application filed by 엘지이노텍 주식회사 (LG Innotek Co., Ltd.)
Publication of WO2020017814A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Definitions

  • An embodiment relates to an abnormal object detection system and method.
  • Livestock raised in groups in a confined kennel are highly vulnerable to the spread of communicable disease. For example, notifiable epidemics such as foot-and-mouth disease and avian influenza spread through the air, so once an outbreak occurs, the social cost of containment and prevention is very high, and public anxiety about food spreads rapidly. If abnormal symptoms are found in the kennel, it is therefore important to quarantine the diseased livestock as soon as possible to prevent the spread of the disease.
  • the local machine collects the image data photographed in the kennel, and transmits the collected image data to the learning server.
  • the learning server may learn the image data received from the local machines to extract parameters to be applied to the abnormal symptom detection algorithm.
  • An object of the present invention is to provide an apparatus and method for detecting abnormal objects that can detect an object having a high probability of disease in image data photographed inside a kennel.
  • the abnormal object detection system includes an abnormal object detection apparatus that detects an abnormal object by applying an image including information about the object to a deep-learning-based first abnormal object detection model and, when it is determined that an error event has occurred in the abnormal object detection result, transmits the first error event occurrence image in which the error event occurred to the user terminal;
  • and a learning server that receives error review information corresponding to the first error event occurrence image and trains a deep-learning-based second abnormal object detection model based on the first error event occurrence image and the error review information, wherein the abnormal object detection apparatus updates the first abnormal object detection model based on a learning result of the second abnormal object detection model.
  • the user terminal may provide the user with the first error event occurrence image and receive the error review information corresponding to the first error event occurrence image from the user.
  • the user terminal transmits the first error event occurrence image and the error review information to an expert terminal, and the expert terminal receives expert review information on the first error event occurrence image and the error review information,
  • the error review information may be transmitted to the learning server according to the expert review information.
  • the abnormal object detecting apparatus may include at least one of a camera, a sensor, a computer, and a server.
  • the abnormal object detection apparatus may update the first abnormal object detection model by applying a learning parameter generated according to a learning result of the second abnormal object detection model to the first abnormal object detection model.
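  • The parameter-based update described above can be pictured as a plain copy of learning parameters from the re-trained model into the deployed one. The following is an illustrative Python sketch, not the patent's implementation; the class and field names are assumptions.

```python
class DetectionModel:
    """Minimal stand-in for a deep-learning detection model: it simply
    holds a dictionary of named learning parameters (weights)."""
    def __init__(self, params):
        self.params = dict(params)

def update_first_model(first_model, second_model_params):
    """Apply the learning parameters generated by the second (re-trained)
    model to the first (deployed) model, overwriting matching entries."""
    for name, value in second_model_params.items():
        first_model.params[name] = value
    return first_model

# The deployed detector keeps serving while the learning server
# re-trains; only the parameter dictionary is shipped back.
deployed = DetectionModel({"conv1": 0.10, "fc": 0.50})
retrained_params = {"conv1": 0.12, "fc": 0.47}
update_first_model(deployed, retrained_params)
```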
  • the apparatus for detecting an abnormal object may determine that an error event has occurred when the number of abnormal-object detections within a preset time exceeds a preset threshold.
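  • The threshold rule just stated, counting detections within a preset time window against a preset threshold, could be realized with a sliding window over detection timestamps. This is a hedged sketch under assumed names, not the patent's implementation:

```python
from collections import deque

def make_error_event_checker(threshold, window_seconds):
    """Return a checker that records abnormal-object detection timestamps
    and reports an error event when the number of detections within the
    preset time window exceeds the preset threshold."""
    timestamps = deque()

    def record_detection(now):
        timestamps.append(now)
        # Drop detections older than the window.
        while timestamps and now - timestamps[0] > window_seconds:
            timestamps.popleft()
        return len(timestamps) > threshold  # True means an error event

    return record_detection

check = make_error_event_checker(threshold=3, window_seconds=10)
# Four detections inside 10 s exceed the threshold; the detection at
# t=30 falls in an otherwise empty window and does not.
events = [check(t) for t in [0, 2, 4, 6, 30]]
```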
  • the abnormal object detection apparatus may store an image for a predetermined period in a database.
  • the user terminal may receive images stored in the database, allow the user to select a region in which an abnormal object was not detected from among the stored images to generate a second error event occurrence image, and receive error review information corresponding to the second error event occurrence image.
  • in the abnormal object detection method, the abnormal object detection apparatus detects an abnormal object by applying an image including information on the object to the deep-learning-based first abnormal object detection model; when it is determined that an error event has occurred in the abnormal object detection result, the detection apparatus transmits the first error event occurrence image in which the error event occurred to a user terminal; the learning server receives error review information corresponding to the first error event occurrence image; the learning server trains a deep-learning-based second abnormal object detection model based on the first error event occurrence image and the error review information; and the abnormal object detection apparatus updates the first abnormal object detection model based on a learning result of the second abnormal object detection model.
  • the user terminal transmitting the first error event occurrence image and the error review information to an expert terminal, and the expert terminal receiving expert review information on the first error event occurrence image and the error review information; And transmitting, by the expert terminal, the error review information to the learning server according to the expert review information.
  • the abnormal object detecting apparatus may include at least one of a camera, a sensor, a computer, and a server.
  • the updating of the first abnormal object detection model may include applying a learning parameter generated according to a learning result of the second abnormal object detection model to the first abnormal object detection model.
  • in the transmitting of the first error event occurrence image to the learning server, it may be determined that the error event has occurred when the number of abnormal-object detections within a preset time exceeds a preset threshold.
  • the method may further include storing, by the apparatus for detecting an abnormal object, images for a predetermined period in a database.
  • the method may further include receiving error review information corresponding to the error event occurrence image.
  • the apparatus for detecting an abnormal object detects an abnormal object by applying an image including information about the object to the deep-learning-based first abnormal object detection model; when it is determined that an error event has occurred in the abnormal object detection result, it transmits the first error event occurrence image to the user terminal, and updates the first abnormal object detection model according to a learning result of the second abnormal object detection model, which is trained based on the first error event occurrence image and the error review information corresponding to it.
  • the first abnormal object detection model may be updated by applying a learning parameter generated according to a learning result of the second abnormal object detection model to the first abnormal object detection model.
  • when the number of detected abnormal objects within a preset time is greater than a preset threshold, it may be determined that the error event has occurred.
  • Images for a certain period of time can be stored in the database.
  • the first abnormal object detection model may also be updated through a learning result of the second abnormal object detection model based on a second error event occurrence image and the error review information corresponding to it.
  • for the second error event occurrence image and its error review information, the user terminal receives the images stored in the database, the user selects a region in which an abnormal object was not detected from among the stored images to generate the second error event occurrence image, and error review information corresponding to the second error event occurrence image is input.
  • the detection accuracy of the abnormal object may be improved.
  • the reliability of the training data may be improved by correcting the data in which an error occurs.
  • the manager can observe the situation of poultry in real time through a PC or smartphone.
  • the administrator can control the environment inside the kennel through alarms and can improve productivity through consistent management.
  • FIG. 1 is a diagram schematically showing a first embodiment of an anomaly detection system according to an embodiment of the present invention.
  • FIG. 2 is a view schematically showing a second embodiment of the anomaly detection system according to an embodiment of the present invention.
  • FIG. 3 is a diagram schematically showing a third embodiment of an anomaly detection system according to an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating a configuration of an imaging device according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating a configuration of an anomaly detecting apparatus according to an embodiment of the present invention.
  • FIG. 6 is a view showing the configuration of a user terminal according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating a configuration of a learning server according to an embodiment of the present invention.
  • FIG. 8 is a diagram illustrating a configuration of an expert terminal according to an embodiment of the present invention.
  • FIG. 9 is a flow chart according to the first embodiment of the method for detecting an abnormal object according to an embodiment of the present invention.
  • FIG. 10 is a flow chart according to a second embodiment of the method for detecting an abnormal object according to an embodiment of the present invention.
  • FIG. 11 is a block diagram of a first abnormal object detection model according to an embodiment of the present invention.
  • FIG. 12 is a diagram illustrating an abnormal object detection algorithm of the abnormal object detection apparatus according to the embodiment of the present invention.
  • FIG. 13 is a diagram for describing a method of detecting an abnormal object by the apparatus for detecting an abnormal object according to an embodiment of the present invention by using a result of re-learning by the learning server.
  • the technical idea of the present invention is not limited to the embodiments described and may be embodied in different forms; within the technical idea of the present invention, one or more of the components of the embodiments may be selectively combined and substituted.
  • terms such as first, second, A, B, (a), and (b) may be used to describe components.
  • when a component is described as being 'connected', 'coupled' or 'linked' to another component, this includes not only the case where the component is directly connected, coupled or linked to the other component, but also the case where it is 'connected', 'coupled' or 'linked' through a further component between them.
  • being formed or disposed on the 'top (above)' or 'bottom (below)' of a component includes not only the case where the two components are in direct contact with each other, but also the case where one or more further components are formed or disposed between the two components.
  • 'top (above)' or 'bottom (below)' may include the downward direction as well as the upward direction with respect to one component.
  • An abnormal object detection system according to an embodiment of the present invention will be described with reference to FIGS. 1 to 3.
  • The abnormal object detection system is a system for detecting abnormal objects among the objects in a kennel.
  • the kennel means a livestock farm,
  • and an individual (object) may mean a head of livestock.
  • the livestock may be not only poultry such as chickens and ducks, but also various kinds of animals bred in groups such as cattle and pigs.
  • the abnormal object detection system analyzes the measurement information of the imaging device 100 using an algorithm, detects an abnormal object in the kennel from the analysis result, and notifies the user of the abnormal object.
  • the abnormal object may refer to an individual that is not in a normal state due to disease, pregnancy, or the like.
  • the algorithm may be trained by transmitting the information on detecting the abnormal object and review information thereof to the learning server 400.
  • feedback on the error information is provided to a user and an expert, and the algorithm may be trained using this feedback to improve the detection accuracy of the algorithm for detecting the abnormal object.
  • FIG. 1 is a diagram schematically showing a first embodiment of an anomaly detection system according to an embodiment of the present invention.
  • the anomaly detection system includes an imaging device 100, an anomaly detection device 200, a user terminal 300, and a learning server 400.
  • the imaging apparatus 100 is an apparatus for photographing a plurality of objects in a kennel to generate an image.
  • the imaging device 100 may be implemented as a device such as a camera or a camcorder.
  • the imaging apparatus 100 may communicate with the abnormal object detecting apparatus 200 by wire or wirelessly, and transmit the generated image to the abnormal object detecting apparatus 200.
  • the imaging apparatus 100 may be arranged in plurality in a kennel. Each of the plurality of imaging devices 100 may communicate with the abnormal object detecting apparatus 200 to transmit an image to the abnormal object detecting apparatus 200. As another example, any one of the plurality of imaging apparatuses 100 may be communicatively connected to other imaging apparatuses 100 by wire or wirelessly. Any one imaging apparatus 100 may receive an image from another imaging apparatus 100 and transmit the received image to the abnormality object detecting apparatus 200.
  • images from the imaging apparatus 100 may be high-resolution and large. Transmitting the images captured by the plurality of imaging apparatuses 100 to the abnormal object detecting apparatus 200 in real time may therefore require a large bandwidth or reduce the data transmission speed. Accordingly, the imaging apparatus 100 may collect and encode the images before transmitting them to the abnormal object detecting apparatus 200, reducing the communication bandwidth and improving the transmission speed. Alternatively, the imaging apparatus 100 may perform a primary analysis of the breeding state using the images and transmit the processed data to the abnormal object detecting apparatus 200. That is, the imaging apparatus 100 may manage communication resources efficiently by selecting and transmitting only the images necessary for analyzing the state of the kennel.
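  • One way to realize the selective-transmission idea above is to score each frame for activity and drop static footage before encoding. The sketch below is illustrative only; the patent does not specify the selection criterion, and the motion scores are assumed to be precomputed (e.g. by frame differencing).

```python
def select_frames(frames, motion_threshold):
    """Keep only frames whose precomputed motion score meets the
    threshold, so largely static footage is not transmitted.
    `frames` is a list of (frame_id, motion_score) pairs."""
    return [frame_id for frame_id, score in frames
            if score >= motion_threshold]

# Frames as (id, motion score); the scores are made-up illustration values.
captured = [(1, 0.02), (2, 0.40), (3, 0.01), (4, 0.55)]
to_send = select_frames(captured, motion_threshold=0.10)
```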
  • the abnormal object detecting apparatus 200 analyzes the image received from the imaging apparatus 100 to detect an abnormal object and, when it is determined that an error event has occurred in the detection result, transmits the first error event occurrence image to the user terminal 300.
  • the abnormal object detecting apparatus 200 may transmit to the user terminal 300 not only the first error event occurrence image, but also images in which no error event occurred and images in which no abnormal object was detected.
  • the abnormal object detection apparatus 200 may store, for a predetermined period, not only the first error event occurrence image, but also images in which no error event occurred among the images in which an abnormal object was detected, and images in which no abnormal object was detected.
  • the abnormal object detection apparatus 200 may transmit the abnormal object detection information and the corresponding image to the learning server 400 as a result of detecting the abnormal object.
  • the abnormal object detection apparatus 200 detects the abnormal object in the image through a deep learning based first abnormal object detection model, that is, an algorithm for detecting the abnormal object.
  • the abnormal object detecting apparatus 200 may be implemented through a personal computer (PC), a tablet PC, a server, or the like, and may transmit and receive data to and from the imaging apparatus 100, the user terminal 300, and the learning server 400 by wire or wirelessly.
  • the user terminal 300 provides the user with the first error event occurrence image received from the abnormal object detection apparatus 200, and receives the user's error review information on the first error event occurrence image.
  • the error review information refers to a result of the user's examination of whether or not the abnormal object detection information for the first error event occurrence image is an error, that is, a result of determining whether or not an error is detected.
  • the user terminal 300 transmits the error review information of the user to the learning server 400, and at this time, the first error event occurrence image and the abnormal object detection information corresponding thereto may be transmitted together.
  • the user terminal 300 may also provide the user with images in which no error event occurred among the images in which an abnormal object was detected, and images in which no abnormal object was detected. The user may review whether the abnormal object detection apparatus 200 made an error in these images, that is, whether an error event should have been determined or an abnormal object should have been detected even though it was not.
  • the corresponding error review information may be input to the user terminal 300. According to the result of this review, the user terminal 300 may generate a second error event occurrence image and transmit it together with the corresponding error review information to the learning server 400.
  • the first error event occurrence image may mean an image in which a false detection (misdetection) of an abnormal object occurred,
  • and the second error event occurrence image may mean an image in which a non-detection (missed detection) of an abnormal object occurred.
  • to determine that such a misdetection or non-detection has occurred, the image must be reproduced for at least a predetermined time. Since the duration of abnormal behavior may differ between animals, the data sizes of these images may differ as well.
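  • The variable-length review images described above can be pictured as clips built by grouping consecutive detection frames, with a small gap tolerance so that brief pauses in the abnormal behavior do not split one event in two. An illustrative sketch under assumed names, not the patent's method:

```python
def group_into_clips(detection_frames, max_gap):
    """Group sorted frame indices in which an abnormal object was
    detected into clips; a new clip starts when the gap between
    consecutive detections exceeds `max_gap` frames.
    Returns (start_frame, end_frame) pairs of variable length."""
    clips = []
    for f in detection_frames:
        if clips and f - clips[-1][1] <= max_gap:
            clips[-1][1] = f          # extend the current clip
        else:
            clips.append([f, f])      # open a new clip
    return [tuple(c) for c in clips]

# Detections at frames 10-13 and 40-41 become two clips of
# different durations, hence different data sizes.
clips = group_into_clips([10, 11, 12, 13, 40, 41], max_gap=5)
```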
  • the learning server 400 trains the second abnormal object detection model based on the result of detecting the abnormal object received from the abnormal object detecting apparatus 200 and the corresponding image.
  • the second abnormal object detection model is trained based on the first error event occurrence image, the second error event occurrence image, and the error review information received from the user terminal 300.
  • the learning server 400 may train the second abnormal object detection model based on a result of detecting the abnormal object received from the abnormal object detecting apparatus 200 and an image corresponding thereto.
  • the second abnormal object detection model may be implemented by the same algorithm as the first abnormal object detection model.
  • the learning server 400 generates update information from the trained second abnormal object detection model and transmits the updated information to the abnormal object detection apparatus 200.
  • FIG. 2 is a view schematically showing a second embodiment of the anomaly detection system according to an embodiment of the present invention.
  • the abnormal object detection system includes an imaging device 100, an abnormal object detection device 200, a user terminal 300, and a learning server 400, and further includes an expert terminal 500. can do.
  • the user terminal 300 receives from the user error review information on the first error event occurrence image or error review information corresponding to the second error event occurrence image, and transmits the first error event occurrence image, the second error event occurrence image, the error review information, and the like to the expert terminal 500.
  • the expert terminal 500 receives expert review information on the first error event occurrence image, the second error event occurrence image, and the error review information received from the user terminal 300, and, according to the expert review information, transmits the first error event occurrence image, the second error event occurrence image, and the error review information to the learning server 400.
  • the expert means a person or group having expert knowledge about livestock in a breeding facility, such as a livestock specialist, and the expert terminal 500 refers to a terminal device used by such a person or group.
  • the learning server 400 trains the second abnormal object detection model based on the result of detecting the abnormal object received from the abnormal object detecting apparatus 200 and the corresponding image.
  • the second abnormal object detection model is trained based on the first error event occurrence image, the second error event occurrence image, and the error review information received from the expert terminal 500.
  • the learning server 400 may train the second abnormal object detection model based on a result of detecting the abnormal object received from the abnormal object detecting apparatus 200 and an image corresponding thereto.
  • the second abnormal object detection model may be implemented by the same algorithm as the first abnormal object detection model.
  • the learning server 400 generates update information from the trained second abnormal object detection model and transmits the updated information to the abnormal object detection apparatus 200.
  • FIG. 3 is a diagram schematically showing a third embodiment of an anomaly detection system according to an embodiment of the present invention.
  • the abnormal object detecting system includes the imaging apparatus 100, the abnormal object detecting apparatus 200, the user terminal 300, the learning server 400, and the expert terminal 500. ).
  • the abnormal object detection system illustrated in FIG. 3 is identical to that illustrated in FIG. 2 except that the abnormal object detection apparatus 200 is included in the imaging apparatus 100, so a detailed description is omitted.
  • the abnormal object detection apparatus 200 may be implemented in the form of a program, and may be implemented through a computer processor, a memory, a communication processor, or the like provided in the imaging apparatus 100.
  • since the learning server 400 needs to train the second abnormal object detection model on a large amount of data in FIGS. 1 to 3, it may be implemented through a high-capacity, high-performance server or a cloud. In addition, the learning server 400 may transmit and receive data to and from the abnormal object detection apparatuses 200 disposed in a plurality of farms and the user terminals 300 and expert terminals 500 corresponding to them. The learning server 400 may therefore improve the accuracy of the algorithm through the diverse learning data generated in a plurality of farms.
  • FIG. 4 is a diagram illustrating a configuration of an imaging device according to an embodiment of the present invention.
  • the imaging apparatus 100 may include a photographing unit 110, an encoding unit 120, a first database 130, a first communication unit 140, a first control unit 150, and a pan tilt unit 160.
  • the photographing unit 110 may be implemented to include a lens and an image sensor
  • at least one of the encoding unit 120 and the first control unit 150 may be implemented by a computer processor or a chip
  • the database may be a memory.
  • the first communication unit 140 may be implemented with an antenna or a communication processor.
  • the photographing unit 110 may include one or a plurality of photographing units.
  • the photographing unit 110 may include at least one upper photographing unit disposed at an upper portion of the kennel and at least one side photographing unit disposed at the side of the kennel.
  • Each of the upper photographing unit and the side photographing unit may be an IP camera that can communicate in a wired or wireless manner and transmit real-time images.
  • the photographing unit 110 may generate image data by photographing an image including a plurality of objects.
  • a plurality of individuals may mean poultry farmed in a kennel.
  • the image data photographed by the photographing unit 110 may be referred to interchangeably as original data, an original image, a photographed image, and the like.
  • the photographing unit 110 may generate a plurality of image data using a plurality of images sequentially photographed.
  • the photographing unit 110 may generate first image data by capturing a first image including a plurality of objects, and may generate second image data by capturing a second image including a plurality of objects.
  • the first image and the second image may each be images continuously photographed in time.
  • One image data may mean a single frame.
  • the photographing unit 110 may generate the first image data and the second image data by using the first image and the second image which are sequentially photographed.
  • the photographing unit 110 may be an image sensor photographing a subject using a complementary metal-oxide semiconductor (CMOS) module or a charge coupled device (CCD) module.
  • the photographing unit 110 may include a fisheye lens or a wide angle lens having a wide viewing angle. Accordingly, it is also possible for one photographing unit 110 to photograph the entire space inside the kennel.
  • the photographing unit 110 may be a depth camera.
  • the photographing unit 110 may be driven by any one of various depth recognition methods, and the image captured by the photographing unit 110 may include depth information.
  • the photographing unit 110 may be a Kinect sensor.
  • the Kinect sensor is a structured-light-projection depth camera; it can obtain three-dimensional information about a scene by projecting a defined pattern image using a projector or a laser and capturing the projected pattern with a camera.
  • such a Kinect sensor includes an infrared emitter that irradiates a pattern using an infrared laser, and an infrared camera that captures infrared images.
  • An RGB camera that functions like a typical webcam is disposed between the infrared emitter and the infrared camera.
  • the Kinect sensor may further include a pan tilt unit 160 for adjusting the angle of the microphone array and the camera.
  • the basic principle of the Kinect sensor is that when the laser pattern irradiated from the infrared emitter is projected onto and reflected by an object, the distance to the surface of the object is obtained from the position and size of the pattern at the reflection point.
  • the photographing unit 110 may generate the image data including the depth information for each object by irradiating the laser pattern into the space in the kennel and sensing the laser pattern reflected from the object.
  • the encoding unit 120 encodes the image data photographed by the photographing unit 110, or the processed image data processed by the first controller 150, into a digital signal.
  • the encoding unit 120 may encode image data according to the H.264, H.265, Moving Picture Experts Group (MPEG), or Motion Joint Photographic Experts Group (M-JPEG) standards.
  • the first database 130 may include at least one storage medium of a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), magnetic memory, a magnetic disk, an optical disk, random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and programmable read-only memory (PROM).
  • the abnormal object detecting apparatus 200 may operate a web storage that performs the storage function of the database on the Internet, or may operate in connection with such web storage.
  • the first database 130 may store image data photographed by the photographing unit 110, and may store image data for a predetermined period of time.
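  • Storing image data "for a predetermined period of time" implies pruning records once they age out of the retention window. A minimal sketch of that bookkeeping, with assumed names and timestamp units; the patent does not specify the retention mechanism:

```python
def prune_old_images(records, now, retention_seconds):
    """Keep only image records still within the retention period.
    `records` is a list of (timestamp, image_id) pairs."""
    return [(t, image_id) for t, image_id in records
            if now - t <= retention_seconds]

# Record at t=0 has aged out of a 100 s retention window at t=130;
# the two later records are kept.
stored = [(0, "img_a"), (50, "img_b"), (120, "img_c")]
kept = prune_old_images(stored, now=130, retention_seconds=100)
```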
  • the first database 130 may store data and programs required for the abnormal object detection apparatus 200 to operate, and may store the algorithm the first controller 150 requires to extract abnormal object information as well as the parameters applied to it.
  • the database may store various user interfaces (UIs) or graphical user interfaces (GUIs).
  • the first communicator 140 may perform data communication with at least one of the abnormal object detecting apparatus 200 and the other imaging apparatus 100.
  • the first communication unit 140 may perform data communication using long-range telecommunication technologies such as wireless LAN (WLAN), Wi-Fi, wireless broadband (WiBro), World Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), IEEE 802.16, Long Term Evolution (LTE), and Wireless Mobile Broadband Service (WMBS).
  • the first communication unit 140 may also use short-range communication technologies such as Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, and Near Field Communication (NFC).
  • data communication may be performed using short-range communication technologies such as USB communication, Ethernet, serial communication, and optical / coaxial cable.
  • the first communication unit 140 may perform data communication with another imaging apparatus 100 using a short-range communication technology, and with the abnormal object detection apparatus 200, the user terminal 300, or the learning server 400 using a long-range communication technology; however, the present invention is not limited thereto, and various communication technologies may be used in consideration of the circumstances of the kennel.
  • the first communication unit 140 may transmit the image data photographed by the photographing unit 110 to the abnormality object detecting apparatus 200.
  • the first communication unit 140 may transmit a first error event occurrence image to the user terminal 300 and the learning server 400.
  • Data transmitted through the first communication unit 140 may be compressed data encoded through the encoding unit 120.
  • the first controller 150 controls the image capturing apparatus 100 as a whole.
• the first controller 150 may control the operation of the imaging apparatus 100 by, for example, executing commands stored in the database.
  • the first controller 150 may control various operations of the abnormality object detecting apparatus 200 using a command received from the manager terminal.
• the first controller 150 may control the photographing unit 110 to track and photograph the specific region in which the abnormal object is located.
  • the tracking target may be set through a control command of the abnormal object detecting apparatus 200.
  • the first controller 150 controls the photographing unit 110 to track and photograph a specific area in which an abnormal object is generated to generate image data, thereby enabling continuous monitoring.
  • the first control unit 150 may control the pan tilt unit 160 of the abnormal object detection apparatus 200 to perform tracking shooting.
• the pan tilt unit 160 may adjust the photographing region of the photographing unit 110 by driving two motors, one for pan and one for tilt.
  • the pan tilt unit 160 may adjust the directing direction of the photographing unit 110 to photograph a specific area under the control of the first controller 150.
  • the pan tilt unit 160 may adjust the directing direction of the photographing unit 110 to track a specific object under the control of the first controller 150.
  • FIG. 5 is a diagram illustrating a configuration of an anomaly detecting apparatus according to an embodiment of the present invention.
• the apparatus for detecting an anomaly 200 includes a second communication unit 210, a detector 220, an error determiner 230, and an updater 240.
  • the second communication unit 210 may perform data communication with at least one of the imaging apparatus 100, the user terminal 300, and the learning server 400.
  • the second communication unit 210 may receive an image from the image capturing apparatus 100, and may transmit abnormal object detection information and an image corresponding thereto to the learning server 400.
  • the second communication unit 210 may transmit the first error event occurrence image or the image stored in the second database to the user terminal 300.
  • the communication technology used by the second communication unit 210 to perform data communication is the same as that of the first communication unit 140, and thus detailed description thereof will be omitted.
  • the detection unit 220 detects an abnormal object from an image including information on the object received from the imaging apparatus 100.
  • the detector 220 detects an abnormal object in the image by applying an image including information about the object to the deep learning-based first abnormal object detection model. A process of detecting an abnormal object in an image using the first abnormal object detection model will be described in detail with reference to the accompanying drawings.
• the error determination unit 230 determines whether an error event occurs in the abnormal object detection result. For example, the error determination unit 230 may determine that an error event has occurred when an abnormal object is repeatedly detected more than a preset number of times within a predetermined time, or when the number of detected abnormal objects increases beyond a predetermined number within a predetermined time.
• the error determiner 230 may determine that an error event has occurred if an abnormal object is no longer detected, for a predetermined time or more, in a portion of the image previously detected as containing the abnormal object. For example, when an object that had been detected in the image for more than 3 minutes suddenly stops being detected, this may indicate that an error has occurred in the abnormal object detection result due to the movement of another object. However, when an abnormal object is detected again within a predetermined time in the region where detection suddenly stopped, it may be determined that an error event has not occurred. For example, when an object has been detected in region A of the image for more than 10 minutes and then is suddenly no longer detected, the error determiner 230 may determine that an error event has occurred in region A. However, if an abnormal object is detected again in region A after 1 minute, the error determination unit 230 may determine that an error event has not occurred in region A.
• the error determiner 230 may determine whether an error event occurs by calculating a probability value. For example, an error event may be determined to have occurred when the calculated probability of an error event for a specific image reaches a threshold, such as 75%.
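For illustration only, the error-event rules described above (repeated detection within a time window, and a probability threshold) can be sketched as follows. The class name, thresholds, and region labels are hypothetical assumptions for demonstration, not values from this specification.

```python
from collections import deque

class ErrorEventDetector:
    """Minimal sketch of the error-event rules described above.

    All thresholds (max_repeats, window_sec, prob_threshold) are
    illustrative assumptions, not values from the specification.
    """

    def __init__(self, max_repeats=5, window_sec=60.0, prob_threshold=0.75):
        self.max_repeats = max_repeats        # repeated-detection limit
        self.window_sec = window_sec          # sliding time window (seconds)
        self.prob_threshold = prob_threshold  # probability cutoff
        self.history = deque()                # (timestamp, region) events

    def record_detection(self, region, timestamp):
        """Record one abnormal-object detection for a region."""
        self.history.append((timestamp, region))
        # Drop events that fell out of the sliding time window.
        while self.history and timestamp - self.history[0][0] > self.window_sec:
            self.history.popleft()

    def repeated_detection_error(self, region):
        """Error event: same region detected more than max_repeats times
        within the window."""
        count = sum(1 for _, r in self.history if r == region)
        return count > self.max_repeats

    def probability_error(self, probability):
        """Error event: calculated error probability reaches the cutoff."""
        return probability >= self.prob_threshold

det = ErrorEventDetector(max_repeats=3, window_sec=60.0)
for i in range(5):
    det.record_detection("region_A", i * 10.0)
print(det.repeated_detection_error("region_A"))  # True (5 > 3)
print(det.probability_error(0.75))               # True (reaches cutoff)
```

In a real system the disappearance rule of the preceding paragraph would be added as a third check over the same history; it is omitted here to keep the sketch short.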
  • the updater 240 updates the first abnormal object detection model based on the learning result of the second abnormal object detection model.
• the updater 240 receives the learning parameters generated from the learning result of the second abnormal object detection model from the learning server 400, and then updates the first abnormal object detection model by applying them to it.
• the learning parameter may be implemented as a tensor.
  • the learning parameter may be a set of parameters that each node constituting the layer included in the second abnormal object detection model has. Therefore, the updater 240 may update the first abnormal object detection model by applying the learning parameter to the corresponding nodes.
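As a rough illustration of applying learning parameters to the corresponding nodes, the model below is represented as a plain dictionary mapping layer names to numpy tensors. This dictionary structure and the layer names are assumptions standing in for the actual model nodes.

```python
import numpy as np

# Hypothetical parameter sets: layer name -> weight tensor. The layer
# names and shapes are illustrative, not taken from the specification.
first_model_params = {
    "conv1.weight": np.zeros((4, 3, 3, 3)),
    "fc.weight": np.zeros((10, 16)),
}

# Update information received from the learning server: the parameters
# learned by the second abnormal object detection model.
learned_params = {
    "conv1.weight": np.ones((4, 3, 3, 3)),
    "fc.weight": np.full((10, 16), 0.5),
}

def apply_update(model_params, update):
    """Overwrite matching layers of the first model with learned tensors."""
    for name, tensor in update.items():
        # Only apply a tensor to a node with the same name and shape.
        if name in model_params and model_params[name].shape == tensor.shape:
            model_params[name] = tensor.copy()
    return model_params

first_model_params = apply_update(first_model_params, learned_params)
print(first_model_params["fc.weight"][0, 0])  # 0.5 (updated from server)
```

Because the two detection models are implemented with the same algorithm, a shape-checked name-for-name copy like this is sufficient; a framework such as PyTorch would express the same idea with `load_state_dict`.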
  • the abnormality object detecting apparatus 200 may further include a second database, and the second database may store an image received from the imaging apparatus 100 for a predetermined period of time.
  • the second database may store an image of the livestock, etc., for the period until the livestock is shipped.
  • the second database may store abnormal object detection information corresponding to the image.
  • the data stored in the second database can be used to track the onset of disease in the event of future outbreaks of livestock epidemics.
  • FIG. 6 is a view showing the configuration of a user terminal according to an embodiment of the present invention.
• the user terminal 300 includes a third communication unit 310, a first display unit 320, a first input unit 330, and a second control unit 340.
  • the third communication unit 310 may perform data communication with at least one of the abnormality object detecting apparatus 200, the expert terminal 500, and the learning server 400.
• the third communication unit 310 may receive the first error event occurrence image from the abnormal object detection apparatus 200, and may also receive, from among the images stored in the second database, images in which no error event occurred and images in which no abnormal object was detected.
  • the third communication unit 310 may transmit the error review information input through the first input unit 330 and the image corresponding thereto to the learning server 400 or the expert terminal 500.
  • the communication technology used by the third communication unit 310 to perform data communication is the same as that of the first communication unit 140, and thus detailed description thereof will be omitted.
  • the first display unit 320 outputs an image and provides the image to the user.
  • the first display unit 320 may output the first error event occurrence image received from the abnormal object detection apparatus 200 through the liquid crystal screen of the user terminal 300.
  • the first display unit 320 may output the abnormal object detection information and the error event information received together with the first error event occurrence image.
• the first display unit 320 may output the first error event occurrence image in real time. In addition, the images may be displayed in the form of a timeline so that the user can select and view them through the first input unit 330.
  • the first display unit 320 may output an image stored in the second database, that is, an image in which an error event does not occur and an image in which an abnormal object is not detected.
• the first display unit 320 may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, a 3D display, and an e-ink display.
• the first display unit 320 may indicate, via sound or tactile feedback, that an image has been received from the abnormal object detecting apparatus 200. Accordingly, the first display unit 320 may include a sound output device such as a speaker or a vibration generating device such as a vibration bell.
• the first input unit 330 receives the error review information from the user. For example, if the user determines that the abnormal object detection information of the first error event occurrence image is a false detection, the first input unit 330 may receive that false-detection judgment from the user as error review information. In addition, the first input unit 330 may receive various commands for the operation of the abnormal object detecting apparatus 200. For example, after receiving the error review information, the first input unit 330 may receive from the user whether to transmit the error review information to the expert terminal 500 or the learning server 400. The first input unit 330 may also receive a command for outputting, to the first display unit 320, images in which no error event occurred and images in which no abnormal object was detected.
  • the first input unit 330 may output various user interfaces or graphic user interfaces for receiving error review information and various commands on the LCD screen.
  • the first input unit 330 may include a keypad, a dome switch, a touch pad, a jog wheel, a jog switch, and the like.
• since the first display unit 320 and a touch pad may form a mutual layer structure constituting a touch screen, the first display unit 320 may be used as an input device in addition to an output device.
  • the second controller 340 controls the overall operation of the user terminal 300.
• the second controller 340 may control the first display unit 320 to output an image received through the third communication unit 310, and may control the third communication unit 310 to transmit the error review information input through the first input unit 330, together with the corresponding image, to the learning server 400 or the expert terminal 500.
  • the second control unit 340 may include a computing device such as a central processing unit (CPU) or a micro control unit (MCU).
  • FIG. 7 is a diagram illustrating a configuration of a learning server according to an embodiment of the present invention.
• the learning server 400 includes a fourth communication unit 410, a preprocessor 420, a learner 430, and an update information generator 440.
  • the fourth communication unit 410 may perform data communication with at least one of the abnormality object detecting apparatus 200, the user terminal 300, and the expert terminal 500.
  • the fourth communication unit 410 may receive the abnormal object detection information and the corresponding image from the abnormal object detection apparatus 200, and transmit the learning parameter, that is, the update information, to the abnormal object detection apparatus 200.
  • the fourth communication unit 410 may receive a first error event occurrence image, a second error event occurrence image, error review information, and expert review information from the user terminal 300 or the expert terminal 500.
  • the communication technology used by the fourth communication unit 410 to perform data communication is the same as that of the first communication unit 140, and thus detailed description thereof will be omitted.
• the preprocessor 420 extracts the training data to be used by the learner 430, using the image received from the abnormal object detection apparatus 200 and the abnormal object detection information corresponding to the received image, as well as the first error event occurrence image, the abnormal object detection information corresponding to the first error event occurrence image, and the error review information.
  • the learner 430 trains the second abnormality object detection model using the training data extracted by the preprocessor 420.
  • the second abnormal object detection model may be implemented by the same algorithm as the first abnormal object detection model of the abnormal object detection apparatus 200.
  • the update information generator 440 extracts a learning parameter from the trained second abnormal object detection model to generate update information.
  • the update information may be generated in real time or at regular time intervals.
  • FIG. 8 is a diagram illustrating a configuration of an expert terminal according to an embodiment of the present invention.
• the expert terminal 500 includes a fifth communication unit 510, a second display unit 520, a second input unit 530, and a third control unit 540.
  • the fifth communication unit 510 may perform data communication with at least one of the user terminal 300 and the learning server 400.
  • the fifth communication unit 510 may receive a first error event occurrence image, a second error event occurrence image, and error review information from the user terminal 300.
  • the fifth communication unit 510 may transmit the first error event occurrence image, the second error event occurrence image, and expert review information to the learning server 400.
  • the communication technology used by the fifth communication unit 510 to perform data communication is the same as that of the first communication unit 140, and thus detailed description thereof will be omitted.
  • the second display unit 520 outputs an image and provides it to the expert.
  • the second display unit 520 may output the first error event occurrence image or the second error event occurrence image received from the user terminal 300 through the LCD screen of the expert terminal 500.
  • the second display unit 520 may output the abnormal object detection information and the error event information received together with the first error event occurrence image and the second error event occurrence image.
• the second display unit 520 may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, a 3D display, and an e-ink display.
• the second display unit 520 may indicate, via sound or tactile feedback, that an image has been received from the abnormal object detecting apparatus 200.
  • the second display unit 520 may include a sound output device such as a speaker or a vibration generator such as a vibration bell.
  • the second input unit 530 receives expert review information from an expert.
• the second input unit 530 may receive expert review information on the first error event occurrence image and the user's error review information corresponding to it, or on the second error event occurrence image and the user's error review information corresponding to it. For example, if the expert determines that the user's error review information regarding the second error event occurrence image is wrong, the second input unit 530 may receive, from the expert, review information indicating that the error review information should not be transmitted to the learning server 400.
  • the second input unit 530 may output various user interfaces or graphic user interfaces for receiving error review information and various commands on the LCD screen.
  • the second input unit 530 may include a keypad, a dome switch, a touch pad, a jog wheel, a jog switch, and the like.
• since the second display unit 520 and a touch pad may form a mutual layer structure constituting a touch screen, the second display unit 520 may be used as an input device in addition to an output device.
  • the third controller 540 controls the overall operation of the expert terminal 500.
  • the third controller 540 may control whether to transmit the error review information to the learning server 400 according to the input expert review information.
  • the third control unit 540 may include a computing device such as a central processing unit (CPU) or a micro control unit (MCU).
  • FIG. 9 is a flow chart according to the first embodiment of the method for detecting an abnormal object according to an embodiment of the present invention.
  • FIG. 9 illustrates an abnormal object detection method using the abnormal object detection system shown in FIG. 1.
  • the apparatus 200 for detecting an abnormal object acquires an image including information on an object received from the imaging apparatus 100 (S100), and detects an abnormal object from the obtained image (S102).
  • one image data may mean a single frame, and the imaging apparatus 100 may generate a plurality of image data by using images sequentially photographed.
  • the abnormal object detection apparatus 200 may extract the abnormal object information using a pre-stored algorithm and a parameter applied thereto.
• the pre-stored algorithm is the first abnormal object detection model, which may comprise a first algorithm trained to display the region in which an object is determined to be located in the image data, and a second algorithm trained to detect motion using optical flow, that is, to display objects with no movement among the objects.
• the first algorithm may use a real-time object detection method based on an algorithm that indicates the area where an object exists, or may use distribution information of the objects, that is, density information of the areas where objects are located.
• the abnormal object detection apparatus 200 determines whether an error event has occurred in the abnormal object detection result (S104). For example, the abnormal object detecting apparatus 200 may determine that an error event has occurred when an abnormal object is repeatedly detected more than a preset number of times within a predetermined time, or when the number of detected abnormal objects increases beyond a predetermined number within a predetermined time.
  • the abnormal object detection apparatus 200 transmits the first error event occurrence image in which the error event occurs to the user terminal 300 (S106). At this time, the abnormal object detection apparatus 200 may transmit information on detecting the abnormal object. The abnormal object detection apparatus 200 transmits the abnormal object detection result to the learning server 400 (S108). In this case, the abnormal object detecting apparatus 200 may transmit a corresponding image together.
  • the user terminal 300 provides the user with information on detecting the first error event occurrence image and the abnormal object, and receives error review information corresponding to the first error event occurrence image from the user (S110).
  • the user terminal 300 transmits the error review information to the learning server 400 together with the information of detecting the first error event occurrence image and the abnormal object (S112).
• the learning server 400 extracts the training data using the image received from the abnormal object detection apparatus 200 and the abnormal object detection information corresponding to the received image, as well as the first error event occurrence image, the abnormal object detection information corresponding to the first error event occurrence image, and the error review information (S114).
  • the learning server 400 may extract training data from an image received from the abnormal object detecting apparatus 200 and abnormal object detection information corresponding to the received image.
  • the learning server 400 may extract the training data from the first error event occurrence image, the abnormal object detection information and the error review information corresponding to the first error event occurrence image.
  • the learning server 400 trains the second abnormality object detection model using the training data (S116).
  • the learning server 400 transmits the learning parameter generated according to the learning result of the second abnormal object detection model, that is, update information, to the abnormal object detecting apparatus 200 (S118).
  • the abnormal object detection apparatus 200 applies the learning parameter to the first abnormal object detection model and updates it (S120).
• the apparatus 200 for detecting an abnormality may transmit an image stored in the second database to the user terminal 300 instead of performing operations S102 to S106. Then, the user terminal 300 may provide the user with the image received from the second database. Instead of performing step S110, the user terminal 300 may obtain a second error event occurrence image, generated by the user selecting a region where an abnormal object was not detected in the provided image, and may receive error review information corresponding to it. The second error event occurrence image and the corresponding error review information may be transmitted to the learning server 400 and used for algorithm learning.
  • FIG. 10 is a flow chart according to a second embodiment of the method for detecting an abnormal object according to an embodiment of the present invention.
  • FIG. 10 illustrates an abnormal object detection method using the abnormal object detection system including the expert terminal 500 illustrated in FIGS. 2 and 3.
  • Steps S200 to S210 of FIG. 10 are the same as steps S100 to S110 of FIG. 9, and thus detailed descriptions thereof will be omitted.
• When the user terminal 300 receives error review information corresponding to the first error event occurrence image from the user in step S210, it transmits the first error event occurrence image and the corresponding error review information to the expert terminal 500 (S212).
  • the expert terminal 500 provides the expert with the first error event occurrence image and the error review information corresponding thereto, and receives expert review information corresponding to the error review information from the expert (S214).
  • the expert terminal 500 transmits the first error event occurrence image and the error review information to the learning server 400 according to the expert review information (S216). For example, when the expert review information is input that the error review information is wrong, the expert terminal 500 may not transmit the first error event occurrence image and the error review information to the learning server 400.
  • Steps S218 to S224 are the same as steps S114 to S120 of FIG. 9, and a part for the second error event occurrence image is also the same, and thus a detailed description thereof will be omitted.
  • FIG. 11 is a block diagram of a first abnormal object detection model according to an embodiment of the present invention.
  • the first abnormal object detection model includes a first feature extraction unit 600, a second feature extraction unit 602, and an abnormal object information generating unit 604.
• the first feature extracting unit 600 extracts the position distribution of the plurality of objects in the image data photographed by the photographing unit 110, and the second feature extracting unit 602 extracts the movement of the plurality of objects in the image data.
• the abnormal object information generating unit 604 then generates the abnormal object information, for example, estimates the probability of an abnormality for each pixel, based on the position distribution extracted by the first feature extraction unit 600 and the movement extracted by the second feature extraction unit 602.
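The combination of the two features can be illustrated with a minimal numpy sketch. The maps, their values, and the combination rule (an object is suspicious where it is likely present but shows no movement) are illustrative assumptions; the specification does not prescribe this particular formula.

```python
import numpy as np

# Hypothetical per-pixel maps (values are illustrative):
# density_map[i, j]: probability that an object occupies pixel (i, j)
# motion_map[i, j]:  1.0 where movement was detected, 0.0 elsewhere
density_map = np.array([[0.9, 0.8, 0.0],
                        [0.7, 0.9, 0.1],
                        [0.0, 0.2, 0.0]])
motion_map = np.array([[1.0, 1.0, 0.0],
                       [0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0]])

# One plausible combination rule: high abnormality score where an
# object is likely present (high density) but shows no movement.
abnormal_prob = density_map * (1.0 - motion_map)

print(abnormal_prob[1, 1])  # 0.9: present and motionless -> suspicious
print(abnormal_prob[0, 0])  # 0.0: present but moving -> not suspicious
```

The resulting per-pixel array is exactly the kind of abnormal object information described above: a probability of abnormality for each pixel.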
  • the first feature extracting unit 600 may generate position distribution data indicating the position distribution of the plurality of objects using the image data.
• the location distribution of the plurality of objects may mean a density distribution of objects for each location, and the term location distribution data may be used interchangeably with density map.
  • the first feature extraction unit 600 may use a real-time object detection method using a first algorithm trained to suggest an area in which the object is located in the image data, that is, an area suggestion algorithm.
• the first feature extraction unit 600 may, for example, generate first position distribution data representing the position distribution of the plurality of objects using first image data generated to include the plurality of objects, and may generate second position distribution data representing the position distribution of the plurality of objects using second image data generated to include the plurality of objects.
  • the first position distribution data and the second position distribution data may be position distribution data of image data generated in time series.
• the position distribution data does not indicate the position of each individual object; rather, for each divided region or block of the image data, it indicates the probability that an individual, its body, or its head exists in the corresponding region or block.
  • the position distribution data may be a heat map expressing a probability that an object exists in each pixel in a different color.
  • the first feature extraction unit 600 may detect an animal object from the image data using the object detection classifier.
• the object detection classifier is trained by constructing a training DB from images of animal objects photographed while varying the posture of the animal object or the external environment, and builds a database of animal objects through various learning algorithms, including the Support Vector Machine (SVM), neural networks, and the AdaBoost algorithm.
• the first feature extracting unit 600 may detect the edges of the object corresponding to the foreground against the previously photographed background image data of the kennel, apply the detected foreground edges to the image data, and detect animal objects by applying the object detection classifier to the area of the image data to which the edges are applied.
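As one illustration of the edge-detection step preceding the classifier, the following numpy-only sketch computes an edge-magnitude map with Sobel gradients. The Sobel kernels and the synthetic frame are assumptions for demonstration; the specification does not name a particular edge operator, and the classifier step is omitted.

```python
import numpy as np

def sobel_edges(gray):
    """Edge magnitude via Sobel gradients, implemented with numpy only."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    out = np.zeros((h, w))
    padded = np.pad(gray.astype(float), 1, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)  # horizontal gradient
            gy = np.sum(patch * ky)  # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

# A synthetic frame: dark background with a bright square "object".
frame = np.zeros((8, 8))
frame[2:6, 2:6] = 1.0
edges = sobel_edges(frame)

# Edge strength is high on the object boundary, zero in flat regions.
print(edges[2, 2] > 0)   # True (boundary pixel)
print(edges[4, 4] == 0)  # True (interior of the square)
```

A classifier such as an SVM would then be applied only to the regions where the edge response indicates a foreground object.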
  • the first feature extraction unit 600 may be trained to detect an object in the captured image.
  • the first feature extraction unit 600 may include a computer readable program.
  • the program may be stored in a recording medium or a storage device that can be executed by a computer.
  • a processor in a computer may read a program stored in a recording medium or a storage device, execute a program, that is, a trained model, calculate input information, and output a calculation result.
  • the input of the first feature extraction unit 600 may be one or a plurality of image data obtained by photographing the inside of the kennel, and the output of the first feature extraction unit 600 may be position distribution data where an object is detected.
• the first feature extraction unit 600 may include a first neural network that takes the image of the interior of the kennel as an input layer and is trained to learn the correlation between the kennel interior image and the individuals, and to output the position distribution data in which objects are detected.
  • the first neural network is an example of a deep learning algorithm designed to display an area where an object is located on image data.
  • the first neural network may be an algorithm for inputting image data into a convolution network-based learning machine and then outputting position distribution data in which regions where an object is located are distinguished.
  • the image data photographed inside the kennel becomes the input layer of the first neural network, and the first neural network may learn the correlation between the kennel internal image data and the individual.
  • the output layer of the first neural network may be position distribution data displayed so that an area where an object is located is distinguished from image data photographed inside the kennel.
  • the second feature extraction unit 602 may generate motion data indicating the movement of the motion object among the plurality of objects using the image data.
• the motion data does not indicate the movement of individual objects, but indicates, for each divided region or block of the image data, whether motion exists in the corresponding region or block; the term motion data may be used interchangeably with motion map.
  • the motion data may be data indicating whether a motion exists in a pixel corresponding to each pixel.
  • the second feature extraction unit 602 may use a second algorithm trained to detect motion using optical flow.
  • the second feature extraction unit 602 may detect movement at a specific point, a specific object, or a specific pixel on the distribution map using the single image data or the plurality of consecutive image data.
• the second feature extraction unit 602 may generate first motion data representing the movement of moving objects among the plurality of objects using the first image data, and may generate second motion data representing the movement of moving objects among the plurality of objects using the second image data.
  • the first motion data and the second motion data may be motion data for a plurality of image data generated in time series.
  • the second feature extraction unit 602 may detect the movement of the moving object using the Dense Optical Flow method.
  • the second feature extraction unit 602 may calculate a motion vector for all the pixels on the image data to detect a motion for each pixel.
• In the Dense Optical Flow method, since the motion vector is calculated for all pixels, the detection accuracy is improved, but the amount of computation is relatively large. Therefore, the Dense Optical Flow method can be applied to a specific area that requires very high detection accuracy, such as a kennel where an abnormal situation is suspected or a kennel with a large number of individuals.
  • the second feature extraction unit 602 may detect the movement of the moving object by using a sparse optical flow method.
  • the second feature extraction unit 602 may detect a motion by calculating a motion vector only for some of the characteristic pixels that are easy to track, such as edges in the image.
• The Sparse Optical Flow method reduces the amount of computation, but produces results only for a limited number of pixels. Therefore, the Sparse Optical Flow method can be applied to a kennel with a small number of individuals or to a specific area where the objects do not overlap.
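A minimal single-point sketch of the sparse approach, using the classic Lucas-Kanade least-squares formulation over a small window. The synthetic Gaussian blob, the window size, and the function name are illustrative assumptions; a production system would track many feature points (e.g. corners) this way.

```python
import numpy as np

def lucas_kanade_point(prev, curr, y, x, win=3):
    """Estimate the motion vector (vy, vx) at one feature point by
    solving the Lucas-Kanade least-squares system over a small window."""
    # Spatial gradients of the previous frame and the temporal difference.
    Iy, Ix = np.gradient(prev.astype(float))
    It = curr.astype(float) - prev.astype(float)
    ys = slice(y - win, y + win + 1)
    xs = slice(x - win, x + win + 1)
    # Brightness constancy: Ix*u + Iy*v = -It at every window pixel.
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v[1], v[0]  # (vy, vx)

# Synthetic pair of frames: a smooth blob shifted one pixel to the right.
yy, xx = np.mgrid[0:32, 0:32]
prev = np.exp(-((yy - 16) ** 2 + (xx - 15) ** 2) / 20.0)
curr = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 20.0)

vy, vx = lucas_kanade_point(prev, curr, 16, 15)
print(vx > 0)  # True: rightward motion recovered at the feature point
```

Because the system is solved only at selected feature pixels, the cost stays low, which matches the trade-off described in the preceding paragraph.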
  • the second feature extraction unit 602 may detect movement of the moving object using block matching.
  • the second feature extraction unit 602 may divide the image evenly or unequally, calculate a motion vector for the divided region, and detect motion.
  • the Block Matching method reduces the amount of computation because it calculates one motion vector per divided region, but it can have relatively low detection accuracy because the result is produced per region rather than per pixel. Accordingly, the Block Matching method may be applied to a kennel with a small number of individuals or to a specific area where the objects do not overlap.
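The block-based approach above can be sketched as follows: the frame is divided into equal blocks and, for each block, the displacement minimizing the sum of absolute differences (SAD) against the previous frame is taken as its motion vector (the function name, the block and search sizes, and the synthetic frames are illustrative assumptions):

```python
import numpy as np

def block_matching(prev, curr, block=8, search=2):
    """Divide the current frame into blocks and find, for each block, the
    displacement into the previous frame with the minimum SAD."""
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = curr[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if y0 < 0 or x0 < 0 or y0 + block > h or x0 + block > w:
                        continue
                    sad = np.abs(ref - prev[y0:y0 + block, x0:x0 + block]).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dx, dy)
            # the block's content moved by -(dx, dy) from prev to curr
            vectors[(bx, by)] = (-best[0], -best[1])
    return vectors

# A bright square moves 2 pixels to the right between the two frames.
prev = np.zeros((20, 20)); prev[10:14, 8:12] = 1.0
curr = np.zeros((20, 20)); curr[10:14, 10:14] = 1.0
mv = block_matching(prev, curr)
```

One vector per block keeps the computation low, at the cost of the per-region (rather than per-pixel) accuracy noted above.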
  • the second feature extraction unit 602 may detect the movement of the moving object by using a continuous frame difference method.
  • the second feature extraction unit 602 may compare successive image frames pixel by pixel and calculate a value corresponding to the difference to detect motion. Since the Continuous Frame Difference method detects motion using only the difference between frames, the overall amount of computation is small, but the detection accuracy for a large object or for overlapping objects may be relatively low. In addition, the Continuous Frame Difference method may fail to distinguish the background image from a moving object and may thus have relatively low accuracy. Therefore, the Continuous Frame Difference method may be applied to a kennel with a small number of objects or to a specific area where the objects do not overlap.
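A minimal sketch of the Continuous Frame Difference idea, assuming a simple per-pixel threshold on the intensity difference (the threshold value and synthetic frames are illustrative):

```python
import numpy as np

def frame_difference(prev, curr, threshold=0.1):
    """Mark as 'moving' (True) the pixels whose intensity changed
    between two consecutive frames by more than the threshold."""
    return np.abs(curr.astype(float) - prev.astype(float)) > threshold

prev = np.zeros((8, 8)); prev[2:4, 2:4] = 1.0   # object at its old position
curr = np.zeros((8, 8)); curr[2:4, 4:6] = 1.0   # object moved to the right
mask = frame_difference(prev, curr)
```

Note that both the old and the new positions of the object light up in the mask, which is one reason this method cannot separate the background cleanly.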
  • the second feature extraction unit 602 may detect the movement of the moving object by using the background subtraction method.
  • the second feature extraction unit 602 may compare successive image frames for each pixel in a state where the background image is initially learned, and calculate a value corresponding to the difference to detect motion.
  • the Background Subtraction method learns the background image in advance so that the background image can be distinguished from moving objects. A separate process of filtering out the background image is therefore required, which increases the amount of computation but improves accuracy. Accordingly, the Background Subtraction method can be applied to a specific area where very high detection accuracy is required, such as a kennel where an abnormal situation is suspected or a kennel with a large number of individuals.
  • the background image can be updated continuously.
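A minimal sketch of the Background Subtraction idea with a continuously updated background, assuming a simple running-average model (the class name, update rate, and threshold are illustrative assumptions; practical systems typically use richer models such as mixture-of-Gaussians):

```python
import numpy as np

class BackgroundSubtractor:
    """Running-average background model: the background image is learned
    first, then continuously updated with each new frame."""
    def __init__(self, background, rate=0.05, threshold=0.2):
        self.bg = background.astype(float)
        self.rate = rate              # how fast the background adapts
        self.threshold = threshold

    def apply(self, frame):
        frame = frame.astype(float)
        mask = np.abs(frame - self.bg) > self.threshold   # foreground pixels
        # update the background only where no motion was detected, so slow
        # scene changes are absorbed into the model over time
        self.bg = np.where(mask, self.bg,
                           (1 - self.rate) * self.bg + self.rate * frame)
        return mask

bg = np.zeros((8, 8))                 # initially learned background
sub = BackgroundSubtractor(bg)
frame = np.zeros((8, 8)); frame[3:5, 3:5] = 1.0   # a moving object appears
mask = sub.apply(frame)
```

Unlike plain frame differencing, only the object (not the background) appears in the mask, at the cost of maintaining and updating the background model.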
  • the second feature extraction unit 602 may be trained to detect motion in the captured image.
  • the second feature extraction unit 602 may comprise a computer readable program.
  • the program may be stored in a recording medium or a storage device that can be executed by a computer.
  • a processor in a computer may read a program stored in a recording medium or a storage device, execute a program, that is, a trained model, calculate input information, and output a calculation result.
  • the input of the second feature extraction unit 602 may be one or a plurality of image data photographed inside the kennel, and the output of the second feature extraction unit 602 may be operation data detected from the image data.
  • the second feature extracting unit 602 may include a second neural network that uses the image of the inside of the kennel as an input layer, is trained to learn the correlation between the image of the inside of the kennel and the movement in the image, and outputs motion data corresponding to the detected movement as an output layer.
  • the second neural network is an example of a deep learning algorithm designed to indicate an area in which motion exists in the image data.
  • the second neural network may be an algorithm for inputting image data to a convolution network-based learning machine and then outputting data displayed to distinguish the region where the motion is detected.
  • the image data photographed inside the kennel becomes the input layer of the second neural network, and the second neural network may learn the correlation between the kennel internal image data and the movement.
  • the output layer of the second neural network may be motion data displayed to distinguish the area where the motion is detected from the image data photographed inside the kennel.
  • the second feature extraction unit 602 may detect motion on the distribution map using a method appropriate to the environment inside the kennel and to external settings.
  • the above-described motion detection method is merely an example, and methods capable of displaying a region (eg, a pixel / block) in which a motion occurs in a frame may be used.
  • the process of generating position distribution data by the first feature extracting unit 600 and the process of generating motion data by the second feature extracting unit 602 may be performed simultaneously, in parallel, or sequentially. That is, the process of generating position distribution data by the first feature extracting unit 600 and the process of generating motion data by the second feature extracting unit 602 may be processed independently of each other.
  • the abnormal object information generating unit 604 may generate the abnormal object data indicating the abnormal object by region, block, or pixel by comparing position distribution data and motion data of the image data by region, block, or pixel.
  • the abnormal object information generating unit 604 may generate the first abnormal object data indicating the abnormal object by comparing the first position distribution data and the first motion data, for example.
  • the abnormal object information generating unit 604 may compare the first position distribution data and the first motion data to generate first abnormal object data indicating information about objects for which no motion is detected on the first position distribution data. That is, the abnormal object information generating unit 604 may estimate that an object whose position is indicated on the first position distribution data but whose motion is not detected is a diseased object, and generate the first abnormal object data accordingly.
  • the first abnormal object data may mean data obtained by determining whether an object is diseased using the position distribution and motion detection information of the object for a single piece of image data.
  • the abnormal object information generating unit 604 may compare the position distribution data and the motion data of the plurality of image data to calculate, for each of the plurality of objects, the cumulative number of times motion is detected and the cumulative number of times motion is not detected, and may generate the abnormal object data according to these cumulative counts.
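The cumulative counting described above can be sketched per pixel as follows, assuming boolean presence and motion masks per frame (all names and the flagging rule `undetected > detected` are illustrative assumptions):

```python
import numpy as np

def update_counts(presence, motion, detected, undetected):
    """Accumulate, for every pixel where an object is present, how often
    motion was detected versus not detected across the image sequence."""
    detected += presence & motion
    undetected += presence & ~motion
    return detected, undetected

h, w = 4, 4
detected = np.zeros((h, w), dtype=int)
undetected = np.zeros((h, w), dtype=int)

# Object at pixel (1, 1) is present in every frame but never moves;
# object at pixel (2, 2) is present and moving in every frame.
presence = np.zeros((h, w), dtype=bool); presence[1, 1] = presence[2, 2] = True
moving   = np.zeros((h, w), dtype=bool); moving[2, 2] = True

for _ in range(10):                    # ten frames generated in time series
    detected, undetected = update_counts(presence, moving, detected, undetected)

# flag pixels whose non-detection count dominates as abnormal candidates
abnormal = undetected > detected
```

The motionless object accumulates non-detections and is flagged, while the moving one is not, mirroring the determination described above.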
  • the abnormal object information generating unit 604 may generate the second abnormal object data by comparing the first abnormal object data, the second position distribution data, and the second motion data, for example.
  • the abnormal object information generating unit 604 may compare the first abnormal object data with the second position distribution data and the second motion data to calculate the cumulative number of motion detections and the cumulative number of motion non-detections for the plurality of objects, and may generate the second abnormal object data according to these cumulative counts. That is, the second abnormal object data may mean data obtained by determining whether an object is diseased using the position information and motion detection information of the object accumulated over the plurality of image data.
  • the abnormal object information generating unit 604 may be trained to generate abnormal object candidate data using the first position distribution data and the first motion data.
  • the abnormality object information generating unit 604 may include a computer readable program.
  • the program may be stored in a recording medium or a storage device that can be executed by a computer.
  • a processor in a computer may read a program stored in a recording medium or a storage device, execute a program, that is, a trained model, calculate input information, and output a calculation result.
  • the input of the abnormal object information generating unit 604 may be first position distribution data and the first operation data, and the output of the abnormal object information generating unit 604 may be abnormal object candidate data.
  • the abnormal object information generating unit 604 may include a third neural network that uses the first position distribution data and the first motion data as an input layer, is trained to learn the correlation with the set of blocks in which an object has a probability of existing in the first position distribution data while no motion is detected on the first motion data, and generates the abnormal object candidate data as an output layer.
  • the third neural network is an example of a deep learning algorithm designed to determine, as an abnormal entity candidate, a block in which motion is not detected among blocks having an existence probability of an object on image data, and display information on it.
  • the third neural network may be an algorithm for inputting the first position distribution data and the first motion data to the convolutional network-based learning machine and then outputting data displayed to distinguish the region where the abnormal entity candidate is located.
  • the first position distribution data and the first motion data become the input layer of the third neural network, and the third neural network can learn the correlation with the set of blocks in which an object has a probability of existing in the first position distribution data but no motion is detected on the first motion data.
  • the output layer of the third neural network may be abnormal entity candidate data.
  • abnormal object information generating unit 604 may be trained to generate abnormal object data using the second position distribution data, the second operation data, and the abnormal object candidate data.
  • the input of the abnormal object information generating unit 604 may be second position distribution data, the second operation data, and the abnormal object candidate data, and the output of the abnormal object information generating unit 604 may be abnormal object data.
  • the abnormal object information generating unit 604 may include a fourth neural network that uses the second position distribution data, the second motion data, and the abnormal object candidate data as an input layer, is trained to learn the correlation with the set of blocks, among the blocks determined as abnormal object candidates, in which an object has a probability of existing in the second position distribution data while no motion is detected on the second motion data, and generates the abnormal object data as an output layer.
  • the fourth neural network is an example of a deep learning algorithm designed to determine an abnormal object among blocks having an object existence probability on image data and display information about the abnormal object.
  • the fourth neural network may be an algorithm that inputs the second position distribution data, the second motion data, and the abnormal object candidate data to a convolutional-network-based learning machine and then outputs data in which the region where the abnormal object is located is distinguished.
  • the second position distribution data, the second motion data, and the abnormal entity candidate data become an input layer of the fourth neural network, and the fourth neural network exists in the second position distribution data among the blocks determined as the abnormal entity candidates. It is possible to learn the correlation between a set of blocks in which there is a probability and no motion is detected on the second motion data.
  • the output layer of the fourth neural network may be abnormal entity data.
  • the abnormal object information generating unit 604 may control the pixel display of the plurality of objects on the image data according to the cumulative number of motion detections and the cumulative number of motion non-detections in the abnormal object data.
  • the abnormal object information generating unit 604 may control pixel display of the plurality of objects on the image data according to, for example, second abnormal object data.
  • the display of the pixel may include all concepts for distinguishing and displaying a pixel corresponding to an arbitrary point from other pixels, such as the saturation of the pixel, the intensity of the pixel, the color of the pixel, the outline of the pixel, and the mark display.
  • the display of the pixel may be controlled by adjusting the pixel value.
  • the pixel value may be adjusted in stages, and a pixel with a high pixel value may be displayed with greater visual emphasis than a pixel with a low pixel value. However, the present invention is not limited thereto, and a pixel with a low pixel value may instead be set to be displayed with greater emphasis than a pixel with a high pixel value.
  • the pixel value may mean an abnormality probability for each pixel.
  • for convenience of description, one pixel is shown here as representing one object in order to distinguish the objects whose motion is detected; in practice, a plurality of pixels represent one object. That is, in order to determine an abnormal situation by detecting movement in only some body regions of poultry, a method of controlling the display of pixels by detecting movement for each pixel may be used.
  • the abnormal object information generating unit 604 may classify an object as an abnormal object when no motion of the object is detected, and classify it as a normal object when motion is detected.
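The white-to-red display described above can be sketched as a simple mapping from a per-pixel abnormality probability to an RGB color (the linear fade is an illustrative choice, not specified by the embodiment):

```python
def anomaly_color(p):
    """Map a per-pixel abnormality probability p in [0, 1] to an RGB color:
    0 -> primary color (white), 1 -> deepest red (green/blue fade out)."""
    p = min(max(p, 0.0), 1.0)         # keep the probability in range
    fade = round(255 * (1 - p))
    return (255, fade, fade)
```

For example, a pixel with no abnormality probability maps to white, and a pixel judged very likely abnormal maps to the deepest red.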
  • each of the first feature extraction unit 600, the second feature extraction unit 602, and the abnormal object information generating unit 604 may receive, from the learning server 400, the first update information, the second update information, and the third update information, which are learning parameters, and the first update information, the second update information, and the third update information may be applied to the first feature extraction unit 600, the second feature extraction unit 602, and the abnormal object information generating unit 604, respectively.
  • the first update information, the second update information, and the third update information may be update information extracted by the learning server 400 as a result of learning the training data transmitted by the abnormality object detecting apparatus 200.
  • the training data may include a part of the image data acquired by the abnormal object detecting apparatus 200 and a part of the error-corrected abnormal object data, and may be obtained using feedback information from the user terminal 300 on the abnormal object data that the abnormal object detecting apparatus 200 obtained by running the abnormal object detection algorithm.
  • each of the first update information, the second update information, and the third update information may include an adjustable matrix.
  • n-th image data is obtained.
  • the image data may be, for example, RGB data having a size of W × H (S1101).
  • the n-th image data may be referred to interchangeably as the n-th original data, the n-th original image, the n-th original image data, and the like.
  • the detector 220 of the abnormal object detecting apparatus 200 detects an object from the n-th image data and generates position distribution data of the object for the n-th image data (S1102).
  • the position distribution data may be generated for each region, for each block, or for each pixel, and the detector 220 may use the first algorithm trained to suggest a region in which the object is located in the image data.
  • the first algorithm may use a real-time object detection method using an algorithm indicating an area in which an object exists as described above, or may use a method of representing distribution information of the objects, that is, density information of the areas in which the objects are located.
  • the position distribution data may be a first density map.
  • the update parameter α is applied to the position distribution data.
  • the update parameter α and the offset parameter β may be applied at the same time.
  • α is a very small value, for example 0.001; that is, the position distribution data is controlled so that it is gradually displayed on the pixels only after accumulating for a long time.
  • the offset parameter β is a parameter for adjusting the accumulation of the position distribution data and may have a value of 1/10 to 1/100 of α (S1103).
  • the detector 220 of the abnormal object detecting apparatus 200 detects the movement of the object with respect to the n-th image data by comparing the n−1th image data and the n-th image data.
  • the n ⁇ 1 th image data may be stored in the latch circuit or the buffer circuit.
  • the detector 220 of the abnormal object detection apparatus 200 for example, the second feature extractor 602, generates motion data according to the detected movement (S1104).
  • the motion data may be a motion map.
  • the motion data may be generated for each region, for each block, or for each pixel, and the detector 220 may use a second algorithm trained to detect motion by using an optical flow.
  • the update parameter γ may be applied to the motion data. γ is a parameter for adjusting the accumulation of the motion data (S1105).
  • the detecting unit 220 of the abnormal object detecting apparatus 200, for example the abnormal object information generating unit 604, adds the position distribution data of the n-th image data to the n−1th abnormal object data (S1106) and subtracts the motion data of the n-th image data to generate the abnormal object data for the n-th image data (S1107).
  • the abnormal object data for the nth image data may be an nth abnormal object density map.
  • the abnormal object detecting apparatus 200 repeats steps S1101 to S1107, and may control the display so that an object whose motion is detected is displayed lightly or close to its original color, while an object whose motion is not detected accumulates and is displayed darker or closer to red.
  • the n-th abnormal object data may be matched onto the n-th image data, that is, the n-th original data, and the image in which the n-th abnormal object data is matched onto the n-th original data may be displayed on the user terminal 300.
  • the region where the abnormal object is located in the n-th image may be masked using the n-th abnormal object density map, and the masked image may be displayed on the user terminal 300.
  • the operation of the abnormality object detecting apparatus 200 may apply the following Equation 1.
  • Pixel_t = Pixel_{t−1} · (1 − α) + α · W_t − F_t
  • in Equation 1, the update parameter α may be changed according to a setting.
  • Pixel_t and Pixel_{t−1} are abnormal object data and may indicate the intensity of a pixel as a value displaying the presence or absence of an abnormal object at that pixel. A pixel with a higher probability of an abnormal object is displayed in a darker color: if there is no probability of abnormality, the pixel is displayed in its primary color (white); the higher the probability of abnormality, the closer the color is to red; and if the probability of abnormality is determined to be very high, the pixel is displayed in the deepest red. Accordingly, Pixel_t and Pixel_{t−1} may be set to have a value between 0 and 1, where a value closer to 0 is closer to the primary color (white) and a value closer to 1 is closer to red.
  • Pixel_{t−1} is the abnormal object data of the previous frame, in which the position distribution data and the motion data have been accumulated.
  • Pixel_t is the abnormal object data updated by applying the position distribution data and the motion data of the current frame.
  • W t may be position distribution data of a current frame.
  • the position distribution data may have a value between 0 and 1 as a probability that an object exists in a corresponding pixel.
  • the update parameter ⁇ may be applied to the position distribution data.
  • α is a very small value, for example 0.001; that is, the position distribution data is controlled so that it is gradually displayed on the pixels only after accumulating for a long time.
  • F t may be motion data of a current frame.
  • the motion data is the absolute value of the motion vector and may have a value of 0 or more; since the magnitude of the motion vector corresponds to the velocity of the object, it may also have a value of 1 or more. Because no parameter is applied to the motion data in Equation 1, the display of a pixel is initialized whenever motion is detected at that pixel.
  • the operation of the abnormality object determining apparatus may apply the following Equation 2.
  • Pixel_t = Pixel_{t−1} · (1 − α + β) + α · W_t − F_t
  • in Equation 2, an offset parameter β is added to Equation 1; description identical to that of Equation 1 is omitted.
  • the offset parameter β is a parameter for adjusting the accumulation of the position distribution data and may have a value of 1/10 to 1/100 of α.
  • the operation of the apparatus for determining an abnormal object according to the embodiment may apply Equation 3 or Equation 4 below.
  • Pixel_t = Pixel_{t−1} · (1 − α) + α · W_t − γ · F_t
  • Pixel_t = Pixel_{t−1} · (1 − α + β) + α · W_t − γ · F_t
  • Equations 3 and 4 are obtained by multiplying the motion data F_t by the update parameter γ; description identical to that of Equations 1 and 2 is omitted.
  • the update parameter γ is a parameter that adjusts the accumulation of the motion data.
  • the operation of the abnormal object determining apparatus may apply Equation 5 below.
  • Pixel_t = max(0, Pixel_{t−1} · (1 − α + β) + α · W_t − γ · F_t)
  • Equation 5 prevents the value of Equation 1, 2, 3, or 4 from falling below zero. This is a control method in which, when the magnitude of the motion data F_t is greater than the sum of the other terms and the value of Equation 1, 2, 3, or 4 would become a negative number less than 0, the value is corrected so that it is displayed as 0.
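A sketch of the update rule described by Equations 1–5, assuming the form of Equation 4 with the Equation 5 lower clamp and additionally clipping to the documented 0–1 display range (the parameter values and the synthetic W and F maps are illustrative):

```python
import numpy as np

def update_abnormal_map(pixel_prev, W, F, alpha=0.001, beta=0.0, gamma=1.0):
    """One update step of the abnormal-object map: slowly accumulate the
    position-distribution data W (rate alpha, offset beta), subtract the
    motion data F scaled by gamma, and clamp so the value stays in [0, 1]."""
    pixel = pixel_prev * (1 - alpha + beta) + alpha * W - gamma * F
    return np.clip(pixel, 0.0, 1.0)   # 0 = white (normal) .. 1 = deepest red

# A 2x2 example: an object is present in every pixel (W = 1);
# motion is detected only at pixel (0, 0).
pixel = np.zeros((2, 2))
W = np.ones((2, 2))                   # per-pixel object-existence probability
F = np.zeros((2, 2)); F[0, 0] = 1.0   # motion-vector magnitude

for _ in range(1000):                 # accumulate over many frames
    pixel = update_abnormal_map(pixel, W, F)
```

After many frames, the pixel where motion keeps being detected stays at 0 (white), while the motionless-but-occupied pixels slowly darken toward red, matching the display behavior described above.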
  • the abnormality object detecting apparatus 200 may extract the training data and transmit the training data to the learning server 400.
  • the training data may include a part of the image data acquired by the abnormal object detecting apparatus 200, and may be obtained using feedback information on the abnormal object data extracted by running the abnormal object detection algorithm.
  • the feedback information may include information modified by the user terminal 300.
  • the abnormality object detecting apparatus 200 may extract training data and transmit the training data to the learning server 400.
  • the object density prediction network may be retrained, that is, updated, using the training data received from the abnormal object detection apparatus 200.
  • the training data may include image data indicated to be in error and abnormal object data in which the error is corrected.
  • the training data may include, for example, the image data of the n-th frame and the abnormal object data of the n-th frame for the partial region 806.
  • the abnormality object data having the error corrected may be information in which the density of the object is corrected by the user terminal 300.
  • a density distribution of an object for each region, block, or pixel modified by the user terminal 300 may be referred to as a second density map.
  • the learning server 400 may obtain a loss by comparing the output image of the object density prediction network in the learning server 400 with the error-corrected image included in the training data, that is, the ground-truth image, and may learn (correct) the variables (e.g., feature maps) of the object density prediction network so as to minimize the loss.
  • when generating the position distribution data, that is, in step S1102, the detection unit 220 of the abnormal object detecting apparatus 200 may use the object density network retrained by the learning server 400. That is, the detection unit 220 of the abnormal object detecting apparatus 200 may output the abnormal object density map of a subsequent image using the update information obtained by the learning server 400 by retraining with the training data, for example the n-th image and the second density map.


Abstract

An embodiment of the present invention relates to an abnormal entity detection system comprising: an abnormal entity detection apparatus for detecting an abnormal entity by applying an image including information about an entity to a first deep-learning-based abnormal entity detection model and, if it is determined as a result of the abnormal entity detection that an error event has occurred, transmitting to a user terminal a first error event occurrence image in which the error event occurred; and a learning server for receiving error review information corresponding to the first error event occurrence image, and training a second deep-learning-based abnormal entity detection model according to the first error event occurrence image and the error review information, wherein the abnormal entity detection apparatus updates the first abnormal entity detection model according to the training result of the second abnormal entity detection model.
PCT/KR2019/008473 2018-07-19 2019-07-10 Système et procédé de détection d'entité anormale WO2020017814A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0084002 2018-07-19
KR1020180084002A KR20200009530A (ko) 2018-07-19 2018-07-19 이상 개체 검출 시스템 및 방법

Publications (1)

Publication Number Publication Date
WO2020017814A1 true WO2020017814A1 (fr) 2020-01-23

Family

ID=69164550

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/008473 WO2020017814A1 (fr) 2018-07-19 2019-07-10 Système et procédé de détection d'entité anormale

Country Status (2)

Country Link
KR (1) KR20200009530A (fr)
WO (1) WO2020017814A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194286A (zh) * 2021-04-26 2021-07-30 读书郎教育科技有限公司 一种智能台灯辅助管控做作业的系统及方法
CN115187929A (zh) * 2022-08-24 2022-10-14 长扬科技(北京)股份有限公司 一种两级异动策略的ai视觉检测方法及装置

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102321205B1 (ko) * 2021-06-10 2021-11-03 주식회사 스누아이랩 인공지능 서비스장치 및 그 장치의 구동방법

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140119228A (ko) * 2013-03-27 2014-10-10 한국로봇융합연구원 스마트 단말기 연계형 실시간 검진 방법 및 이를 위한 통합관리시스템
KR20170061016A (ko) * 2015-11-25 2017-06-02 삼성전자주식회사 데이터 인식 모델 구축 장치 및 방법과 데이터 인식 장치
KR101789690B1 (ko) * 2017-07-11 2017-10-25 (주)블루비스 딥 러닝 기반 보안 서비스 제공 시스템 및 방법
KR101830056B1 (ko) * 2017-07-05 2018-02-19 (주)이지팜 딥러닝 기반의 병해 진단 시스템 및 그 이용방법
KR20180040287A (ko) * 2016-10-12 2018-04-20 (주)헬스허브 기계학습을 통한 의료영상 판독 및 진단 통합 시스템


Also Published As

Publication number Publication date
KR20200009530A (ko) 2020-01-30

Similar Documents

Publication Publication Date Title
WO2020017814A1 (fr) Système et procédé de détection d'entité anormale
WO2019132518A1 (fr) Dispositif d'acquisition d'image et son procédé de commande
WO2020075888A1 (fr) Programme informatique et terminal permettant de fournir des informations concernant des animaux individuels en fonction d'images de visage et de nez d'animal
WO2019235776A1 (fr) Dispositif et procédé de détermination d'objet anormal
WO2019083299A1 (fr) Dispositif et procédé de gestion d'un lieu d'élevage
WO2019151735A1 (fr) Procédé de gestion d'inspection visuelle et système d'inspection visuelle
WO2019083227A1 (fr) Procédé de traitement d'image médicale, et appareil de traitement d'image médicale mettant en œuvre le procédé
WO2022114731A1 (fr) Système de détection de comportement anormal basé sur un apprentissage profond et procédé de détection pour détecter et reconnaître un comportement anormal
WO2019212237A1 (fr) Dispositif et procédé de détection d'entité anormale
WO2022139111A1 (fr) Procédé et système de reconnaissance d'objet marin sur la base de données hyperspectrales
WO2019168323A1 (fr) Appareil et procédé de détection d'objet anormal, et dispositif de photographie le comprenant
WO2020141888A1 (fr) Dispositif de gestion de l'environnement de ferme d'élevage
WO2021091161A1 (fr) Dispositif électronique et son procédé de commande
WO2021006482A1 (fr) Appareil et procédé de génération d'image
WO2013165048A1 (fr) Système de recherche d'image et serveur d'analyse d'image
KR101944374B1 (ko) 이상 개체 검출 장치 및 방법, 이를 포함하는 촬상 장치
WO2020045702A1 (fr) Programme d'ordinateur et terminal pour fournir une analyse d'urine à l'aide d'une table de colorimétrie
WO2020005038A1 (fr) Système et terminal pour gérer l'environnement d'un lieu d'élevage et procédé associé
WO2020149493A1 (fr) Dispositif électronique et son procédé de commande
WO2022225102A1 (fr) Ajustement d'une valeur d'obturateur d'une caméra de surveillance par le biais d'une reconnaissance d'objets basée sur l'ia
WO2022080844A1 (fr) Appareil et procédé de suivi d'objet à l'aide de l'analyse de squelette
WO2020017799A1 (fr) Dispositif et procédé de détection d'un objet anormal, et dispositif d'imagerie les comprenant
WO2022039575A1 (fr) Système de surveillance de processus en temps réel à base d'apprentissage profond et procédé associé
WO2020116983A1 (fr) Appareil électronique, procédé de commande d'appareil électronique et support lisible par ordinateur
WO2021066275A1 (fr) Dispositif électronique et procédé de commande de celui-ci

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19837348

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19837348

Country of ref document: EP

Kind code of ref document: A1