WO2023113183A1 - Apparatus and method for analyzing children's behavior using edutech technology to improve educational culture for children - Google Patents


Info

Publication number
WO2023113183A1
WO2023113183A1 (PCT/KR2022/015540)
Authority
WO
WIPO (PCT)
Prior art keywords
image
user
unit
analysis
identification
Prior art date
Application number
PCT/KR2022/015540
Other languages
French (fr)
Korean (ko)
Inventor
박윤하
김대수
김종일
서우석
Original Assignee
주식회사 우경정보기술
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 우경정보기술
Publication of WO2023113183A1 publication Critical patent/WO2023113183A1/en

Classifications

    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0013 Medical image data (remote monitoring of patients using telemetry)
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1123 Discriminating type of movement, e.g. walking or running
    • A61B5/1128 Measuring movement of the entire body or parts thereof using image analysis
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/167 Personality evaluation
    • A61B5/168 Evaluating attention deficit, hyperactivity
    • G06Q50/20 Education
    • G06Q50/22 Social work
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling for image mosaicing, i.e. plane images composed of plane sub-images
    • G16H20/70 ICT specially adapted for therapies relating to mental therapies, e.g. psychological therapy
    • A61B2503/06 Children, e.g. for attention deficit diagnosis

Definitions

  • the present invention relates to a device and method for analyzing the behavior of a user such as a child and providing various services based on the analysis result.
  • the analysis device of the present invention may include a collection unit for collecting a first image in which a user is photographed; a de-identification unit that de-identifies the user appearing in the first image; an analysis unit configured to analyze a behavior of the user appearing in the first image; and a storage unit for matching and storing a second image corresponding to the first image de-identified by the de-identification unit and the analysis information analyzed by the analysis unit.
  • the analysis method of the present invention includes a collection step of collecting a first image in which a user is photographed; a de-identification step of de-identifying the user appearing in the first image; a pre-processing step of selecting target images in which the user appears from among a plurality of first images output from the collection step and standardizing only those target images; an analysis step of extracting deviant behaviors of the user by analyzing changes in user areas scraped from consecutive frames of the standardized target image, and storing the scraped user areas and the frame-by-frame analysis information on the user's deviant behaviors in a scrapbook; a re-identification step of identifying whether the user analyzed as showing deviant behavior is a pre-registered first user, and outputting, from among a plurality of scrapbooks, only the deviant behavior analysis information and the scraped user areas included in the scrapbook of the first user; and a storage step of matching and storing a second image corresponding to the first image de-identified in the de-identification step, the analysis information of the first user, and the scraped user areas.
  • the analysis apparatus and method of the present invention may analyze abnormal behavior of users such as children through image analysis.
  • the analysis device may de-identify and store the image that is the basis for determining the abnormal behavior.
  • the de-identified image and the user's behavior analysis information may be provided to various service means in pairs.
  • the user's behavioral analysis information can be utilized by various service means related to accident prevention, safety, education, psychological counseling, and the like.
  • a safer educational environment can be created through child behavior analysis and artificial intelligence-based smart environment service.
  • the present invention can provide a service platform based on child behavior analysis using edutech technology for improving educational culture for children.
  • FIG. 1 is a schematic diagram showing the analysis device of the present invention.
  • FIG. 2 is a flowchart illustrating the operation of the collection unit.
  • FIG. 3 is a flowchart illustrating the operation of the de-identification unit.
  • FIG. 4 is a flowchart illustrating the operation of the pre-processing unit.
  • FIG. 5 is a flowchart illustrating the operation of the analysis unit.
  • FIG. 6 is a flowchart illustrating the operation of the re-identification unit.
  • FIG. 7 is a flowchart showing the analysis method of the present invention.
  • FIG. 8 is a diagram illustrating a computing device according to an embodiment of the present invention.
  • 'and/or' includes a combination of a plurality of listed items or any item among a plurality of listed items.
  • 'A or B' may include 'A', 'B', or 'both A and B'.
  • FIG. 1 is a schematic diagram showing an analysis device 100 of the present invention.
  • the analysis device 100 shown in FIG. 1 may include a collection unit 110, a de-identification unit 120, a pre-processing unit 130, an analysis unit 140, a re-identification unit 150, a storage unit 160, an API server 170, and a control unit 180.
  • the collection unit 110 may collect a first image in which a user is photographed.
  • the first image may include a video.
  • the collecting unit 110 may include a photographing means 10 such as a camera, a Closed Circuit TV (CCTV), or a smartphone for photographing a user.
  • the collection unit 110 may include various communication modules that communicate with the photographing means 10 by wire or wireless.
  • the de-identification unit 120 may de-identify a user appearing in the first image.
  • De-identification processing may mean processing (including image processing) so that a user appearing in the first image is not recognized.
  • the first image may include elements that can identify the user's personal information, such as a face, resident registration number, phone number, vehicle number, address, or company name.
  • a first image containing such personal information cannot be provided to a third party due to ethical norms or laws, which may make it difficult for the user to use various service means involving a third party.
  • once the personal information is removed by the de-identification unit 120, the user can comfortably provide images that satisfy the conditions for receiving a service.
  • a user with a violent tendency may wish to receive a psychological counseling service.
  • if the provision of the first image, which is required to accurately analyze the user's condition, is limited, it is difficult to provide realistic counseling through a counselor.
  • a so-called face mosaic-processed image can be provided to the counselor, and the counselor can conduct psychological counseling with the user through this.
  • the analysis unit 140 may analyze the user's behavior appearing in the first image.
  • the second image corresponding to the first image de-identified by the de-identification unit 120 and the analysis information analyzed by the analysis unit 140 may be matched and stored.
  • FIG. 2 is a flowchart illustrating the operation of the collection unit 110.
  • the collection unit 110 may receive a new video corresponding to the first video from the photographing unit 10 (S511). The collecting unit 110 may transmit the collected first image to the de-identifying unit 120 in order to de-identify personal information included in the image (S512). Also, the collecting unit 110 may transmit the collected first image to the pre-processing unit 130 (S513).
  • the collection unit 110 may set some information included in the first image as identification information or set newly generated identification information to the image.
  • the identification information at this time may not be used to identify the user, but may be used to distinguish the first image from other images.
  • the collection unit 110 may provide the first image for which identification information is set to the de-identification unit 120 and the analysis unit 140, respectively.
  • Identification information included in the first image may also be included in the second image output from the de-identification unit 120, and in the analysis information output from the analysis unit 140.
  • the storage unit 160 may match the second image and the analysis information by using the identification information included in the second image and the identification information included in the analysis information.
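The matching by identification information can be sketched as a simple join; the record layout and the key name "id" below are illustrative assumptions, not details specified in the description:

```python
def match_records(second_images, analysis_infos):
    """Join de-identified (second) images with analysis information records
    that share the same identification information ("id" is an assumed key)."""
    analysis_by_id = {info["id"]: info for info in analysis_infos}
    matched = []
    for img in second_images:
        info = analysis_by_id.get(img["id"])
        if info is not None:
            # Store the pair: de-identified image plus its analysis result.
            matched.append({"id": img["id"],
                            "image": img["image"],
                            "analysis": info["analysis"]})
    return matched
```

Only images whose identification information also appears in an analysis record are kept as pairs, mirroring the storage unit's matched storage.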
  • FIG. 3 is a flowchart illustrating the operation of the de-identification unit 120.
  • before storing the collected videos in the storage unit 160, the de-identification unit 120 may identify objects related to personal information, for example, areas displaying a face, resident registration number, phone number, vehicle number, address, or company name.
  • the de-identification unit 120 may detect an object representing personal identification information included in the first image and damage an area of the object in the first image to be unrecognizable.
  • the de-identified first image may be stored in the storage unit 160 .
  • the de-identification unit 120 may first receive the first image to be de-identified from the collection unit 110 (S521). The de-identification unit 120 may then read (load) the first image frame by frame (S522).
  • the de-identification unit 120 may detect an object representing personal identification information (personal information) in units of frames of the first image (S523).
  • the de-identification unit 120 may mosaic-process the area of the detected object and output a second image corresponding to the mosaic-processed first image (S524).
  • the second image may refer to an image in which a region in which personal identification information is displayed in the first image is damaged or mosaic-processed. In this way, all personal identification information elements included in the first image can be de-identified.
  • the second image subjected to de-identification processing may be stored in the storage unit 160 (S526).
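A minimal sketch of the mosaic processing of step S524, using block averaging over a region assumed to come from a personal-information detector (the detector itself, and the box coordinates, are assumptions for illustration):

```python
import numpy as np

def mosaic_region(frame: np.ndarray, box: tuple, block: int = 8) -> np.ndarray:
    """Coarsen (mosaic) a rectangular region so the person cannot be recognized.

    `box` is (x, y, w, h) in pixel coordinates; `block` is the mosaic cell size.
    Returns a new frame; the input frame is left untouched.
    """
    x, y, w, h = box
    out = frame.copy()
    region = out[y:y + h, x:x + w]
    # Replace each block x block cell with its average value.
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cell = region[by:by + block, bx:bx + block]
            cell[...] = cell.mean(axis=(0, 1), keepdims=True).astype(frame.dtype)
    return out
```

Applied frame by frame to every detected personal-information area, this yields a second image in which the identifying regions are damaged beyond recognition.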
  • the pre-processing unit 130 may divide the plurality of first images into a target image and a dummy image.
  • the target image may include an image in which a user to be analyzed by the analyzer 140 appears among a plurality of first images.
  • the dummy image may include an image in which a user to be analyzed by the analyzer 140 does not appear among the plurality of first images.
  • the pre-processing unit 130 may exclude the dummy image from among the target image and the dummy image and provide only the target image to the analysis unit 140 .
  • the pre-processing unit 130 may divide the single first image into a target section and a dummy section.
  • the target section may refer to a frame section in which a user to be analyzed by the analyzer 140 appears.
  • the dummy section may refer to a frame section in which a user to be analyzed by the analyzer 140 does not appear.
  • the preprocessor 130 may exclude the dummy section and generate a new simplified image using only the target section.
  • the preprocessor 130 may provide the simplified image to the analyzer 140 instead of the first image.
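The division into target and dummy sections can be illustrated with a small helper; the per-frame boolean presence mask is an assumed input from a person detector, not part of the description:

```python
def split_target_sections(user_present):
    """Split a frame-wise presence mask into target sections (user visible).

    `user_present` is one boolean per frame. Returns (start, end) frame
    ranges (end exclusive) covering only the target sections; dummy
    sections are dropped, which is what yields the simplified image.
    """
    sections, start = [], None
    for i, present in enumerate(user_present):
        if present and start is None:
            start = i                      # a target section begins
        elif not present and start is not None:
            sections.append((start, i))    # a dummy section begins
            start = None
    if start is not None:
        sections.append((start, len(user_present)))
    return sections
```

Concatenating the frames of the returned sections produces the simplified image passed to the analysis unit 140 in place of the full first image.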
  • the pre-processing unit 130 may perform standardization to convert the format of the first image according to the input format of the analysis unit 140 .
  • the pre-processing unit 130 may convert at least one of the file size, frames per second (FPS), and image format of the first image according to the input format of the analysis unit 140.
  • since properties such as the size and FPS of the first image vary with the setting values of the photographing unit 10 and the collection environment, while the form of the input data of the artificial intelligence model may be fixed, the properties of the first image need to be modified to match the shape of the input data of the artificial intelligence model.
  • the size and FPS of the first image may be standardized according to the artificial intelligence model loaded in the analysis unit 140 by the pre-processing unit 130 .
  • the first image may be standardized according to the input data form of the analysis model loaded in the analysis unit 140 and the re-identification model provided in the re-identification unit 150 through a pre-processing process.
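A sketch of such standardization, with an assumed fixed input form (the target FPS and frame size below are illustrative values, not taken from the description), could resample a clip in time and space:

```python
import numpy as np

def standardize_clip(frames, src_fps, dst_fps=15, dst_size=(224, 224)):
    """Convert a clip to a fixed input format for the analysis model.

    Temporal resampling picks source frame indices closest to the target
    rate; spatial resizing uses nearest-neighbor index selection.
    """
    n_out = max(1, int(round(len(frames) * dst_fps / src_fps)))
    idx = np.linspace(0, len(frames) - 1, n_out).round().astype(int)
    h_dst, w_dst = dst_size
    out = []
    for i in idx:
        f = frames[i]
        h_src, w_src = f.shape[:2]
        rows = np.arange(h_dst) * h_src // h_dst   # nearest-neighbor rows
        cols = np.arange(w_dst) * w_src // w_dst   # nearest-neighbor cols
        out.append(f[rows][:, cols])
    return np.stack(out)
```

Whatever the camera produced, the analysis model then always receives a tensor of the same frame count per second and the same spatial shape.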
  • the target image selection and the standardization may be performed together by the pre-processing unit 130.
  • FIG. 4 is a flowchart illustrating the operation of the pre-processing unit 130.
  • the pre-processing unit 130 may receive the first image from the collecting unit 110 (S531).
  • the pre-processing unit 130 may select and standardize a target image so that the artificial intelligence model loaded in the analysis unit 140 can analyze the first image. Through image screening and standardization, the processing efficiency of artificial intelligence models can be improved.
  • a selection module and a standardization module may be provided in the preprocessing unit 130 .
  • the selection module may select only the target images in which the user appears (S533) from among the plurality of first images output from the collection unit 110 (S532). By excluding unnecessary images from analysis by the analysis unit 140, the selection module can improve the processing efficiency of the entire system.
  • the standardization module may standardize only the target image among the plurality of first images (S534).
  • standardization may include an operation of converting the format of the target image according to the input format of the analyzer 140 .
  • the standardization module may provide the standardized target image to the analysis unit 140 (S535).
  • 5 is a flowchart illustrating the operation of the analyzer 140.
  • the analysis unit 140 may receive the first image or simplified image that has passed the pre-processing process from the pre-processing unit 130 (S541).
  • the analyzer 140 may load the first image in frame units (S542).
  • the analysis unit 140 may detect and scrap the user area included in each frame (S543).
  • the analyzer 140 may analyze changes in the user area scraped from consecutive frames, and may extract and analyze the user's abnormal behavior through the analysis of the area change (S544).
  • the analyzer 140 may match the scraped user area with the extracted abnormal behavior analysis information and store it in a scrapbook (S545).
  • the scrapbook is a set of data generated by the analyzer 140, and may include a plurality of pairs of scraped user areas and abnormal behavior analysis information.
  • the scraped user area may be a cutout of only the user's body part excluding the background in a specific frame.
  • Analysis information of abnormal behavior may include vector information of the user's movement, or evaluation results of abnormal behavior analyzed through a machine learning model, for example, classification results such as violence, falling, quarreling, and collision.
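As a toy stand-in for the machine-learning evaluation (the actual trained model is not disclosed), a heuristic over per-frame user bounding boxes can illustrate how changes in the scraped user area map to a classification such as a fall:

```python
def detect_fall(boxes, drop_ratio=0.4):
    """Flag possible fall frames from consecutive user bounding boxes.

    `boxes` is a list of (x, y, w, h) per frame. The heuristic (an
    assumption for illustration) flags frame t when the box height
    collapses sharply, as when a standing child drops to the ground.
    """
    events = []
    for t in range(1, len(boxes)):
        h_prev, h_cur = boxes[t - 1][3], boxes[t][3]
        if h_prev > 0 and (h_prev - h_cur) / h_prev >= drop_ratio:
            events.append(t)   # height shrank by at least drop_ratio
    return events
```

In the described system this role is played by a learned model over the scraped regions; the heuristic only shows the shape of the input (frame-wise user areas) and output (frame-level abnormal-behavior labels).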
  • the analyzer 140 may output a specific scrapbook created targeting the specific first image when the analysis of the specific first image is completed or the first image ends (S548).
  • a specific scrapbook may include all scraps and analysis information extracted from a specific first image.
  • the scrapbook may be transferred to the storage unit 160 or the re-identification unit 150 .
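The scrapbook can be pictured as a per-video container of (scrap, analysis) pairs; the field and method names below are illustrative assumptions, since the description only specifies that scraps and their frame-level analysis results are stored together:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Scrapbook:
    """Per-video container pairing scraped user regions with analysis info."""
    video_id: str
    entries: list = field(default_factory=list)

    def add(self, frame_no: int, scrap: Any, analysis: dict) -> None:
        # One entry per analyzed frame: the cropped user region plus its result.
        self.entries.append({"frame": frame_no, "scrap": scrap,
                             "analysis": analysis})

    def abnormal_entries(self) -> list:
        # Entries whose analysis flagged an abnormal behavior.
        return [e for e in self.entries if e["analysis"].get("abnormal")]
```

A completed scrapbook for a specific first image would then be handed as a unit to the storage unit 160 or the re-identification unit 150.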
  • the analysis unit 140 may analyze the user's behavior appearing in the first image and detect the user's abnormal behavior through the analysis.
  • when an abnormal behavior is detected, in particular an emergency abnormal behavior related to a safety accident, the analysis unit 140 may transmit a notification message to the control unit 180 or to the API server 170 that manages user-related services (S546).
  • the notification message may include analysis information and a scraped user area (image). A supervisor or guardian who recognizes the notification message can quickly solve a dangerous situation for the user or his or her surroundings.
  • FIG. 6 is a flowchart illustrating the operation of the re-identification unit 150.
  • the re-identification unit 150 may identify whether the user analyzed by the analysis unit 140 as an abnormal behavior is a pre-registered first user.
  • the second image and analysis information stored in the storage unit 160 may be limited to those of the first user.
  • the second image and analysis information stored in the storage unit 160 may be used for various services for a specific user. In order to provide a service, it is necessary to specify the user to whom the service is provided. Even if a specific user included in the first image shows an abnormal behavior, there is a practical problem in that the service cannot be provided, even if desired, when it is not known where the user lives or who the user is. As a result, since analysis of an unspecified user included in the first image has no practical benefit, it may be advantageous to store only the second image and analysis information of the already registered first user in the storage unit 160.
  • the analyzer 140 may store analysis information on the scraped user region and the user's abnormal behavior in frame units of the first image in a scrapbook.
  • the re-identification unit 150 may receive the scrapbook from the analysis unit 140 (S551). Then, analysis information and scrap images may be loaded in the scrapbook (S552).
  • the re-identification unit 150 may use the scraped user area to identify (re-identify) whether a specific user has already been registered (S553).
  • the re-identification unit 150 may store the abnormal behavior analysis information of the identified user and the scraped user area (scrap image) in the storage server corresponding to the storage unit 160 (S555).
  • the above operations of the re-identification unit 150 may be continuously performed until the scrapbook ends (S556).
  • the re-identification unit 150 may collect deviant behavior information of the first user through re-identification using a machine-learned re-identification model. Based on the collected information, analysis services such as emotion recognition and behavior patterns through experts and artificial intelligence can be performed. Based on the collected user information, services such as personalized learning and risk prediction can be provided.
  • a first user may be registered in advance.
  • a feature vector of the first user may be obtained by inputting an image of the first user to a pretrained artificial intelligence model, and a distance between feature vectors may be compared to specify or identify the feature vector of the first user.
  • the re-identification model has a characteristic of deriving an approximate feature vector even if the shooting environment of the object image is changed. Therefore, it is possible to identify a user by comparing distances between feature vectors of images collected in various shooting environments.
  • Registration of the first user may be performed by storing a feature vector obtained by inputting the image of the first user into the re-identification model in the user database. It is preferable to extract a feature vector using the front, side, and back images of the first user so that the re-identification performance is maintained even when the direction is changed.
  • the re-identification unit 150 first receives the scrapbook transmitted from the analysis unit 140, and reads the analysis information and the scraped user area (scrap image) from the received scrapbook. Identification is performed by inputting the scraped user area into the re-identification model and comparing the obtained feature vector with the feature vectors of pre-registered first users. When the user is identified as the first user, the user behavior analysis information and the scraped user area may be stored in the storage unit 160.
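The feature-vector comparison can be sketched as follows; cosine distance and the threshold value are illustrative assumptions, since the description only requires comparing distances between vectors produced by the machine-learned re-identification model:

```python
import numpy as np

def reidentify(scrap_vec, registered, threshold=0.35):
    """Match a scrap's feature vector against pre-registered first users.

    `registered` maps user_id -> list of feature vectors (e.g. extracted
    from front, side, and back images, as the text suggests). Returns the
    best-matching user id, or None if no registered vector is close enough.
    """
    def cosine_dist(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    best_id, best_d = None, threshold
    for user_id, vecs in registered.items():
        d = min(cosine_dist(scrap_vec, v) for v in vecs)
        if d < best_d:
            best_id, best_d = user_id, d
    return best_id
```

Registering a first user amounts to storing such vectors in the user database; keeping several views per user is what preserves re-identification performance when the shooting direction changes.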
  • the controller 180 may provide functions for managing users such as children and educational facilities.
  • the control unit 180 may support accident response and customized education by providing the collected second video, analysis information, notification message, etc. to educational facilities, parents, and experts.
  • the control unit 180 may manage and supervise information collected through artificial intelligence.
  • the counterparts served by the control unit 180 may include educational facilities, parents, and experts.
  • Educational facilities may provide safety and childcare assistance services. Videos collected from multiple sensors are detected and analyzed through artificial intelligence, and the analysis results can be provided to educational facility personnel (supervisors, etc.). Notifications may be provided to the person in charge of the educational facility in the event of a dangerous situation, such as a child falling or fighting. In addition, behavioral patterns collected through follow-up observation are provided to the person in charge of educational facilities and can be used as basic data for providing a customized childcare environment.
  • the controller 180 may provide various types of user information to the guardian.
  • the guardian may check the location information of the child in the educational facility. Through this, it is possible to check the child's arrival at and departure from the facility and to help prevent various risks.
  • the control unit 180 provides experts with abnormal behaviors and data related to children so that they can manage them individually.
  • Individual management systems such as child psychology and psychiatry can be established by providing the collected behavioral and psychological data of children, such as behavioral information, to experts.
  • the quality of personalized education and care can be improved by providing feedback on children's behavior by experts.
  • An API (Application Programming Interface) server connects the platform and the control unit 180 to provide various services.
  • the API server 170 may provide functions such as first user registration and notification messages to educational facilities, parents, and experts. API services for real-time accident prevention and response and customized training can be provided.
  • the API server 170 has four main functions, which are as follows.
  • a function of managing re-identified users corresponding to the first user is provided, including registration, inquiry, and deletion of re-identification users. Target children to be observed can be registered, tracked, and managed.
  • Records of detected abnormal behaviors can be queried. You can receive abnormal behavior information such as abnormal behavior time, behavior, and snapshots for each registered user. It is possible to track and observe the child's abnormal behavior, so it is possible to support personalized care learning.
  • Recorded video can be managed.
  • An education facility manager may add, download, or delete images collected from the photographing unit 10 .
  • Notifications can be sent when risky abnormal behavior is detected.
  • a notification message is delivered to the educational facility; notifications are sent to the person in charge of the educational facility (supervisor) to assist in an immediate response to emergency situations.
  • the storage unit 160 may include a media database and a user database.
  • the first image or the second image collected through the collection unit 110 may be stored in the media database.
  • Analysis information of the abnormal behavior detected by the analyzer 140 may be stored in the user database.
  • FIG. 7 is a flowchart showing the analysis method of the present invention.
  • the analysis method of FIG. 7 may be performed by the analysis device 100 shown in FIG. 1 .
  • the analysis method may include a collection step (S510), a de-identification step (S520), a pre-processing step (S530), an analysis step (S540), a re-identification step (S550), a storage step (S560), a control step, and a service step.
  • a first image in which a user is photographed may be collected.
  • the collecting step (S510) may be performed by the collecting unit 110.
  • a user appearing in the first image may be de-identified.
  • the de-identification step (S520) may be performed by the de-identification unit 120.
  • a target image in which the user appears is selected from among the plurality of first images output from the collection step (S510), and standardization may be performed only on the selected target images.
  • the pre-processing step (S530) may be performed by the pre-processing unit 130.
  • the deviant behavior of the user is extracted by analyzing changes in the scraped user area across consecutive frames of the standardized target image, and the scraped user area and the frame-by-frame analysis information on the user's deviant behavior may be stored in the scrapbook.
  • the analysis step (S540) may be performed by the analyzer 140.
  • the re-identification step (S550) it is identified whether the user analyzed as deviant behavior by the analyzer 140 is a pre-registered first user, and deviant behavior analysis information and scraps included in the scrapbook of the first user among a plurality of scrapbooks. Only the user area that has been selected can be output.
  • the re-identification step (S550) may be performed by the re-identification unit 150.
  • the second image corresponding to the first image de-identified in the de-identification step, the analysis information of the first user output from the re-identification step, and the scraped user area may be matched and stored.
  • the storage step (S560) may be performed by the storage unit 160.
  • a notification message may be transmitted to a supervisor's terminal supervising the user or a guardian's terminal of the user.
  • the control step may be performed by the control unit 180 .
  • the analysis information of the first user stored in the storage unit 160 and the scrapped user area may be provided to the service means through the storage step.
  • the service step may be performed by the API server 170.
  • At least one of the de-identification step, the pre-processing step, the analysis step, the re-identification step, the control step, and the service step may be implemented through various artificial intelligence models generated through machine learning.
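As an informal summary, the sequence of steps above (S510–S560) can be sketched as a chain of placeholder functions. All function names and the string-based "frames" are illustrative assumptions, not from the specification:

```python
# Illustrative sketch of the analysis pipeline of FIG. 7.
# Each stage stands in for the corresponding unit (collection,
# de-identification, preprocessing, analysis).

def collect(frames):                 # S510: gather captured frames
    return list(frames)

def de_identify(frames):             # S520: mask personal information
    return [f.replace("face", "***") for f in frames]

def preprocess(frames):              # S530: keep only frames with a user
    return [f for f in frames if "user" in f]

def analyze(frames):                 # S540: flag abnormal behavior
    return [{"frame": f, "abnormal": "fall" in f} for f in frames]

def run_pipeline(frames):
    return analyze(preprocess(de_identify(collect(frames))))

results = run_pipeline(["user face walking", "empty hallway", "user face fall"])
```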
  • The computing device TN100 of FIG. 8 may be a device described in this specification (e.g., the analysis device 100).
  • The computing device TN100 may include at least one processor TN110, a transceiver TN120, and a memory TN130.
  • The computing device TN100 may further include a storage device TN140, an input interface device TN150, and an output interface device TN160. The elements included in the computing device TN100 may communicate with each other through a bus TN170.
  • The processor TN110 may execute program commands stored in at least one of the memory TN130 and the storage device TN140.
  • The processor TN110 may be a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which the methods according to the embodiments of the present invention are performed.
  • The processor TN110 may be configured to implement the procedures, functions, and methods described in relation to the embodiments of the present invention.
  • The processor TN110 may control each component of the computing device TN100.
  • Each of the memory TN130 and the storage device TN140 may store various information related to the operation of the processor TN110.
  • Each of the memory TN130 and the storage device TN140 may include at least one of a volatile storage medium and a non-volatile storage medium.
  • The memory TN130 may include at least one of read-only memory (ROM) and random-access memory (RAM).
  • The transceiver TN120 may transmit or receive a wired or wireless signal.
  • The transceiver TN120 may perform communication while connected to a network.
  • The embodiments of the present invention are not implemented only through the devices and/or methods described above; they may also be implemented through a program that realizes functions corresponding to the configuration of the embodiments, or through a recording medium on which such a program is recorded. Such an implementation can easily be derived by those skilled in the art from the description of the embodiments above.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Psychiatry (AREA)
  • Child & Adolescent Psychology (AREA)
  • Tourism & Hospitality (AREA)
  • Developmental Disabilities (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Primary Health Care (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physiology (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Educational Administration (AREA)
  • Epidemiology (AREA)
  • Computer Networks & Wireless Communication (AREA)

Abstract

An analysis apparatus is provided. The analysis apparatus may comprise: a collection unit that collects a first image in which a user is captured; a de-identification unit that de-identifies the user appearing in the first image; an analysis unit that analyzes a behavior of the user appearing in the first image; and a storage unit for matching and storing a second image corresponding to the first image de-identified by the de-identification unit and analysis information analyzed by the analysis unit.

Description

Apparatus and method for analyzing children's behavior using edutech technology to improve educational culture for children
The present invention relates to an apparatus and method for analyzing the behavior of a user, such as a child, and providing various services based on the analysis results.
Safety and childcare accidents caused by insufficient infrastructure in child-related facilities are becoming a social problem.
As a result, there is a growing need to identify children in need of treatment early and to provide customized education and care services.
An object of the present invention is to provide an analysis apparatus and method that analyze a user's behavior and provide various services based on the behavioral analysis results while minimizing the exposure of personal information.
The analysis apparatus of the present invention may include: a collection unit that collects a first image in which a user is captured; a de-identification unit that de-identifies the user appearing in the first image; an analysis unit that analyzes the behavior of the user appearing in the first image; and a storage unit in which a second image corresponding to the first image de-identified by the de-identification unit and analysis information analyzed by the analysis unit are matched and stored.
The analysis method of the present invention may include: a collection step of collecting a first image in which a user is captured; a de-identification step of de-identifying the user appearing in the first image; a preprocessing step of selecting a target image in which the user appears from among a plurality of first images output from the collection step and standardizing only the plurality of target images; an analysis step of extracting the user's abnormal behavior by analyzing changes in the user area scraped from consecutive frames of the standardized target image, and storing the scraped user area and the analysis information on the user's abnormal behavior in a scrapbook on a frame-by-frame basis; a re-identification step of identifying whether the user analyzed as showing abnormal behavior by the analysis unit is a pre-registered first user, and outputting, from among a plurality of scrapbooks, only the abnormal behavior analysis information and the scraped user area included in the first user's scrapbook; and a storage step of matching and storing the second image corresponding to the first image de-identified in the de-identification step, the analysis information of the first user output from the re-identification step, and the scraped user area.
The analysis apparatus and method of the present invention may analyze the abnormal behavior of users, such as children, through image analysis.
In addition, the analysis apparatus may de-identify and store the image that serves as the basis for determining the abnormal behavior.
The de-identified image and the user's behavior analysis information may be provided in pairs to various service means.
Thanks to the de-identification processing, the user's behavior analysis information can be utilized by various service means related to accident prevention, safety, education, psychological counseling, and the like.
According to the present invention, a safer educational environment can be created through child behavior analysis and artificial-intelligence-based smart environment services. To this end, the present invention can provide a service platform based on child behavior analysis using edutech technology for improving the educational culture for children.
FIG. 1 is a schematic diagram showing the analysis apparatus of the present invention.
FIG. 2 is a flowchart illustrating the operation of the collection unit.
FIG. 3 is a flowchart illustrating the operation of the de-identification unit.
FIG. 4 is a flowchart illustrating the operation of the preprocessing unit.
FIG. 5 is a flowchart illustrating the operation of the analysis unit.
FIG. 6 is a flowchart illustrating the operation of the re-identification unit.
FIG. 7 is a flowchart showing the analysis method of the present invention.
FIG. 8 is a diagram illustrating a computing device according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry them out. However, the present invention may be embodied in many different forms and is not limited to the embodiments described herein. In the drawings, parts irrelevant to the description are omitted in order to clearly explain the present invention, and similar reference numerals are attached to similar parts throughout the specification.
In this specification, redundant descriptions of the same components are omitted.
Also, in this specification, when a component is referred to as being 'connected' or 'coupled' to another component, it may be directly connected or coupled to the other component, but it should be understood that another component may exist in between. On the other hand, when a component is referred to as being 'directly connected' or 'directly coupled' to another component, it should be understood that no other component exists in between.
In addition, the terms used in this specification are only used to describe specific embodiments and are not intended to limit the present invention.
Also, in this specification, a singular expression may include a plural expression unless the context clearly indicates otherwise.
Also, in this specification, terms such as 'include' or 'have' are only intended to designate that a feature, number, step, operation, component, part, or combination thereof described in the specification exists, and it should be understood that the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof is not precluded.
Also, in this specification, the term 'and/or' includes a combination of a plurality of listed items or any item among a plurality of listed items. In this specification, 'A or B' may include 'A', 'B', or 'both A and B'.
Also, in this specification, detailed descriptions of well-known functions and configurations that may obscure the subject matter of the present invention will be omitted.
FIG. 1 is a schematic diagram showing the analysis apparatus 100 of the present invention.
The analysis apparatus 100 shown in FIG. 1 may include a collection unit 110, a de-identification unit 120, a preprocessing unit 130, an analysis unit 140, a re-identification unit 150, a storage unit 160, an API server 170, and a control unit 180.
The collection unit 110 may collect a first image in which a user is captured. The first image may include a video.
For example, the collection unit 110 may include a photographing means 10 for photographing the user, such as a camera, a closed-circuit TV (CCTV), or a smartphone. When the photographing means 10 is provided separately, the collection unit 110 may include various communication modules that communicate with the photographing means 10 by wire or wirelessly.
The de-identification unit 120 may de-identify the user appearing in the first image. De-identification may mean processing (including image processing) the first image so that the user appearing in it cannot be recognized. The first image may include a face, a resident registration number, a phone number, a vehicle number, an address, a business name, and the like, from which the user's personal information can be identified. A first image containing such personal information cannot be provided to a third party due to ethical constraints or legal regulations. Accordingly, it may be difficult for the user to use various service means in which a third party is involved. When the personal information is removed by the de-identification unit 120, the user can comfortably provide the images required as a condition for receiving services, without difficulty.
For example, a user with violent tendencies may wish to receive a psychological counseling service. In this situation, if the provision of the first image needed to accurately analyze the user's condition is restricted, it is difficult for realistic counseling through a counselor to take place.
However, with the de-identification unit 120, an image in which the face has been mosaic-processed can be provided to the counselor, and the counselor can conduct psychological counseling with the user based on it.
The analysis unit 140 may analyze the behavior of the user appearing in the first image.
In the storage unit 160, the second image corresponding to the first image de-identified by the de-identification unit 120 and the analysis information analyzed by the analysis unit 140 may be matched and stored.
FIG. 2 is a flowchart illustrating the operation of the collection unit 110.
The collection unit 110 may receive a new video corresponding to the first image from the photographing means 10 (S511). The collection unit 110 may transmit the collected first image to the de-identification unit 120 in order to de-identify the personal information included in the image (S512). The collection unit 110 may also transmit the collected first image to the preprocessing unit 130 (S513).
When the first image is obtained from the photographing means 10, the collection unit 110 may set some information included in the first image as identification information, or may set newly generated identification information for the image. This identification information is not for identifying the user but for distinguishing the first image from other images.
The collection unit 110 may provide the first image, for which the identification information has been set, to the de-identification unit 120 and the analysis unit 140, respectively.
The identification information included in the first image may also be included in the second image output from the de-identification unit 120, and in the analysis information output from the analysis unit 140.
The storage unit 160 may match the second image and the analysis information by using the identification information included in the second image and the identification information included in the analysis information.
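A minimal sketch of this matching by shared identification information might look as follows; the field names (`video_id`, `path`, `behavior`) are assumptions for illustration, not from the specification:

```python
# Hypothetical sketch: joining de-identified image records with their
# analysis records via the identification info set by the collection unit.

second_images = [
    {"video_id": "vid-001", "path": "masked/vid-001.mp4"},
    {"video_id": "vid-002", "path": "masked/vid-002.mp4"},
]
analysis_records = [
    {"video_id": "vid-002", "behavior": "fall"},
    {"video_id": "vid-001", "behavior": "normal"},
]

# Index analysis records by their identification info, then pair each
# de-identified image with the matching analysis record.
by_id = {rec["video_id"]: rec for rec in analysis_records}
matched = [
    {**img, "analysis": by_id[img["video_id"]]}
    for img in second_images
    if img["video_id"] in by_id
]
```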
FIG. 3 is a flowchart illustrating the operation of the de-identification unit 120.
Before storing the collected video in the storage unit 160, the de-identification unit 120 may de-identify objects related to personal information, for example, areas or objects in which a face, resident registration number, phone number, vehicle number, address, or business name is displayed.
For example, the de-identification unit 120 may detect an object representing personal identification information included in the first image and obscure the object's area in the first image so that it cannot be recognized. The de-identified first image may be stored in the storage unit 160.
The de-identification unit 120 may first receive the first image to be de-identified from the collection unit 110 (S521). At this time, the de-identification unit 120 may read (load) the first image frame by frame (S522).
The de-identification unit 120 may detect an object representing personal identification information (personal information) in each frame of the first image (S523).
The de-identification unit 120 may mosaic-process the area of the detected object and output a second image corresponding to the mosaic-processed first image (S524). The second image may refer to an image in which the area of the first image displaying personal identification information has been obscured or mosaic-processed. In this way, all personal identification information elements included in the first image can be de-identified. When the first image ends (S525), the de-identified second image may be stored in the storage unit 160 (S526).
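The mosaic processing of step S524 can be illustrated with a minimal pure-Python pixelation over a detected region. A real implementation would operate on actual image buffers (e.g., via an image-processing library), and the region coordinates and block size here are assumed values:

```python
# Minimal sketch of mosaic (pixelation) over a detected region.
# The "image" is a list of rows of grayscale pixel values.

def mosaic(image, top, left, height, width, block=2):
    out = [row[:] for row in image]          # leave the input untouched
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            cells = [(y, x)
                     for y in range(by, min(by + block, top + height))
                     for x in range(bx, min(bx + block, left + width))]
            avg = sum(out[y][x] for y, x in cells) // len(cells)
            for y, x in cells:
                out[y][x] = avg              # every pixel in the block gets the mean
    return out

img = [[10, 20, 30, 40],
       [50, 60, 70, 80],
       [90, 100, 110, 120],
       [130, 140, 150, 160]]
masked = mosaic(img, 0, 0, 2, 2)             # pixelate the top-left 2x2 "face" region
```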
The preprocessing unit 130 may divide the plurality of first images into target images and dummy images.
A target image may be an image, among the plurality of first images, in which a user subject to analysis by the analysis unit 140 appears.
A dummy image may be an image, among the plurality of first images, in which no user subject to analysis by the analysis unit 140 appears.
The preprocessing unit 130 may exclude the dummy images and provide only the target images to the analysis unit 140.
Alternatively, the preprocessing unit 130 may divide a single first image into target sections and dummy sections.
A target section may be a frame section in which a user subject to analysis by the analysis unit 140 appears.
A dummy section may be a frame section in which no user subject to analysis by the analysis unit 140 appears.
The preprocessing unit 130 may exclude the dummy sections and generate a new simplified image using only the target sections.
The preprocessing unit 130 may provide the simplified image to the analysis unit 140 instead of the first image.
Meanwhile, the preprocessing unit 130 may perform standardization that converts the format of the first image to match the input format of the analysis unit 140. For example, the preprocessing unit 130 may convert at least one of the file size, frames per second (FPS), and image format of the first image to match the input format of the analysis unit 140. Properties of the first image, such as size and FPS, vary with the settings of the photographing means 10 and the collection environment, whereas an artificial intelligence model may have a fixed input data form. Accordingly, the properties of the first image need to be modified to match the input data form of the artificial intelligence model. Through the preprocessing unit 130, the size and FPS of the first image may be standardized to suit the artificial intelligence model loaded in the analysis unit 140. For example, through the preprocessing process, the first image may be standardized to match the input data forms of the analysis model loaded in the analysis unit 140 and the re-identification model provided in the re-identification unit 150.
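A hedged sketch of this standardization idea, reducing the frame rate by frame dropping and forcing a fixed frame size, could look as follows. The target values (10 FPS, 224x224) are illustrative assumptions, and a frame is reduced to its size metadata:

```python
# Illustrative sketch of the standardization step: adapting a clip's
# FPS and frame size to the fixed input form an AI model expects.

def standardize(frames, src_fps, dst_fps, dst_size):
    # Drop frames to approximate the target frame rate.
    step = max(1, round(src_fps / dst_fps))
    kept = frames[::step]
    # "Resize" each frame; here a frame is just (width, height) metadata.
    return [dst_size for _ in kept]

clip = [(1920, 1080)] * 30                     # 30 frames captured at 30 FPS
standard = standardize(clip, src_fps=30, dst_fps=10, dst_size=(224, 224))
```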
Preferably, the target image selection and the standardization performed by the preprocessing unit 130 are carried out together.
FIG. 4 is a flowchart illustrating the operation of the preprocessing unit 130.
The preprocessing unit 130 may receive the first image from the collection unit 110 (S531).
The preprocessing unit 130 may select target images and perform standardization so that the artificial intelligence model loaded in the analysis unit 140 can analyze the first image. Image selection and standardization can improve the processing efficiency of the artificial intelligence model.
The preprocessing unit 130 may be provided with a selection module and a standardization module.
The selection module may select, from among the plurality of first images output from the collection unit 110 (S532), only the target images in which the user appears (S533). With the selection module, unnecessary images are excluded from analysis by the analysis unit 140, and the processing efficiency of the entire system can be improved.
The standardization module may perform standardization only on the target images among the plurality of first images (S534). Here, standardization may include converting the format of the target image to match the input format of the analysis unit 140.
The standardization module may provide the standardized target image to the analysis unit 140 (S535).
FIG. 5 is a flowchart illustrating the operation of the analysis unit 140.
The analysis unit 140 may receive, from the preprocessing unit 130, a first image or a simplified image that has passed through the preprocessing process (S541).
For example, the analysis unit 140 may load the first image frame by frame (S542).
The analysis unit 140 may detect and scrape the user area included in each frame (S543).
The analysis unit 140 may analyze changes in the user area scraped from consecutive frames, and may extract and analyze the user's abnormal behavior through the analysis of the area changes (S544).
The analysis unit 140 may match the scraped user area with the analysis information of the extracted abnormal behavior and store them in a scrapbook (S545). The scrapbook is a set of data generated by the analysis unit 140 and may include a plurality of pairs of a scraped user area and abnormal behavior analysis information. A scraped user area may be a cutout of only the user's body, excluding the background, from a specific frame. The abnormal behavior analysis information may include vector information of the user's movement, or evaluation results of abnormal behavior analyzed through a machine-learned model, for example, classification results such as violence, falling, collapsing, quarreling, collision, and aggression.
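The scrapbook described above can be sketched as a simple data structure holding per-frame pairs of a scraped user area and its analysis result. The field names are assumptions for illustration, not from the specification:

```python
# Sketch of the "scrapbook": per-frame pairs of a scraped user area
# and the behavior analysis result for that frame.
from dataclasses import dataclass, field

@dataclass
class ScrapEntry:
    frame_index: int
    user_region: bytes        # cropped user pixels (placeholder)
    behavior: str             # e.g. "fall", "violence", "normal"

@dataclass
class Scrapbook:
    video_id: str
    entries: list = field(default_factory=list)

    def abnormal(self):
        # Only the entries flagged as abnormal behavior.
        return [e for e in self.entries if e.behavior != "normal"]

book = Scrapbook("vid-001")
book.entries.append(ScrapEntry(0, b"...", "normal"))
book.entries.append(ScrapEntry(1, b"...", "fall"))
```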
When the analysis of a specific first image is completed or the first image ends (S548), the analysis unit 140 may output the specific scrapbook created for that first image (S549). The specific scrapbook may include all scraps and analysis information extracted from the specific first image. The scrapbook may be transferred to the storage unit 160 or the re-identification unit 150.
The analysis unit 140 may analyze the behavior of the user appearing in the first image and detect the user's abnormal behavior through the analysis.
When abnormal behavior, in particular emergency abnormal behavior related to a safety accident, is detected (S546), the analysis unit 140 may transmit a notification message to the control unit 180 or the API server 170, which manage services related to the user (S546). The notification message may include the analysis information and the scraped user area (image). A supervisor or guardian who receives the notification message can quickly resolve a dangerous situation involving the user or their surroundings.
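A minimal sketch of such a notification message, assuming a JSON payload with illustrative field names (none of which are defined in the specification), might be:

```python
# Sketch of the notification message sent to the control unit / API
# server when risky abnormal behavior is detected.
import json

def build_alert(video_id, frame_index, behavior, region_ref):
    return json.dumps({
        "video_id": video_id,
        "frame": frame_index,
        "behavior": behavior,          # e.g. "fall", "collision"
        "user_region": region_ref,     # reference to the scraped image
        # Assumed rule: treat falls and collisions as emergencies.
        "severity": "emergency" if behavior in ("fall", "collision") else "warn",
    })

alert = build_alert("vid-001", 42, "fall", "scrap/vid-001-42.png")
```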
FIG. 6 is a flowchart illustrating the operation of the re-identification unit 150.

The re-identification unit 150 may identify whether a user whose behavior was analyzed as abnormal by the analysis unit 140 is a pre-registered first user. In this case, the second image and the analysis information stored in the storage unit 160 may be limited to those of the first user. The second image and analysis information stored in the storage unit 160 may be used for various services directed at a specific user. To provide a service, the user who is to receive it must be specified. Even if an unspecified user appearing in the first image exhibits abnormal behavior, it is not known who that user is or where the user lives, so there is a practical problem that the service cannot be provided even when desired. Consequently, since analyzing an unspecified user in the first image yields little practical benefit, it may be advantageous for the storage unit 160 to store only the second image and analysis information of the already registered first user.

As described above, the analysis unit 140 may store, in a scrapbook, the user region scraped in frame units of the first image together with the analysis information on the user's abnormal behavior.

At this time, the re-identification unit 150 may receive the scrapbook from the analysis unit 140 (S 551) and load the analysis information and the scrap images from the scrapbook (S 552).

The re-identification unit 150 may identify (re-identify) whether a specific user is already registered, using the scraped user region (S 553).

When the specific user is identified as a pre-registered first user (S 554), the re-identification unit 150 may store the abnormal behavior analysis information of the specific user and the scraped user region (scrap image) in the storage server corresponding to the storage unit 160 (S 555).

The above operations of the re-identification unit 150 may be performed repeatedly until the scrapbook is exhausted (S 556).
The re-identification unit 150 may collect abnormal behavior information of the first user through re-identification using a machine-learned re-identification model. Based on the collected information, analysis services such as emotion recognition and behavior-pattern analysis can be performed by experts and by artificial intelligence. Based on the collected user information, services such as personalized learning and risk prediction can be provided.

The first user may be registered in advance. For example, an image of the first user may be input into a pre-trained artificial intelligence model to obtain a feature vector, and the feature vector of the first user may be specified or identified by comparing distances between feature vectors.

The re-identification model has the property of producing similar feature vectors even when the shooting environment of the object image changes. Therefore, a user can be identified by comparing the distances between feature vectors of images collected in various shooting environments.

Registration of the first user may be performed by inputting an image of the first user into the re-identification model and storing the resulting feature vector in the user database. It is preferable to extract feature vectors from the front, side, and back images of the first user so that re-identification performance is maintained even when the viewing direction changes.

The re-identification unit 150 operates as follows. It first receives the scrapbook transmitted from the analysis unit 140, then reads the analysis information and the scraped user regions (scrap images) from it. It inputs each scraped user region into the re-identification model and performs identification by comparing the obtained feature vector with the feature vectors of the pre-registered first users. When the user is identified as a first user, the user behavior analysis information and the scraped user region may be stored in the storage unit 160.
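The matching step above — comparing a query feature vector against the stored vectors of registered first users — can be sketched as follows. This is a minimal illustration assuming cosine distance and a flat in-memory user database; the patent does not specify the distance metric, the threshold, or the model producing the vectors.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def reidentify(query_vec, registered, threshold=0.3):
    """Return the ID of the closest pre-registered first user, or None.

    `registered` maps user IDs to feature vectors previously obtained at
    registration time (e.g., from front, side, and back images).
    The threshold value is an assumption for illustration.
    """
    best_id, best_dist = None, float("inf")
    for user_id, vec in registered.items():
        d = cosine_distance(query_vec, vec)
        if d < best_dist:
            best_id, best_dist = user_id, d
    return best_id if best_dist <= threshold else None

db = {"u1": [1.0, 0.0, 0.2], "u2": [0.0, 1.0, 0.1]}
match = reidentify([0.9, 0.05, 0.2], db)  # vector close to u1's registered vector
```

A query vector far from every registered vector falls outside the threshold and is treated as an unregistered (unspecified) user, whose data would not be stored.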
The control unit 180 may provide functions for managing users such as children and for managing educational facilities. The control unit 180 may support accident response and customized education by providing the collected second image, analysis information, notification messages, and the like to educational facilities, parents, and experts. In addition, the control unit 180 may manage and supervise the information collected through artificial intelligence.

The control unit 180 may be composed of educational facilities, parents, and experts.

An educational facility can provide safety and childcare assistance services. Video collected from multiple sensors is detected and analyzed through artificial intelligence, and the analysis results can be provided to the facility's staff (supervisors and the like). When a dangerous situation such as a child falling or fighting occurs, a notification can be provided to the person in charge of the educational facility. In addition, behavior patterns collected through follow-up observation are provided to the facility's staff and can be used as base data for providing a customized childcare environment.

The control unit 180 may provide various kinds of user information to a guardian. For example, a guardian may check the location of a child in the educational facility. This makes it possible to confirm whether the child has arrived at or left school and helps prevent various incidents.

The control unit 180 provides experts with a child's abnormal behaviors and related data so that each child can be managed individually. By providing collected behavioral and psychological data, such as behavior information, to experts, individual management systems in fields such as child psychology and psychiatry can be established. Experts can provide feedback on a child's behavior, improving the quality of personalized education and childcare.
An API (Application Programming Interface) server connects the platform with the control unit 180 so that various services can be provided. For example, the API server 170 may provide functions such as first-user registration and notification messaging to educational facilities, parents, and experts. API services for real-time accident prevention and response and for customized education can be provided.

The API server 170 has four main functions, as follows.

1. Re-identified user management

Provides functions for managing re-identified users, who correspond to first users, including registration, inquiry, and deletion of re-identified users. A target child to be observed can be registered, tracked, and managed.

2. Abnormal behavior list inquiry

Records of detected abnormal behaviors can be queried. For each registered user, abnormal behavior information such as the time, the action, and snapshots can be retrieved. Since a child's abnormal behavior can be tracked and observed, personalized care and learning can be supported.

3. Video management

Recorded video can be managed. An educational facility manager (supervisor) may add, download, or delete video collected from the photographing means 10.

4. Notification transmission

A notification can be transmitted when a dangerous abnormal behavior is detected. When the analysis unit 140 detects an abnormal behavior, a notification message is delivered to the educational facility. Sending the notification to the person in charge of the facility (supervisor) supports an immediate response to the emergency.
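The four functions above can be summarized with a skeletal interface. The sketch below is an assumption-laden illustration only — the patent names the functions but specifies no endpoints, method names, or data shapes.

```python
# Hypothetical sketch of the four API-server functions; all method and field
# names are illustrative, not taken from the disclosure.
class ApiServer:
    def __init__(self):
        self.users = {}    # 1. re-identified (first) users
        self.events = []   # 2. abnormal-behavior records
        self.videos = {}   # 3. recorded video metadata

    # 1. Re-identified user management: register, inquire, delete
    def register_user(self, user_id, profile):
        self.users[user_id] = profile

    def delete_user(self, user_id):
        self.users.pop(user_id, None)

    # 2. Abnormal behavior list inquiry, optionally filtered per registered user
    def list_events(self, user_id=None):
        return [e for e in self.events if user_id is None or e["user_id"] == user_id]

    # 3. Video management: add (download/delete analogous)
    def add_video(self, video_id, meta):
        self.videos[video_id] = meta

    # 4. Notification transmission on dangerous abnormal behavior
    def notify(self, event):
        self.events.append(event)
        return f"alert sent to supervisor: {event['behavior']}"
```

A facility-side client would call `register_user` for each observed child, then poll `list_events` for tracked abnormal-behavior history.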
The storage unit 160 may include a media database and a user database.

The media database may store the first image or the second image collected through the collection unit 110. Preferably, only the second image is stored in the media database.

The user database may store the analysis information of the abnormal behaviors detected by the analysis unit 140.
FIG. 7 is a flowchart illustrating the analysis method of the present invention.

The analysis method of FIG. 7 may be performed by the analysis device 100 shown in FIG. 1.

The analysis method may include a collection step (S 510), a de-identification step (S 520), a preprocessing step (S 530), an analysis step (S 540), a re-identification step (S 550), a storage step (S 560), a control step, and a service step.

In the collection step (S 510), a first image in which the user is photographed may be collected. The collection step (S 510) may be performed by the collection unit 110.

In the de-identification step (S 520), a user appearing in the first image may be de-identified. The de-identification step (S 520) may be performed by the de-identification unit 120.

In the preprocessing step (S 530), target images in which the user appears are selected from among the plurality of first images output from the collection step (S 510), and standardization may be performed only on the plurality of target images. The preprocessing step (S 530) may be performed by the preprocessing unit 130.
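Elsewhere the disclosure states that standardization may convert the file size, FPS (frames per second), and format of an image to match the analysis unit's input. The FPS part can be sketched as simple frame subsampling; this is a minimal illustration only, and a real system would typically delegate such conversion to a dedicated tool such as FFmpeg.

```python
def standardize_fps(frames, src_fps, dst_fps):
    """Subsample a frame sequence from src_fps down to dst_fps.

    A minimal sketch of the FPS portion of the standardization step;
    upsampling and file-size/format conversion are out of scope here.
    """
    if dst_fps > src_fps:
        raise ValueError("upsampling not supported in this sketch")
    step = src_fps / dst_fps                      # keep one frame every `step`
    kept = int(len(frames) * dst_fps / src_fps)   # number of output frames
    return [frames[int(i * step)] for i in range(kept)]

clip = list(range(30))                        # 30 frames captured at 30 FPS
standardized = standardize_fps(clip, 30, 10)  # every third frame is kept
```

After such standardization, every target image presented to the analysis unit shares one frame rate regardless of the capturing camera.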
In the analysis step (S 540), the user's abnormal behavior is extracted by analyzing changes in the user region scraped from consecutive frames of the standardized target images, and the user region scraped in frame units and the analysis information on the user's abnormal behavior may be stored in a scrapbook. The analysis step (S 540) may be performed by the analysis unit 140.

In the re-identification step (S 550), it is identified whether a user analyzed as showing abnormal behavior by the analysis unit 140 is a pre-registered first user, and only the abnormal behavior analysis information and the scraped user region contained in the scrapbook of the first user, among a plurality of scrapbooks, may be output. The re-identification step (S 550) may be performed by the re-identification unit 150.

In the storage step (S 560), the second image corresponding to the first image de-identified in the de-identification step, the analysis information of the first user output from the re-identification step, and the scraped user region may be matched with one another and stored. The storage step (S 560) may be performed by the storage unit 160.

In the control step, when an emergency abnormal behavior (a critical or emergency situation) is detected in the analysis step, a notification message may be transmitted to a terminal of a supervisor who supervises the user or to a terminal of the user's guardian. The control step may be performed by the control unit 180.

In the service step, the analysis information of the first user and the scraped user region stored in the storage unit 160 through the storage step may be provided to a service means. The service step may be performed by the API server 170.

At least one of the de-identification step, the preprocessing step, the analysis step, the re-identification step, the control step, and the service step described above may be implemented through artificial intelligence models generated through machine learning.
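The end-to-end flow of the method (S 510 through S 560) can be summarized as a chain of placeholder functions. Every function body here is a stand-in assumption — the real units are described only functionally in the disclosure.

```python
# Illustrative end-to-end flow of the analysis method; each placeholder stands
# in for the corresponding unit (collection, de-identification, preprocessing,
# analysis, re-identification, storage) described above.
def collect():            return {"frames": ["f0", "f1"], "user": "u1"}       # S 510
def deidentify(video):    return {**video, "deidentified": True}              # S 520 -> second image
def preprocess(video):    return video                                        # S 530 select/standardize
def analyze(video):       return {"user": video["user"], "behavior": "fall"}  # S 540
def reidentify_step(info, registered):                                        # S 550
    return info if info["user"] in registered else None
def store(record, db):    db.append(record)                                   # S 560

db, registered = [], {"u1"}
video = collect()
second_image = deidentify(video)
info = analyze(preprocess(video))
matched = reidentify_step(info, registered)
if matched:  # only pre-registered first users are stored
    store({"video": second_image, "analysis": matched}, db)
```

Note the de-identified second image, not the raw first image, is what gets matched with the analysis information and stored.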
FIG. 8 is a diagram illustrating a computing device according to an embodiment of the present invention. The computing device TN100 of FIG. 8 may be a device described in this specification (e.g., the analysis device 100).

In the embodiment of FIG. 8, the computing device TN100 may include at least one processor TN110, a transceiver TN120, and a memory TN130. In addition, the computing device TN100 may further include a storage device TN140, an input interface device TN150, and an output interface device TN160. The components included in the computing device TN100 may be connected by a bus TN170 and communicate with one another.

The processor TN110 may execute program commands stored in at least one of the memory TN130 and the storage device TN140. The processor TN110 may be a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which the methods according to embodiments of the present invention are performed. The processor TN110 may be configured to implement the procedures, functions, and methods described in relation to the embodiments of the present invention. The processor TN110 may control each component of the computing device TN100.

Each of the memory TN130 and the storage device TN140 may store various information related to the operation of the processor TN110. Each of the memory TN130 and the storage device TN140 may be composed of at least one of a volatile storage medium and a non-volatile storage medium. For example, the memory TN130 may be composed of at least one of read-only memory (ROM) and random-access memory (RAM).

The transceiver TN120 may transmit or receive a wired or wireless signal. The transceiver TN120 may be connected to a network to perform communication.

Meanwhile, the embodiments of the present invention are not implemented only through the devices and/or methods described so far; they may also be implemented through a program that realizes functions corresponding to the configuration of the embodiments, or through a recording medium on which such a program is recorded. Such an implementation can easily be made by a person of ordinary skill in the art to which the present invention pertains from the description of the embodiments above.

Although the embodiments of the present invention have been described in detail above, the scope of the present invention is not limited thereto, and various modifications and improvements made by those of ordinary skill in the art using the basic concept of the present invention defined in the following claims also fall within the scope of the present invention.

Claims (15)

  1. An analysis device comprising:
    a collection unit that collects a first image in which a user is photographed;
    a de-identification unit that de-identifies the user appearing in the first image;
    an analysis unit that analyzes a behavior of the user appearing in the first image; and
    a storage unit in which a second image, corresponding to the first image de-identified by the de-identification unit, and analysis information analyzed by the analysis unit are matched with each other and stored.
  2. The analysis device of claim 1, wherein:
    when the first image is obtained from a photographing means, the collection unit sets part of the information included in the first image as identification information or sets newly generated identification information in the image;
    the collection unit provides the first image, in which the identification information is set, to the de-identification unit and to the analysis unit, respectively;
    the identification information included in the first image is also included in the second image output from the de-identification unit;
    the identification information included in the first image is also included in the analysis information output from the analysis unit; and
    the storage unit matches the second image with the analysis information using the identification information included in the second image and the identification information included in the analysis information.
  3. The analysis device of claim 1, wherein:
    the de-identification unit detects an object representing personal identification information included in the first image; and
    the de-identification unit damages the region of the object in the first image so that it is unrecognizable.
  4. The analysis device of claim 1, wherein:
    the de-identification unit detects an object representing personal identification information in frame units of the first image; and
    the de-identification unit mosaics the region of the detected object and outputs the second image corresponding to the mosaicked first image.
  5. The analysis device of claim 1, wherein:
    a preprocessing unit is provided that divides a plurality of first images into target images and dummy images;
    a target image is an image in which a user subject to analysis by the analysis unit appears;
    a dummy image is an image in which no user subject to analysis by the analysis unit appears; and
    the preprocessing unit excludes the dummy images from among the target images and the dummy images, and provides only the target images to the analysis unit.
  6. The analysis device of claim 1, wherein:
    a preprocessing unit is provided that divides a single first image into target sections and dummy sections;
    a target section is a frame section in which a user subject to analysis by the analysis unit appears;
    a dummy section is a frame section in which no user subject to analysis by the analysis unit appears;
    the preprocessing unit excludes the dummy sections and generates a new simplified image using only the target sections; and
    the preprocessing unit provides the simplified image to the analysis unit in place of the first image.
  7. The analysis device of claim 1, wherein:
    a preprocessing unit is provided that performs standardization by converting the format of the first image to match the input format of the analysis unit; and
    the preprocessing unit converts at least one of the file size, FPS (frames per second), and format of the first image to match the input format of the analysis unit.
  8. The analysis device of claim 1, wherein:
    a selection module and a standardization module are provided;
    the selection module selects, from among a plurality of first images output from the collection unit, only target images in which the user appears;
    the standardization module performs standardization only on the target images among the plurality of first images;
    the standardization includes converting the format of the target images to match the input format of the analysis unit; and
    the standardization module provides the standardized target images to the analysis unit.
  9. The analysis device of claim 1, wherein:
    the analysis unit detects an abnormal behavior of the user through analysis of the behavior; and
    when the abnormal behavior is detected, the analysis unit transmits a notification message to a control unit that manages services related to the user.
  10. The analysis device of claim 1, wherein:
    the analysis unit loads the first image frame by frame;
    the analysis unit scrapes the user region included in each frame;
    the analysis unit analyzes changes in the user region scraped from consecutive frames; and
    the analysis unit extracts and analyzes an abnormal behavior of the user through analysis of the region changes.
  11. The analysis device of claim 10, wherein:
    the analysis unit matches the scraped user region with the analysis information of the extracted abnormal behavior and stores them in a scrapbook; and
    when the analysis of a specific first image is completed, the analysis unit outputs the specific scrapbook generated for that specific first image.
  12. The analysis device of claim 1, wherein:
    a re-identification unit is provided that identifies whether a user analyzed as showing abnormal behavior by the analysis unit is a pre-registered first user; and
    the second image and the analysis information stored in the storage unit are limited to those of the first user.
  13. The analysis device of claim 12, wherein:
    the analysis unit stores, in a scrapbook, the user region scraped in frame units of the first image and the analysis information of the user's abnormal behavior;
    the re-identification unit identifies whether a specific user is already registered using the scraped user region; and
    when the specific user is identified as the first user, the re-identification unit stores the abnormal behavior analysis information of the specific user and the scraped user region in the storage unit.
  14. An analysis method performed by an analysis device, the method comprising:
    a collection step of collecting a first image in which a user is photographed;
    a de-identification step of de-identifying the user appearing in the first image;
    a preprocessing step of selecting target images in which the user appears from among a plurality of first images output from the collection step, and performing standardization only on the plurality of target images;
    an analysis step of extracting the user's abnormal behavior by analyzing changes in the user region scraped from consecutive frames of the standardized target images, and storing the user region scraped in frame units and the analysis information of the user's abnormal behavior in a scrapbook;
    a re-identification step of identifying whether a user analyzed as showing abnormal behavior by the analysis unit is a pre-registered first user, and outputting only the abnormal behavior analysis information and the scraped user region contained in the scrapbook of the first user among a plurality of scrapbooks; and
    a storage step of matching and storing the second image corresponding to the first image de-identified in the de-identification step, the analysis information of the first user output from the re-identification step, and the scraped user region.
  15. The analysis method of claim 14, further comprising:
    a control step of transmitting a notification message to a terminal of a supervisor who supervises the user or to a terminal of the user's guardian when an emergency abnormal behavior is detected in the analysis step; and
    a service step of providing, to a service means, the analysis information of the first user and the scraped user region stored in a storage unit through the storage step.
PCT/KR2022/015540 2021-12-14 2022-10-13 Apparatus and method for analyzing children's behavior using edutech technology to improve educational culture for children WO2023113183A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210178773A KR20230089956A (en) 2021-12-14 2021-12-14 Apparatus and method for analyzing children's behavior using edutech technology to improve the educational culture of children
KR10-2021-0178773 2021-12-14

Publications (1)

Publication Number Publication Date
WO2023113183A1 true WO2023113183A1 (en) 2023-06-22

Family

ID=86772846

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/015540 WO2023113183A1 (en) 2021-12-14 2022-10-13 Apparatus and method for analyzing children's behavior using edutech technology to improve educational culture for children

Country Status (2)

Country Link
KR (1) KR20230089956A (en)
WO (1) WO2023113183A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5002131B2 (en) * 2005-04-05 2012-08-15 キヤノン株式会社 Imaging device for monitoring and control method thereof
KR101980551B1 (en) * 2018-11-08 2019-05-21 주식회사 다누시스 System For Detecting An Action Through Real-Time Intelligent CCTV Image Analysis Using Machine Learning Object Detection And Method For Detecting An Action Through Real-Time Intelligent CCTV Image Analysis Using Machine Learning Object Detection
KR20190139808A (en) * 2019-12-04 2019-12-18 주식회사 블루비즈 A behavior pattern abnormality discrimination system and method for providing the same
KR20200036656A (en) * 2018-09-28 2020-04-07 한국전자통신연구원 Face image de-identification apparatus and method
KR20210083913A (en) * 2019-12-27 2021-07-07 주식회사 베어테크 Method and apparatus for recording behavior and response of child


Also Published As

Publication number Publication date
KR20230089956A (en) 2023-06-21


Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22907654

Country of ref document: EP

Kind code of ref document: A1