WO2021125550A1 - Electronic device and method for controlling the electronic device

Electronic device and method for controlling the electronic device

Info

Publication number
WO2021125550A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
electronic device
event
fall down
static object
Application number
PCT/KR2020/015300
Other languages
French (fr)
Inventor
Dongjin Kim
Seonghun Jeong
Jonghee Han
Original Assignee
Samsung Electronics Co., Ltd.
Application filed by Samsung Electronics Co., Ltd.
Publication of WO2021125550A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188 Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/28 Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0407 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
    • G08B21/043 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting an emergency event, e.g. a fall
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438 Sensor means for detecting
    • G08B21/0446 Sensor means for detecting worn on the body to detect changes of posture, e.g. a fall, inclination, acceleration, gait
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438 Sensor means for detecting
    • G08B21/0476 Cameras to detect unsafe condition, e.g. video cameras
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B29/00 Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
    • G08B29/18 Prevention or correction of operating errors
    • G08B29/185 Signal analysis techniques for reducing or preventing false alarms or for enhancing the reliability of the system
    • G08B29/186 Fuzzy logic; neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/20 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming only infrared radiation into image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/44 Event detection

Definitions

  • This disclosure relates to an electronic device and a method for controlling the same and, more particularly, to an electronic device capable of detecting a fall down event based on an image obtained through a visual sensor and a method for controlling the same.
  • An AI system is a system in which a machine learns, judges, and iteratively improves analysis and decision making, unlike an existing rule-based smart system.
  • Accuracy, recognition rate, and understanding or anticipation of a user's taste may be correspondingly increased.
  • existing rule-based smart systems are gradually being replaced by deep learning-based AI systems.
  • An artificial intelligence (AI) system is provided in which an image obtained by using a visual sensor is input to a trained (or learned) neural network model to sense a person's fall down, or a person's absence or occupancy (presence in a room).
  • the objective of the disclosure is to provide an electronic device capable of identifying whether a person falls down by comparing a reference image including a static object with an event image including a static object obtained after detecting a fall down event, and a method for controlling the same.
  • an electronic device includes a visual sensor, a memory configured to store at least one instruction, and a processor connected to the visual sensor and the memory and configured to control the electronic device. The processor, by executing the at least one instruction, may identify a static object from a plurality of image frames obtained through the visual sensor and obtain a reference image comprising the identified static object, and, based on a fall down event being detected through a trained neural network model, identify a static object from at least one image frame obtained through the visual sensor after the fall down event is detected, obtain an event image comprising the identified static object from the at least one image frame, and identify whether a person falls down by comparing the reference image and the event image.
  • a method of controlling an electronic device includes identifying a static object from a plurality of image frames obtained through a visual sensor, and obtaining a reference image comprising the identified static object, based on a fall down event being detected through a trained neural network model, identifying a static object from at least one image frame obtained through the visual sensor after the fall down event is detected, obtaining an event image comprising the identified static object from the at least one image frame, and identifying whether a person falls down by comparing the reference image and the event image.
  • an electronic device may provide a user with a more accurate indication that a fall down event has been detected.
  • FIG. 1 is a diagram illustrating an operation of an electronic device detecting a fall down event according to an embodiment
  • FIG. 2 is a block diagram illustrating a configuration of an electronic device according to an embodiment
  • FIG. 3 is a flowchart illustrating a method for detecting a fall down event by an electronic device according to an embodiment
  • FIG. 4 is a flowchart illustrating a method for adjusting a light change amount by a dynamic vision sensor based on a brightness value obtained through an illuminance sensor according to an embodiment
  • FIGS. 5A and 5B are diagrams illustrating images obtained through a dynamic vision sensor according to an embodiment
  • FIG. 6 is a diagram illustrating a method of generating a reference image according to an embodiment
  • FIGS. 7A and 7B are diagrams illustrating a false positive embodiment and a true positive embodiment of the fall down event according to an embodiment.
  • FIG. 8 is a sequence diagram illustrating an alert and an image according to an event of a person by an electronic device according to an embodiment.
  • the expressions "have,” “may have,” “including,” or “may include” may be used to denote the presence of a feature (e.g., a numerical value, a function, an operation, or a component such as a part), and does not exclude the presence of additional features.
  • the expressions "A or B," "at least one of A and/or B," or "one or more of A and/or B," and the like include all possible combinations of the listed items.
  • "A or B," "at least one of A and B," or "at least one of A or B" includes (1) at least one A, (2) at least one B, or (3) at least one A and at least one B together.
  • the terms such as "module," "unit," "part," and so on may be used to refer to an element that performs at least one function or operation, and such element may be implemented as hardware or software, or a combination of hardware and software. Further, except for when each of a plurality of "modules," "units," "parts," and the like needs to be realized in an individual hardware, the components may be integrated in at least one module or chip and may be realized in at least one processor.
  • it is to be understood that when an element (e.g., a first element) is "operatively or communicatively coupled with/to" another element (e.g., a second element), any such element may be directly connected to the other element or may be connected via another element (e.g., a third element).
  • on the other hand, when an element (e.g., a first element) is "directly connected" or "directly accessed" to another element (e.g., a second element), it can be understood that there is no other element (e.g., a third element) between the other elements.
  • the expression “configured to” can be used interchangeably with, for example, “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” or “capable of.”
  • the expression “configured to” does not necessarily mean “specifically designed to” in a hardware sense.
  • "a device configured to” may indicate that such a device can perform an action along with another device or part.
  • the expression “a processor configured to perform A, B, and C” may indicate an exclusive processor (e.g., an embedded processor) to perform the corresponding action, or a generic-purpose processor (e.g., a central processor (CPU) or application processor (AP)) that can perform the corresponding actions by executing one or more software programs stored in the memory device.
  • FIG. 1 is a diagram illustrating an operation of an electronic device 100 for sensing a fall down event, according to an embodiment.
  • the electronic device 100 may be a closed-circuit television (CCTV), a home gateway, or the like, but this is only one embodiment, and may be implemented as a home appliance such as a TV, a refrigerator, a washing machine, an AI speaker, or a portable terminal such as a smart phone, a tablet personal computer (PC), or the like.
  • the electronic device 100 may include a visual sensor to obtain an image frame for sensing a fall down event occurring within a particular space.
  • the electronic device 100 may obtain a plurality of image frames 10-1, 10-2, 10-3 using a visual sensor.
  • the visual sensor may be a dynamic vision sensor (DVS); the DVS is a sensor that detects, on a per-pixel basis, pixels whose values change due to movement to obtain an image, and is thus capable of sensing a moving object.
  • the visual sensor being a dynamic vision sensor is merely an example, and the visual sensor may be implemented as a general image sensor.
  • the electronic device 100 may identify a static object from the plurality of obtained image frames 10-1, 10-2, 10-3.
  • the DVS may sense a moving object rather than a static object, as it is a sensor capable of sensing a change in light due to movement.
  • the electronic device 100 may adjust a threshold value capable of sensing a change in light in the DVS.
  • the electronic device 100 may adjust the threshold depending on illuminance of an external environment. When the threshold value of the change in light sensed by the DVS is lowered, the electronic device 100 may extract the static object as well as the moving object from the image frame.
  • the electronic device 100 may identify an object commonly detected in the plurality of image frames 10-1, 10-2, 10-3 as a static object.
  • the electronic device 100 may include an infrared (IR) light source portion capable of emitting IR light.
  • the electronic device 100 can control the IR light source portion to emit IR while changing the intensity of the emitted IR, and may identify the moving object and the static object by sensing the intensity-varied IR through the DVS.
  • the electronic device may identify an object commonly detected in the plurality of image frames 10-1, 10-2, 10-3 as a static object.
  • the electronic device 100 may sense a change in light using a shutter, or may change a pixel value using a vibration element such as an actuator or a motor, to identify a moving object and a static object included in each of the plurality of image frames.
  • the electronic device 100 may identify an object commonly detected in the plurality of image frames 10-1, 10-2, 10-3 as a static object.
  • the electronic device 100 may obtain a pixel value of a plurality of images through the image sensor and identify a static object based on a fixed pixel value among the obtained pixel values.
  • the electronic device 100 may obtain a reference image 15 based on the static object identified from the plurality of images 10-1, 10-2, 10-3.
  • the reference image may be an image including a static object within a specific space.
  • the electronic device 100 may obtain a reference image that includes a static object based on a data value per pixel included in the plurality of image frames.
  • the electronic device 100 may obtain a reference image that includes a static object using a representative value (e.g., a mean, a mode, a value obtained by AI, etc.) of pixels included in the plurality of image frames.
  • the electronic device may sense a fall down event by inputting image frames 20-1 and 20-2 obtained through the visual sensor to the trained neural network model.
  • the trained neural network model is a neural network model trained to sense a fall down event of a person based on the image frame obtained through the visual sensor, and may be implemented as a deep neural network (DNN).
  • the electronic device may identify the static object from the at least one image frame obtained via the visual sensor after the fall down event has been detected.
  • the electronic device 100 may identify the static object from one image frame obtained a predetermined time after the fall down event is detected; however, this is only one embodiment, and the static object may be identified from a plurality of image frames obtained after the fall down event is detected, using the method of obtaining the reference image described above.
  • the electronic device may then obtain an event image 30 that includes a static object identified from the at least one image frame.
  • the event image 30 may be an image that includes a static object obtained after the fall down event is detected.
  • the electronic device 100 may compare the reference image with the event image to identify a fall down event of a person. That is, the electronic device 100 may obtain similarity between the reference image and the event image.
  • based on the obtained similarity being less than a threshold value, the electronic device 100 may identify (or determine) that the detected fall down event is true positive. That is, as illustrated in FIG. 1, if an additional object 35 that was not included in the reference image 15 is further included in the event image 30 obtained after the fall down event is detected, the electronic device 100 may identify that a person's fall down event has occurred.
  • based on the obtained similarity being greater than or equal to the threshold value, the electronic device 100 may identify that the sensed fall down event is false positive and continue to monitor the fall down event. For example, an event similar to the fall down event, such as an event in which a person moves out of the field of view (FOV), an event in which a person is hidden by an object, an event in which a person leaves through a door, or the like, may not be identified as a fall down event.
  • the electronic device may provide an alert message to an external user terminal.
  • the alert message may include at least one of a message including information on the fall down event and an image frame obtained after the fall down event.
  • the electronic device may reduce false positive probability of the fall down event.
  • FIG. 2 is a block diagram illustrating a configuration of the electronic device according to an embodiment.
  • the electronic device 100 may include a visual sensor 110, an illuminance sensor 120, an IR light source unit 130, a memory 140, a processor 150, and a communicator 160. Some configurations may be added to or omitted from the configurations of the electronic device 100 as illustrated in FIG. 2.
  • the visual sensor 110 is configured to obtain an image for a specific space.
  • the visual sensor 110 may be implemented as the DVS, but this is merely exemplary, and may be implemented as a general image sensor.
  • the illuminance sensor 120 is configured to detect illuminance of an external environment.
  • the electronic device 100 may adjust a threshold value of change in light detectable by the DVS based on the illuminance detected by the illuminance sensor 120.
  • the IR light source unit 130 is configured to illuminance IR light.
  • the IR light source unit 130 may change intensity of IR light by the control of the processor 150, but this is merely exemplary, and may change the light emitting cycle of IR light.
  • the memory 140 may store instructions or data related to at least one other component of the electronic device 100.
  • the memory 140 may include non-volatile memory and volatile memory, for example, a flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • the memory 140 may be accessed by the processor 150, and the processor 150 may read, write, modify, and update data therein.
  • the memory 140 may also store a trained neural network model for sensing a fall down event.
  • the trained neural network model can be an AI model trained to sense whether a person falls down by inputting a plurality of image frames obtained through the visual sensor 110.
  • the trained neural network model may be executed by an existing general purpose processor (e.g., central processing unit (CPU)) or a separate AI dedicated processor (e.g., a graphics processing unit (GPU), a neural processing unit (NPU), etc.).
  • the memory 140 may also store a plurality of configurations (or modules) for sensing the fall down event shown in FIG. 2. When a program for sensing the fall down event is executed or the electronic device is powered on, the plurality of configurations stored in the memory 140 may be loaded into the processor 150 as shown in FIG. 2.
  • the communicator 160 is configured to communicate with various types of external devices in accordance with various types of communication schemes.
  • the communicator 160 may include a Wi-Fi module, a Bluetooth module, an infrared communication module, a wireless communication module, or the like.
  • the processor 150 may communicate with various external user terminals using the communicator 160. Specifically, the communicator 160 may transmit at least one of information on the fall down event and an image frame obtained after the fall down event to the external user terminal.
  • the communicator 160 may transmit at least one of a message including information that the moving object has not been detected for a threshold time or an image frame obtained after the threshold time to the external user terminal. If the electronic device 100 is not equipped with the visual sensor 110, the communicator 160 may receive an image from an external camera device.
  • the processor 150 may be electrically connected to the memory 140 to control the overall operation of the electronic device 100.
  • the processor 150 may execute at least one instruction stored in the memory 140 to identify a static object from a plurality of image frames obtained through the visual sensor 110 and obtain a reference image that includes the identified static object.
  • the processor 150 may identify a static object from at least one image frame obtained through the visual sensor after the fall down event is detected, obtain an event image including the static object identified from the at least one image frame, and compare the reference image and the event image to identify whether the person has fallen down.
  • the processor 150 may include a static object detection module 151, an event detection module 152, a reference image acquisition module 153, an event image acquisition module 154, a comparison module 155, and an alert module 156.
  • the plurality of modules 151 to 156 may be implemented as software but this is merely exemplary, and may be implemented as the combination of software and hardware.
  • the static object detection module 151 may sense (or identify) a static object from a plurality of image frames.
  • the static object detection module 151 can obtain a boundary of the object by adjusting a threshold value of the change of light that the dynamic vision sensor can sense, identify an object included in each of the plurality of image frames based on the boundary of the object, and identify the object commonly included in the plurality of image frames as a static object.
  • the threshold value of the light change can be changed according to the illuminance around the electronic device 100 obtained through the illuminance sensor 120.
  • the static object detection module 151 may control the IR light source unit 130 to emit light while changing the intensity of the emitted IR, and may detect the intensity-varied IR through the dynamic vision sensor to identify an object.
  • the static object detection module 151 may identify an object commonly included in the plurality of image frames as a static object.
  • the static object detection module 151 may identify an object by detecting a change in light using a shutter or changing a pixel value using an actuator or a motor.
  • the static object detection module 151 may identify an object commonly included in the plurality of image frames as a static object.
  • the static object detection module 151 may obtain a pixel value of a plurality of images obtained through the sensor and identify a static object based on a fixed pixel value among the pixel values obtained from the plurality of image frames.
  • the event detection module 152 may detect a fall down event of a person using the trained neural network model 157.
  • the event detection module 152 may detect a fall down event by inputting a plurality of image frames obtained through the visual sensor 110 to the neural network model 157 on a real time basis.
  • the reference image acquisition module 153 may obtain a reference image that includes a static object sensed from the static object detection module 151.
  • the reference image may be obtained with a plurality of image frames acquired prior to sensing the fall down event.
  • the reference image acquisition module 153 may obtain a reference image that includes a static object based on a data value per pixel included in the plurality of image frames.
  • the reference image acquisition module 153 may obtain a reference image that includes a static object using a representative value (e.g., a mean, a mode, a value obtained by the AI, etc.) of pixels included in the plurality of image frames.
  • the event image acquisition module 154 may obtain an event image that includes a static object sensed from the static object detection module 151.
  • the event image may be obtained with at least one image frame obtained after detecting the fall down event.
  • the event image acquisition module 154 may obtain one image frame acquired after the fall down event detection as an event image.
  • the event image acquisition module 154 may obtain an event image that includes a static object based on a data value per pixel included in the plurality of image frames acquired after the fall down event is detected, in the same manner as the reference image acquisition module 153.
  • the event image acquisition module 154 may obtain an event image that includes a static object using a representative value (e.g., a mean, a mode, a value obtained by the AI, etc.) of pixels included in the plurality of image frames acquired after the fall down event is detected.
  • the comparison module 155 may compare the reference image obtained from the reference image acquisition module 153 with the event image obtained from the event image acquisition module 154.
  • the comparison module 155 may identify the similarity of the reference image and the event image. If the visual sensor 110 is a dynamic vision sensor, the similarity may be a similarity of the position of the pixel at which the change in light is detected, and if the visual sensor 110 is a conventional image sensor, the similarity may be a similarity of the pixel values.
  • based on the similarity between the reference image and the event image being less than a threshold value, the comparison module 155 identifies that the person's fall down event is true positive, and if the similarity is greater than or equal to the threshold, the comparison module 155 may identify that the person's fall down event is false positive. For example, if the similarity of the reference image and the event image is less than 98%, the comparison module 155 may identify that the person's fall down event is true positive, and if the similarity is greater than or equal to 98%, the comparison module 155 may identify that the person's fall down event is false positive (a minimal code sketch of this comparison appears after the end of this list).
  • an alert module 156 may provide a user with an alert message.
  • the alert module 156 may transmit, to an external user terminal, an alert message which includes a message including information on the fall down event and at least one of an image frame obtained after detecting the fall down event through the communicator 160.
  • the processor 150 may be configured with one or a plurality of processors.
  • the one or more processors may be a general-purpose processor such as a central processing unit (CPU), an application processor (AP), or a digital signal processor (DSP), a graphics-only processor such as a graphics processing unit (GPU) or a vision processing unit (VPU), or an artificial intelligence-only processor such as a neural network processing unit (NPU).
  • the one or a plurality of processors control the processing of the input data according to a predefined operating rule or artificial intelligence model stored in the memory 140.
  • the artificial intelligence-only processor may be designed with a hardware structure specialized for the processing of a particular AI model.
  • the predetermined operating rule or AI model is made through learning.
  • being made through learning means that a basic AI model is trained using various training data by a learning algorithm, so that a predefined operating rule or AI model set to perform a desired feature (or purpose) is made.
  • the learning may be accomplished through a separate server and/or system, but is not limited thereto and may be implemented in an electronic apparatus. Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
  • the AI model may include a plurality of neural network layers.
  • Each of the plurality of neural network layers includes a plurality of weight values, and may perform a neural network processing operation through an iterative operation leveraging results of a previous layer and a plurality of parameters.
  • the plurality of weight values included in the plurality of neural network layers may be optimized by learning results of the AI model. For example, the plurality of weight values may be updated such that a loss value or a cost value obtained by the AI model is reduced or minimized during the learning process.
  • the artificial neural network may include a deep neural network (DNN), for example, but is not limited to, a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), deep Q-networks, or the like.
  • the electronic device 100 may further include an output device such as a display (not shown) or a speaker (not shown).
  • the electronic device 100 may output information on the fall down event using an output device such as a display or a speaker.
  • FIG. 3 is a flowchart illustrating a method for detecting a fall down event by an electronic device according to an embodiment.
  • the electronic device 100 may obtain a plurality of image frames through a visual sensor in operation S310.
  • the electronic device 100 may obtain a plurality of image frames via a visual sensor that captures a particular space.
  • the electronic device 100 can obtain a plurality of image frames through the DVS, but this is only one embodiment, and can obtain a plurality of image frames through a general image sensor.
  • the electronic device 100 may identify a static object from a plurality of image frames and obtain a reference image including the static object in operation S320.
  • the electronic device 100 may use the dynamic vision sensor to obtain a plurality of image frames that include both the moving object and the static object.
  • a method for obtaining a plurality of image frames including both a moving object and a static object will be described with reference to FIG. 4.
  • the electronic device 100 may detect illuminance around the electronic device in operation S410.
  • the electronic device 100 may adjust a threshold value of the change of light for detecting an object according to the detected illuminance in operation S420. Specifically, the threshold may be adjusted such that the higher the detected illuminance value, the higher the threshold value of the change of light for detecting the object, and the lower the detected illuminance value, the lower the threshold value of the change of light for detecting the object.
  • in operation S430, the electronic device 100 may identify whether a boundary of the object is identified in the image frame obtained by adjusting the threshold value of the change of light. If the image frame obtained by adjusting the threshold of the change of light is a first image frame 510 as shown in FIG. 5A, the electronic device 100 may identify that the boundary of the object is not identified. Alternatively, if the image frame obtained by adjusting the threshold of the change of light is a second image frame 520 as shown in FIG. 5B, the electronic device 100 may identify that the boundary of the object is identified.
  • if the boundary of the object is identified in operation S430-Y, the electronic device 100 may fix the threshold value to identify the object from the plurality of image frames in operation S440. If the boundary of the object is not identified in operation S430-N, the electronic device 100 may again adjust (reduce) the threshold and identify whether the boundary of the object is identified.
  • the electronic device 100 may obtain a plurality of frames including the object by adjusting a threshold value of the change of light to detect an object through the method of FIG. 4.
  • the electronic device 100 may use the IR light source unit 130 to emit IR while changing the intensity of the IR or the light emission period of the IR, to detect objects included in the plurality of image frames. That is, the electronic device 100 may sense a change in light for detecting an object through the dynamic vision sensor by changing the light intensity or the light emitting period of the IR light source unit 130. Thus, the electronic device 100 may obtain a plurality of image frames including the object.
  • the electronic device 100 may identify an object by detecting a change of light using a shutter or changing a pixel value using an actuator or a motor.
  • the electronic device 100 may obtain a reference image 620 including a static object using a plurality of image frames 610-1 through 610-6 obtained at a time before an event occurs.
  • the electronic device 100 may identify an object commonly detected from a plurality of image frames among objects included in a plurality of image frames as a static object, and can obtain a reference image 620 including a static object.
  • the electronic device 100 may obtain a reference image 620 based on image frames obtained during a particular period (e.g., 10 minutes) before the fall down event occurs, but this is only one embodiment, and may obtain the reference image 620 based on an image frame obtained at a particular time point (e.g., an afternoon time) before the fall down event occurs.
  • the electronic device 100 may identify the static object based on a region in which the pixel value is maintained constant among the pixel values of the plurality of image frames obtained through the image sensor.
  • the electronic device 100 can identify the static object based on the image frame obtained during the time having the illuminance value within a predetermined range.
  • the electronic device 100 may identify the static object based on the pixel value of the image frame obtained at the same time period (e.g., morning or evening).
  • the electronic device 100 may then obtain a reference image that includes the identified static object.
  • the electronic device 100 may identify whether the fall down event is detected using the trained neural network model in operation S330.
  • the trained neural network model may be an AI model trained to sense the fall down event by receiving a plurality of image frames as input.
  • if the fall down event is not detected in operation S330-N, the electronic device 100 may obtain (or update) the reference image using a plurality of image frames obtained after a certain time in operation S320. If the fall down event is detected in operation S330-Y, the electronic device 100 may obtain at least one image frame through the visual sensor in operation S340.
  • the electronic device 100 may obtain an event image including a static object from at least one image frame in operation S350.
  • the electronic device 100 may obtain an image frame of a specific time (e.g., one minute after detecting a fall down event) as an event image after a fall down event detection, but this is only one embodiment, and may obtain an event image including a static object using the method described in S320 based on a plurality of image frames obtained after detecting the fall down event.
  • the electronic device 100 may compare a reference image with an event image in operation S360.
  • the electronic device 100 may identify the similarity between the reference image and the event image to identify whether the reference image and the event image are the same.
  • in operation S370, the electronic device 100 may identify the fall down event based on the comparison result identified in operation S360. As shown in FIG. 7A, if a reference image 710 and an event image 720 are different from each other, that is, if the similarity between the reference image 710 and the event image 720 is below a threshold value, the electronic device 100 may identify that the fall down event is true positive and identify that a person's fall down is present. As illustrated in FIG. 7B, if the reference image and the event image are the same, that is, if the similarity is greater than or equal to the threshold value, the electronic device 100 may identify that the fall down event is false positive and identify that the fall down of the person is not present.
  • if it is identified that the fall down is present in operation S370-Y, the electronic device 100 may provide an alert message in operation S380. However, if it is identified that the fall down is not present in operation S370-N, the electronic device 100 may continue to monitor the fall down event in operation S330.
  • FIG. 8 is a sequence diagram illustrating an alert and an image according to an event of a person by an electronic device according to an embodiment.
  • the electronic device 100 may detect an event in operation S810.
  • the electronic device 100 may detect the fall down event described in FIGS. 1 to 7B, but this is merely exemplary, and an event in which a moving object is not detected for a predetermined time, or the like, may also be detected.
  • the electronic device 100 may transmit information about the event to a user terminal 800 in operation S820.
  • the information about the event may include information indicating that the event occurred, the type of the event, the time at which the event occurred, the location where the event occurred, and information about the person for whom the event occurred.
  • the user terminal 800 may output information about the received event in operation S830.
  • the user terminal 800 may visually output information about the event through an output device such as a display, but this is only one embodiment, and the user terminal 800 may audibly output information about the event through a speaker or tactilely output it through a vibration device.
  • the user terminal 800 may receive a user command to identify an image in operation S840.
  • the user terminal 800 may request an image to the electronic device 100 in operation S850.
  • the electronic device 100 may transmit an image obtained after the event detection to the user terminal 800 in response to the image request in operation S860.
  • the user terminal 800 may display a transmitted image in operation S870.
  • a user may more quickly identify the occurrence of an event and the content of the event. Accordingly, a user may handle emergency situations more quickly.
  • the reference image is obtained on the basis of the plurality of image frames obtained before the event detection, and the true positive of the event is identified by comparing the obtained reference image with the event image; however, this is merely one embodiment, and an image frame obtained before the event detection and an image obtained after the event detection may be compared to identify whether the event is a true positive.
  • the true positive of the event may be identified by comparing an image frame obtained at a specific timing (e.g., one day before the event detection) before the event detection and an image frame obtained at a specific timing (e.g., one minute after the event detection) after the event detection.
  • the term "unit" or "module" used in the disclosure includes units consisting of hardware, software, or firmware, and is used interchangeably with terms such as, for example, logic, logic blocks, parts, or circuits.
  • a “unit” or “module” may be an integrally constructed component or a minimum unit or part thereof that performs one or more functions.
  • the module may be configured as an application-specific integrated circuit (ASIC).
  • various embodiments of the disclosure may be implemented in software, including instructions stored on machine-readable storage media readable by a machine (e.g., a computer).
  • An apparatus that may call instructions from the storage medium and operate according to the called instructions may include an electronic apparatus (for example, electronic apparatus A) according to the disclosed embodiments.
  • the processor may perform a function corresponding to the instructions directly or by using other components under the control of the processor.
  • the instructions may include a code generated by a compiler or a code executable by an interpreter.
  • a machine-readable storage medium may be provided in the form of a non-transitory storage medium.
  • non-transitory only denotes that a storage medium does not include a signal and is tangible, and does not distinguish the case in which data is semi-permanently stored in a storage medium from the case in which data is temporarily stored in a storage medium.
  • a "non-transitory storage medium" may include a buffer in which data is temporarily stored.
  • the method according to the above-described embodiments may be provided as being included in a computer program product.
  • the computer program product may be traded as a product between a seller and a consumer.
  • the computer program product may be distributed in the form of machine-readable storage media (e.g., compact disc read only memory (CD-ROM)), through an application store (e.g., Play Store TM or App Store TM), or distributed online directly between users.
  • at least a portion of the computer program product (e.g., a downloadable app) may be at least temporarily stored or temporarily generated in a machine-readable storage medium, such as a server of a manufacturer, a server of an application store, or a memory of a relay server.
  • the respective elements (e.g., module or program) of the elements mentioned above may include a single entity or a plurality of entities.
  • at least one element or operation from among the corresponding elements mentioned above may be omitted, or at least one other element or operation may be added.
  • a plurality of components (e.g., modules or programs) may be integrated into one entity, and the integrated entity may perform functions of at least one of the plurality of elements in the same manner as, or in a similar manner to, that performed by the corresponding element from among the plurality of elements before integration.
  • operations executed by the module, a program module, or other elements may be executed consecutively, in parallel, repeatedly, or heuristically, or at least some operations may be executed in a different order or omitted, or another operation may be added thereto.
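
To make the comparison step above concrete, the following is a minimal Python sketch. It is not part of the patent: the function names, the NumPy-based per-pixel comparison, and the noise tolerance are illustrative assumptions; only the idea of scoring similarity between the reference image and the event image against the 98% cutoff comes from the description above.

```python
import numpy as np

def similarity(reference: np.ndarray, event: np.ndarray) -> float:
    """Fraction of pixels that agree between the two images.

    For a dynamic vision sensor the inputs would be maps of pixels where a
    change in light was detected; for a general image sensor, grayscale
    frames compared within a small noise tolerance.
    """
    if reference.shape != event.shape:
        raise ValueError("reference and event images must have the same shape")
    agreement = np.isclose(reference, event, atol=8)  # tolerance for sensor noise
    return float(agreement.mean())

def classify_fall_event(reference: np.ndarray, event: np.ndarray,
                        threshold: float = 0.98) -> bool:
    """Return True (true positive) when the event image differs enough from
    the reference, i.e., the similarity falls below the threshold; return
    False (false positive) when the scene looks essentially unchanged."""
    return similarity(reference, event) < threshold

# Usage: an additional object (e.g., a fallen person) in the event image
# lowers the pixel agreement below the 98% cutoff.
reference = np.zeros((120, 160), dtype=np.uint8)
event = reference.copy()
event[60:90, 40:100] = 255  # a new blob that was absent from the reference
print(classify_fall_event(reference, event))  # True -> treat as a real fall
```

As in FIG. 7A, a new object in the event image drives the similarity below the cutoff (true positive), while a near-identical scene, as in FIG. 7B, stays above it (false positive).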

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Psychology (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computer Security & Cryptography (AREA)
  • Fuzzy Systems (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are an electronic device and a method for controlling the same. The electronic device includes a visual sensor, a memory configured to store at least one instruction, and a processor connected to the visual sensor and the memory and configured to control the electronic device. The processor, by executing the at least one instruction, may identify a static object from a plurality of image frames obtained through the visual sensor and obtain a reference image comprising the identified static object, and, based on a fall down event being detected through a trained neural network model, identify a static object from at least one image frame obtained through the visual sensor after the fall down event is detected, obtain an event image comprising the identified static object from the at least one image frame, and identify whether a person has fallen down by comparing the reference image and the event image.

Description

ELECTRONIC DEVICE AND METHOD FOR CONTROLLING THE ELECTRONIC DEVICE.
This disclosure relates to an electronic device and a method for controlling the same and, more particularly, to an electronic device capable of detecting a fall down event based on an image obtained through a visual sensor and a method for controlling the same.
In recent years, AI systems have been used in various fields. An AI system is a system in which a machine learns, judges, and iteratively improves analysis and decision making, unlike an existing rule-based smart system. As the use of AI systems increases, accuracy, recognition rate, and understanding or anticipation of a user's taste may be correspondingly increased. As such, existing rule-based smart systems are gradually being replaced by deep learning-based AI systems.
An artificial intelligence (AI) system is provided in which an image obtained by using a visual sensor is input to a trained (or learned) neural network model to sense a person's fall down, or a person's absence or occupancy (presence in a room).
However, in the current AI system, even though an image is input to a trained neural network model, there is an error of detecting a fall down event even in an unintended situation. For example, in an event in which a person in an image is suddenly hidden by an object larger than the person, or an event in which a person quickly exits a room, the person disappears from view after a quick motion, and thus there is an error in that the AI system cannot distinguish a fall down event from these events.
Therefore, there is a necessity to sense a fall down event more accurately by an AI system for detecting a fall down event using an image obtained through a visual sensor.
The objective of the disclosure is to provide an electronic device capable of identifying whether a person falls down by comparing a reference image including a static object with an event image including a static object obtained after detecting a fall down event, and a method for controlling the same.
According to an embodiment, an electronic device includes a visual sensor, a memory configured to store at least one instruction, and a processor connected to the visual sensor and the memory and configured to control the electronic device. The processor, by executing the at least one instruction, may identify a static object from a plurality of image frames obtained through the visual sensor and obtain a reference image comprising the identified static object, and, based on a fall down event being detected through a trained neural network model, identify a static object from at least one image frame obtained through the visual sensor after the fall down event is detected, obtain an event image comprising the identified static object from the at least one image frame, and identify whether a person falls down by comparing the reference image and the event image.
According to an embodiment, a method of controlling an electronic device includes identifying a static object from a plurality of image frames obtained through a visual sensor, and obtaining a reference image comprising the identified static object, based on a fall down event being detected through a trained neural network model, identifying a static object from at least one image frame obtained through the visual sensor after the fall down event is detected, obtaining an event image comprising the identified static object from the at least one image frame, and identifying whether a person falls down by comparing the reference image and the event image.
According to an embodiment, by reducing the false positive probability of a fall down event, an electronic device may provide a user with a more accurate indication that a fall down event has been detected.
FIG. 1 is a diagram illustrating an operation of an electronic device detecting a fall down event according to an embodiment;
FIG. 2 is a block diagram illustrating a configuration of an electronic device according to an embodiment;
FIG. 3 is a flowchart illustrating a method for detecting a fall down event by an electronic device according to an embodiment;
FIG. 4 is a flowchart illustrating a method for adjusting a light change amount by a dynamic vision sensor based on a brightness value obtained through an illuminance sensor according to an embodiment;
FIGS. 5A and 5B are diagrams illustrating images obtained through a dynamic vision sensor according to an embodiment;
FIG. 6 is a diagram illustrating a method of generating a reference image according to an embodiment;
FIGS. 7A and 7B are diagrams illustrating a false positive embodiment and a true positive embodiment of the fall down event according to an embodiment; and
FIG. 8 is a sequence diagram illustrating an alert and an image according to an event of a person by an electronic device according to an embodiment.
Hereinafter, embodiments of the disclosure will be described with reference to the accompanying drawings. However, this disclosure is not intended to limit the embodiments described herein but includes various modifications, equivalents, and / or alternatives.
In this document, the expressions "have," "may have," "including," or "may include" may be used to denote the presence of a feature (e.g., a numerical value, a function, an operation, or a component such as a part), and does not exclude the presence of additional features.
In this document, the expressions "A or B," "at least one of A and / or B," or "one or more of A and / or B," and the like include all possible combinations of the listed items. For example, "A or B," "at least one of A and B," or "at least one of A or B" includes (1) at least one A, (2) at least one B, (3) at least one A and at least one B all together.
The terms such as "first," "second," and so on may be used to describe a variety of elements, but the elements may not be limited by these terms. The terms are labels used only for the purpose of distinguishing one element from another. For example, the first user device and the second user device may represent different user devices, regardless of the order or importance. For example, a first component can be termed a second component, and similarly, a second component can be termed a first component without departing from the scope of the claims set forth in this disclosure.
The term such as "module," "unit," "part," and so on may be used to refer to an element that performs at least one function or operation, and such element may be implemented as hardware or software, or a combination of hardware and software. Further, except for when each of a plurality of "modules," "units," "parts," and the like needs to be realized in an individual hardware, the components may be integrated in at least one module or chip and may be realized in at least one processor.
It is to be understood that when an element (e.g., a first element) is "operatively or communicatively coupled with/to" another element (e.g., a second element), any such element may be directly connected to the other element or may be connected via another element (e.g., a third element). On the other hand, when an element (e.g., a first element) is "directly connected" or "directly accessed" to another element (e.g., a second element), it can be understood that there is no other element (e.g., a third element) between the other elements.
Herein, the expression "configured to" can be used interchangeably with, for example, "suitable for," "having the capacity to," "designed to," "adapted to," "made to," or "capable of." The expression "configured to" does not necessarily mean "specifically designed to" in a hardware sense. Instead, under some circumstances, "a device configured to" may indicate that such a device can perform an action along with another device or part. For example, the expression "a processor configured to perform A, B, and C" may indicate an exclusive processor (e.g., an embedded processor) to perform the corresponding action, or a generic-purpose processor (e.g., a central processor (CPU) or application processor (AP)) that can perform the corresponding actions by executing one or more software programs stored in the memory device.
The terms used in the description are used to describe an embodiment, but are not intended to limit the scope of other embodiments. Unless otherwise defined specifically, a singular expression may encompass a plural expression. All terms, including technical and scientific terms, used in the description have the meanings commonly understood by those of ordinary skill in the art to which the disclosure belongs. The terms that are used in the disclosure and are defined in a general dictionary may be used with meanings that are identical or similar to the meanings of the terms in the context of the related art, and they are not to be interpreted ideally or excessively unless they have been clearly and specially defined. According to circumstances, even the terms defined in the embodiments of the disclosure may not be interpreted as excluding the embodiments of the disclosure.
The disclosure will now be described in more detail with reference to the drawings. In the following description, a detailed description of known functions or configurations incorporated herein will be omitted when it may unnecessarily obscure the gist of the disclosure. In connection with the description of the drawings, like reference numerals may be used for like elements.
Hereinafter, the embodiments will be described in greater detail with reference to the drawings.
FIG. 1 is a diagram illustrating an operation of an electronic device 100 for sensing a fall down event, according to an embodiment. The electronic device 100 may be a closed-circuit television (CCTV) camera, a home gateway, or the like, but this is only one embodiment; it may also be implemented as a home appliance such as a TV, a refrigerator, a washing machine, or an AI speaker, or as a portable terminal such as a smartphone, a tablet personal computer (PC), or the like. The electronic device 100 may include a visual sensor to obtain image frames for sensing a fall down event occurring within a particular space.
The electronic device 100 may obtain a plurality of image frames 10-1, 10-2, 10-3 using a visual sensor. The visual sensor may be a dynamic vision sensor (DVS), a sensor that obtains an image by detecting, on a per-pixel basis, pixels whose light level changes due to movement, and which is therefore capable of sensing a moving object. However, the visual sensor being a dynamic vision sensor is merely an example, and the visual sensor may instead be implemented as a general image sensor.
The electronic device 100 may identify a static object from the plurality of obtained image frames 10-1, 10-2, 10-3. In general, the DVS senses a moving object rather than a static object, as it is a sensor that detects changes in light caused by movement. The electronic device 100 according to an embodiment may adjust a threshold value for the change in light that the DVS can sense. The electronic device 100 may adjust the threshold depending on the illuminance of the external environment. When the threshold value of the change in light sensed by the DVS is lowered, the electronic device 100 may extract the static object as well as the moving object from the image frame.
The electronic device 100 may identify an object commonly detected in the plurality of image frames 10-1, 10-2, 10-3 as a static object, as sketched below.
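As a minimal sketch of this step, assuming each DVS frame has already been reduced to a binary H×W event map (an assumption for illustration, not a detail given in the disclosure), a pixel that stays active across nearly all frames can be treated as belonging to a static object:

```python
import numpy as np

def static_object_mask(event_frames, presence_ratio=0.9):
    """Mark pixels that register events in nearly every frame.

    With the DVS light-change threshold lowered, static edges fire
    continuously, so a pixel active in most frames is treated as part
    of a static object. `event_frames` is a list of binary HxW arrays;
    `presence_ratio` is an illustrative tuning value.
    """
    stack = np.stack(event_frames).astype(np.float32)  # (N, H, W)
    presence = stack.mean(axis=0)                      # fraction of frames active per pixel
    return presence >= presence_ratio                  # boolean static-object mask
```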
According to another embodiment, the electronic device 100 may include an infrared (IR) light source unit capable of emitting IR. The electronic device 100 may control the IR light source unit to emit IR while changing the intensity of the emitted IR, and may identify the moving object and the static object by sensing the intensity-modulated IR through the DVS. The electronic device 100 may identify an object commonly detected in the plurality of image frames 10-1, 10-2, 10-3 as a static object.
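A rough sketch of this capture loop follows; `ir.set_intensity()` and `sensor.read_frame()` are hypothetical driver calls standing in for whatever hardware API is actually used:

```python
import itertools

def capture_with_ir_modulation(ir, sensor, intensities=(0.3, 0.7), n_frames=6):
    """Alternate the IR intensity between captures so that even a static
    object produces a brightness change the DVS can register.

    `ir` and `sensor` are placeholder driver objects; real APIs differ.
    """
    frames = []
    for level in itertools.islice(itertools.cycle(intensities), n_frames):
        ir.set_intensity(level)             # change the emitted IR intensity
        frames.append(sensor.read_frame())  # the DVS now sees a light change
    return frames
```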
According to another embodiment, the electronic device 100 may sense a change in light using a shutter, or may change pixel values using a vibration element such as an actuator or a motor, to identify a moving object and a static object included in each of the plurality of image frames. The electronic device 100 may identify an object commonly detected in the plurality of image frames 10-1, 10-2, 10-3 as a static object.
According to another embodiment, the electronic device 100 may obtain a pixel value of a plurality of images through the image sensor and identify a static object based on a fixed pixel value among the obtained pixel values.
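Under the assumption of grayscale frames from the image sensor, a fixed pixel value can be read as low per-pixel variation over time; one possible sketch:

```python
import numpy as np

def static_pixels(frames, max_std=2.0):
    """Mark pixels whose intensity barely varies across frames as static.

    `frames` is a list of HxW grayscale arrays from an ordinary image
    sensor; `max_std` is an illustrative tolerance for what counts as
    a fixed pixel value.
    """
    stack = np.stack(frames).astype(np.float32)  # (N, H, W)
    return stack.std(axis=0) <= max_std          # boolean static mask
```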
The electronic device 100 may obtain a reference image 15 based on the static object identified from the plurality of images 10-1, 10-2, 10-3. The reference image may be an image including a static object within a specific space.
The electronic device 100 may obtain a reference image that includes the static object based on a data value per pixel included in the plurality of image frames. The electronic device 100 may obtain a reference image that includes the static object using a representative value of the pixels included in the plurality of image frames (e.g., a mean, a mode, a value obtained by the AI, etc.).
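For example, using the per-pixel mean as the representative value (the mode or a learned value would be applied the same way), a sketch might be:

```python
import numpy as np

def reference_image(frames):
    """Collapse N image frames into one reference image using a
    per-pixel representative value. The mean is used here; the mode
    or a value produced by an AI model are the other options the
    description mentions.
    """
    stack = np.stack(frames).astype(np.float32)  # (N, H, W)
    return stack.mean(axis=0)                    # per-pixel mean image
```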
The electronic device 100 may sense a fall down event by inputting image frames 20-1 and 20-2 obtained through the visual sensor to the trained neural network model. The trained neural network model is a neural network model trained to sense a fall down event of a person based on the image frames obtained through the visual sensor, and may be implemented as a deep neural network (DNN).
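The disclosure does not specify the network architecture, so the following is only an illustrative stand-in: a small convolutional classifier that takes a stack of T frames and outputs a fall probability:

```python
import torch
import torch.nn as nn

class FallDetector(nn.Module):
    """Illustrative fall-event classifier: a stack of T frames in,
    a fall/no-fall probability out. The architecture is an assumption;
    the disclosure only states that a trained DNN is used."""

    def __init__(self, t_frames=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(t_frames, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # (B, 32, 1, 1)
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                       # x: (B, T, H, W)
        z = self.features(x).flatten(1)         # (B, 32)
        return torch.sigmoid(self.head(z))      # fall probability in [0, 1]
```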
If a fall down event is detected, the electronic device 100 may identify the static object from at least one image frame obtained via the visual sensor after the fall down event has been detected. The electronic device 100 may identify the static object from one image frame captured a predetermined time after the fall down event is detected, but this is only one embodiment; the static object may instead be identified from a plurality of image frames obtained after the fall down event is detected, using the method of obtaining the reference image as described above. The electronic device 100 may then obtain an event image 30 that includes the static object identified from the at least one image frame. The event image 30 may be an image that includes the static object obtained after the fall down event is detected.
The electronic device 100 may compare the reference image with the event image to identify a fall down event of a person. That is, the electronic device 100 may obtain similarity between the reference image and the event image.
As illustrated in FIG. 1, if the reference image 15 and the event image 30 are different, that is, if the similarity of the reference image 15 and the event image 30 is below a threshold, the electronic device 100 may identify (or determine) that the detected fall down event is a true positive. That is, as illustrated in FIG. 1, if an additional object 35 that was not included in the reference image 15 is included in the event image 30 obtained after the fall down event is detected, the electronic device 100 may identify that a person's fall down event has occurred.
If the reference image and the event image are the same, that is, if the similarity of the reference image and the event image is greater than or equal to the threshold value, the electronic device 100 may identify that the sensed fall down event is a false positive and continue to monitor for fall down events. In this way, an event similar to the fall down event, such as an event in which a person moves out of the field of view (FOV), an event in which a person is hidden by an object, or an event in which a person leaves through a door, is not identified as a fall down event.
When it is identified that the detected fall down event is a true positive, the electronic device 100 may provide an alert message to an external user terminal. The alert message may include at least one of a message including information on the fall down event and an image frame obtained after the fall down event.
As described above, by comparing the static object before and after detecting the fall down event, the electronic device may reduce false positive probability of the fall down event.
FIG. 2 is a block diagram illustrating a configuration of the electronic device according to an embodiment. As illustrated in FIG. 2, the electronic device 100 may include a visual sensor 110, an illuminance sensor 120, an IR light source unit 130, a memory 140, a processor 150, and a communicator 160. Some components may be added to or omitted from the configuration of the electronic device 100 illustrated in FIG. 2.
The visual sensor 110 is configured to obtain an image for a specific space. The visual sensor 110 according to an embodiment may be implemented as the DVS, but this is merely exemplary, and may be implemented as a general image sensor.
The illuminance sensor 120 is configured to detect illuminance of an external environment. The electronic device 100 may adjust a threshold value of change in light detectable by the DVS based on the illuminance detected by the illuminance sensor 120.
The IR light source unit 130 is configured to emit IR light. The IR light source unit 130 may change the intensity of the IR light under the control of the processor 150, but this is merely exemplary; it may instead change the light-emitting cycle of the IR light.
The memory 140 may store instructions or data related to at least one other component of the electronic device 100. The memory 140 may include non-volatile memory and volatile memory, for example, a flash memory, a hard disk drive (HDD), or a solid state drive (SSD). The memory 140 may be accessed by the processor 150, and reading/writing/modifying/updating of data by the processor 150 may be performed. The memory 140 may also store a trained neural network model for sensing a fall down event. The trained neural network model may be an AI model trained to sense whether a person falls down from a plurality of image frames obtained through the visual sensor 110. The trained neural network model may be executed by an existing general-purpose processor (e.g., a central processing unit (CPU)) or a separate AI-dedicated processor (e.g., a graphics processing unit (GPU), a neural processing unit (NPU), etc.). The memory 140 may also store a plurality of configurations (or modules) for sensing the fall down event shown in FIG. 2. When a program for sensing the fall down event is executed or the electronic device is powered on, the plurality of configurations stored in the memory 140 may be loaded into the processor 150 as shown in FIG. 2.
The communicator 160 is configured to communicate with various types of external devices in accordance with various types of communication schemes. The communicator 160 may include a Wi-Fi module, a Bluetooth module, an infrared communication module, a wireless communication module, or the like. The processor 150 may communicate with various external user terminals using the communicator 160. Specifically, the communicator 160 may transmit at least one of information on the fall down event and an image frame obtained after the fall down event to the external user terminal. The communicator 160 may transmit at least one of a message including information that the moving object has not been detected for a threshold time or an image frame obtained after the threshold time to the external user terminal. If the electronic device 100 is not equipped with the visual sensor 110, the communicator 160 may receive an image from an external camera device.
The processor 150 may be electrically connected to the memory 140 to control the overall operation of the electronic device 100. The processor 150 may execute at least one instruction stored in the memory 140 to identify a static object from a plurality of image frames obtained through the visual sensor 110 and obtain a reference image that includes the identified static object. When a fall down event is detected through the trained neural network model, the processor 150 may identify a static object from at least one image frame obtained through the visual sensor after the fall down event is detected, obtain an event image including the static object identified from the at least one image frame, and compare the reference image and the event image to identify whether the person has fallen down.
As illustrated in FIG. 2, the processor 150 may include a static object detection module 151, an event detection module 152, a reference image acquisition module 153, an event image acquisition module 154, a comparison module 155, and an alert module 156. The plurality of modules 151 to 156 may be implemented as software but this is merely exemplary, and may be implemented as the combination of software and hardware.
In one example, the static object detection module 151 may sense (or identify) a static object from a plurality of image frames. The static object detection module 151 may obtain the boundary of an object by adjusting the threshold value of the change in light that the dynamic vision sensor can sense, identify an object included in each of the plurality of image frames based on the boundary of the object, and identify the object commonly included in the plurality of image frames as a static object. The threshold value of the light change may be changed according to the illuminance around the electronic device 100 obtained through the illuminance sensor 120.
As another example, the static object detection module 151 may control the IR light source unit 130 to emit light while changing the intensity of the IR emitted by the IR light source unit 130, and detect the intensity-modulated IR through the dynamic vision sensor to identify an object. The static object detection module 151 may identify an object commonly included in the plurality of image frames as a static object.
According to another embodiment, the static object detection module 151 may identify an object by detecting a change in light using a shutter or by changing pixel values using an actuator or a motor. The static object detection module 151 may identify an object commonly included among the objects in the plurality of image frames as a static object.
As another example, when the visual sensor 110 is implemented as a general image sensor, the static object detection module 151 may obtain a pixel value of a plurality of images obtained through the sensor and identify a static object based on a fixed pixel value among the pixel values obtained from the plurality of image frames.
The event detection module 152 may detect a fall down event of a person using the trained neural network model 157. The event detection module 152 may detect a fall down event by inputting a plurality of image frames obtained through the visual sensor 110 to the neural network model 157 in real time.
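A possible shape for this real-time loop, reusing the illustrative `FallDetector` above and a placeholder `sensor.read_frame()` call, is a sliding window over the most recent frames:

```python
from collections import deque

import numpy as np
import torch

def monitor(sensor, model, t_frames=8, threshold=0.5):
    """Run the model on a sliding window of the latest frames.

    `sensor.read_frame()` is a placeholder for the visual-sensor API;
    each call is assumed to return an HxW float array. Yields the
    window of frames whenever a candidate fall event is detected.
    """
    window = deque(maxlen=t_frames)
    while True:
        window.append(sensor.read_frame())
        if len(window) < t_frames:
            continue                                     # wait for a full window
        batch = torch.from_numpy(np.stack(list(window))).unsqueeze(0).float()
        if model(batch).item() >= threshold:             # (1, T, H, W) -> probability
            yield list(window)                           # candidate fall event
```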
The reference image acquisition module 153 may obtain a reference image that includes the static object sensed by the static object detection module 151. The reference image may be obtained from a plurality of image frames acquired prior to sensing the fall down event. The reference image acquisition module 153 may obtain a reference image that includes the static object based on a data value per pixel included in the plurality of image frames. Specifically, the reference image acquisition module 153 may obtain a reference image that includes the static object using a representative value (e.g., a mean, a mode, a value obtained by the AI, etc.) of the pixels included in the plurality of image frames.
The event image acquisition module 154 may obtain an event image that includes the static object sensed by the static object detection module 151. The event image may be obtained from at least one image frame obtained after detecting the fall down event. In one embodiment, the event image acquisition module 154 may obtain one image frame acquired after the fall down event detection as the event image. As another example, the event image acquisition module 154 may obtain an event image that includes the static object based on a data value per pixel included in the plurality of image frames acquired after the fall down event is detected, in the same manner as the reference image acquisition module 153. The event image acquisition module 154 may obtain an event image that includes the static object using a representative value (e.g., a mean, a mode, a value obtained by the AI, etc.) of the pixels included in the plurality of image frames acquired after the fall down event is detected.
The comparison module 155 may compare the reference image obtained from the reference image acquisition module 153 with the event image obtained from the event image acquisition module 154. The comparison module 155 may identify the similarity of the reference image and the event image. If the visual sensor 110 is a dynamic vision sensor, the similarity may be a similarity of the positions of the pixels at which the change in light is detected, and if the visual sensor 110 is a general image sensor, the similarity may be a similarity of the pixel values.
If the similarity is less than the threshold, the comparison module 155 may identify that the person's fall down event is a true positive, and if the similarity is greater than or equal to the threshold, the comparison module 155 may identify that the person's fall down event is a false positive. For example, if the similarity of the reference image and the event image is less than 98%, the comparison module 155 may identify that the person's fall down event is a true positive, and if the similarity is greater than or equal to 98%, a false positive.
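As a sketch of this decision for the DVS case, where similarity can be measured over the positions of pixels registering a light change (here via intersection-over-union of two boolean masks, one possible choice), with the 98% figure from the example above:

```python
import numpy as np

def is_true_positive(reference_mask, event_mask, threshold=0.98):
    """Return True when the fall down event should be treated as a
    true positive, i.e., when the reference and event images differ.

    Inputs are boolean HxW masks of pixel positions where a change in
    light was detected; intersection-over-union is one illustrative
    similarity measure (for a general image sensor, the fraction of
    matching pixel values could be used instead).
    """
    inter = np.logical_and(reference_mask, event_mask).sum()
    union = np.logical_or(reference_mask, event_mask).sum()
    similarity = inter / union if union else 1.0  # identical if both empty
    return similarity < threshold                 # different -> fall confirmed
```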
If the fall down event is identified as a true positive, the alert module 156 may provide a user with an alert message. The alert module 156 may transmit, to an external user terminal through the communicator 160, an alert message which includes at least one of a message including information on the fall down event and an image frame obtained after detecting the fall down event.
A function associated with artificial intelligence in accordance with the disclosure operates through the processor 150 and the memory 140. The processor 150 may be configured with one or a plurality of processors. The one or more processors may be a general-purpose processor such as a central processing unit (CPU), an application processor (AP), or a digital signal processor (DSP), a graphics-only processor such as a graphics processing unit (GPU) or a vision processing unit (VPU), or an artificial intelligence-only processor such as a neural processing unit (NPU). The one or more processors control the processing of input data according to a predefined operating rule or artificial intelligence model stored in the memory 140. Alternatively, if the one or more processors include an artificial intelligence-only processor, the artificial intelligence-only processor may be designed with a hardware structure specialized for the processing of a particular AI model.
The predefined operating rule or AI model is made through learning. Here, "made through learning" means that a basic AI model is trained using various training data and a learning algorithm, producing a predefined operating rule or AI model set to perform a desired feature (or purpose). The learning may be accomplished through a separate server and/or system, but is not limited thereto and may be implemented in the electronic apparatus. Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.
The AI model may include a plurality of neural network layers. Each of the plurality of neural network layers includes a plurality of weight values, and performs a neural network operation through an iterative computation leveraging the results of the previous layer and the plurality of weight values. The plurality of weight values included in the plurality of neural network layers may be optimized by the learning results of the AI model. For example, the plurality of weight values may be updated such that a loss value or a cost value obtained by the AI model is reduced or minimized during the learning process. The artificial neural network may include a deep neural network (DNN), for example, but not limited to, a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), or a deep Q-network.
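The weight-update idea reduces to gradient descent; the following is a toy, self-contained illustration on a single linear layer and one training sample (a made-up example, not the disclosure's training procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)          # weights of a single linear layer
x = rng.normal(size=3)          # one training sample
y = 1.0                         # its target value
lr = 0.1                        # learning rate

for _ in range(100):
    pred = w @ x                # forward pass
    grad = 2 * (pred - y) * x   # gradient of the squared-error loss w.r.t. w
    w -= lr * grad              # update moves the weights to reduce the loss

print(float((w @ x - y) ** 2))  # loss value is now near zero
```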
In addition to the configurations of FIG. 2, the electronic device 100 may further include an output device such as a display (not shown) or a speaker (not shown). When the fall down event is detected, the electronic device 100 may output information on the fall down event using an output device such as a display or a speaker.
FIG. 3 is a flowchart illustrating a method for detecting a fall down event by an electronic device according to an embodiment.
First, the electronic device 100 may obtain a plurality of image frames through a visual sensor in operation S310. The electronic device 100 may obtain a plurality of image frames via a visual sensor that captures a particular space. The electronic device 100 may obtain the plurality of image frames through the DVS, but this is only one embodiment; the plurality of image frames may instead be obtained through a general image sensor.
The electronic device 100 may identify a static object from a plurality of image frames and obtain a reference image including the static object in operation S320.
In one embodiment, when the visual sensor is a dynamic vision sensor, the electronic device 100 may use the dynamic vision sensor to obtain a plurality of image frames that include both the moving object and the static object. A method for obtaining a plurality of image frames including both a moving object and a static object will be described with reference to FIG. 4.
As illustrated in FIG. 4, the electronic device 100 may detect illuminance around the electronic device in operation S410.
The electronic device 100 may adjust the threshold value of the change of light for detecting an object according to the detected illuminance in operation S420. Specifically, the threshold can be adjusted such that the higher the detected illuminance value, the higher the threshold value of the change of light for detecting the object, and the lower the detected illuminance value, the smaller the threshold of the change of light for detecting the object.
The electronic device 100 may identify whether a boundary of the object is identified in the image frame obtained by adjusting the threshold value of the change of light in operation S430. If the image frame obtained by adjusting the threshold of the change of light is the first image frame 510 as shown in FIG. 5A, the electronic device 100 may identify that the boundary of the object is not identified. Alternatively, if the image frame obtained by adjusting the threshold of the change of light is the second image frame 520 as shown in FIG. 5B, the electronic device 100 may identify that the boundary of the object is identified.
If the boundary (or boundary surface) of the object is identified in operation S430-Y, the electronic device 100 may fix the threshold value and identify the object from the plurality of image frames in operation S440. If the boundary of the object is not identified in operation S430-N, the electronic device 100 may again adjust (reduce) the threshold and check whether the boundary of the object is identified.
The electronic device 100 may obtain a plurality of frames including the object by adjusting the threshold value of the change of light to detect an object through the method of FIG. 4, sketched below.
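One way to express the FIG. 4 loop in code; `sensor.set_light_change_threshold()`, `sensor.read_frame()`, and `detect_boundary()` (e.g., an edge detector) are hypothetical stand-ins for the actual implementation:

```python
def calibrate_threshold(sensor, detect_boundary, initial, step=0.1, floor=0.0):
    """FIG. 4 as a calibration loop: start from a threshold chosen
    according to the measured illuminance (S410/S420), lower it until
    an object boundary becomes visible (S430), then fix it (S440).

    The sensor API and `detect_boundary` are placeholder assumptions.
    """
    threshold = initial
    while threshold > floor:
        sensor.set_light_change_threshold(threshold)
        frame = sensor.read_frame()
        if detect_boundary(frame):   # S430-Y: boundary visible, fix the threshold
            return threshold
        threshold -= step            # S430-N: reduce the threshold and retry
    return floor
```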
In another embodiment, the electronic device 100 may use the IR light source unit 130 to emit IR while changing its intensity, or to emit IR while changing its light-emission period, in order to detect objects included in the plurality of image frames. That is, the electronic device 100 may sense a change in light for detecting an object through the dynamic vision sensor by changing the light intensity or the light-emitting period of the IR light source unit 130. Thus, the electronic device 100 may obtain a plurality of image frames including the object.
As another embodiment, the electronic device 100 may identify an object by detecting a change of light using a shutter or changing a pixel value using an actuator or a motor.
As shown in FIG. 6, the electronic device 100 may obtain a reference image 620 including a static object using a plurality of image frames 610-1 through 610-6 obtained before an event occurs. The electronic device 100 may identify an object commonly detected across the plurality of image frames, among the objects included in the plurality of image frames, as a static object, and may obtain a reference image 620 including the static object. The electronic device 100 may obtain the reference image 620 based on image frames obtained during a particular period (e.g., 10 minutes) before the fall down event occurs, but this is only one embodiment; the reference image 620 may instead be obtained based on image frames obtained at a particular time point (e.g., an afternoon time) before the fall down event occurs.
In another embodiment, if the visual sensor is a general image sensor, the electronic device 100 may identify the static object based on a region in which the pixel value remains constant among the pixel values of the plurality of image frames obtained through the image sensor. The electronic device 100 may identify the static object based on image frames obtained during a time when the illuminance value is within a predetermined range. The electronic device 100 may also identify the static object based on the pixel values of image frames obtained during the same time period (e.g., morning or evening). The electronic device 100 may then obtain a reference image that includes the identified static object.
Referring back to FIG. 3, the electronic device 100 may identify whether the fall down event is detected using the trained neural network model in operation S330. The trained neural network model may be an AI model trained to sense the fall down event from a plurality of input frames.
If the fall down event is not detected in operation S330-N, the electronic device 100 may obtain (or update) the reference image using a plurality of image frames obtained after a certain time in operation S320. If the fall down event is detected in operation S330-Y, the electronic device 100 may obtain at least one image frame through the visual sensor in operation S340.
The electronic device 100 may obtain an event image including a static object from the at least one image frame in operation S350. In one embodiment, the electronic device 100 may obtain an image frame from a specific time (e.g., one minute after detecting the fall down event) as the event image, but this is only one embodiment; the event image including the static object may instead be obtained using the method described in S320, based on a plurality of image frames obtained after detecting the fall down event.
The electronic device 100 may compare a reference image with an event image in operation S360. The electronic device 100 may identify the similarity between the reference image and the event image to identify whether the reference image and the event image are the same.
The electronic device 100 may identify the fall down event based on the comparison result of operation S360 in operation S370. As shown in FIG. 7A, if a reference image 710 and an event image 720 are different from each other, that is, if the similarity between the reference image 710 and the event image 720 is below a threshold value, the electronic device 100 may identify that the fall down event is a true positive and that a person's fall down is present. As illustrated in FIG. 7B, if a reference image 730 and an event image 740 are the same, that is, if the similarity between the reference image 730 and the event image 740 is greater than or equal to the threshold value, the electronic device 100 may identify that the fall down event is a false positive and that a fall down of the person is not present.
If it is identified that fall down is present in operation S370-Y, the electronic device 100 may provide an alert message in operation S380. However, if it is identified that fall down is not present in operation S370-N, the electronic device 100 may monitor the fall down event in operation S330.
FIG. 8 is a sequence diagram illustrating the provision of an alert and an image according to an event of a person by the electronic device, according to an embodiment.
The electronic device 100 may detect an event in operation S810. The electronic device 100 may detect the fall down event described with reference to FIGS. 1 to 7B, but this is merely exemplary; another event, such as an event in which no moving object is detected for a predetermined time, may also be detected.
The electronic device 100 may transmit information about the event to a user terminal 800 in operation S820. The information about the event may include the fact that the event occurred, the type of the event, the time at which the event occurred, the location where the event occurred, and information about the person for whom the event occurred.
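A possible encoding of that information (the field names are illustrative assumptions, not a format defined in the disclosure):

```python
import json
import time

# Illustrative payload matching the information items listed above.
event_info = {
    "event_occurred": True,          # the event occurrence fact
    "event_type": "fall_down",       # the type of event
    "timestamp": time.time(),        # the time at which the event occurred
    "location": "living_room",       # the location where the event occurred
    "person": "resident_1",          # the person for whom the event occurred
}
message = json.dumps(event_info)     # serialized for transmission to the terminal
print(message)
```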
The user terminal 800 may output the received information about the event in operation S830. The user terminal 800 may visually output information about the event through an output device such as a display, but this is only one embodiment; the user terminal 800 may audibly output information about the event through a speaker, or tactually output it through a vibration device.
The user terminal 800 may receive a user command to identify an image in operation S840.
When a user command is received, the user terminal 800 may request an image to the electronic device 100 in operation S850. The electronic device 100 may transmit an image obtained after the event detection to the user terminal 800 in response to the image request in operation S860.
The user terminal 800 may display a transmitted image in operation S870.
That is, as described above, by transmitting information about a detected event together with an image obtained after detecting the event, a user may more quickly identify the occurrence of the event and its content. Accordingly, the user may handle emergency situations more quickly.
In the embodiment described above, it has been described that the fall down event is detected, but this is merely exemplary, and the technical spirit of the disclosure may be applied to an embodiment of detecting another event (e.g., room occupancy event).
In the above-described embodiment, the reference image is obtained on the basis of the plurality of image frames obtained before the event detection, and the true positive of the event is identified by comparing the obtained reference image with the event image, but this is merely one embodiment; an image frame obtained before the event detection and an image obtained after the event detection may instead be compared to identify whether the event is a true positive. The true positive of the event may be identified by comparing an image frame obtained at a specific timing before the event detection (e.g., one day before the event detection) with an image frame obtained at a specific timing after the event detection (e.g., one minute after the event detection).
The term "unit" or "module" used in the disclosure includes units consisting of hardware, software, or firmware, and is used interchangeably with terms such as, for example, logic, logic blocks, parts, or circuits. A "unit" or "module" may be an integrally constructed component or a minimum unit or part thereof that performs one or more functions. For example, the module may be configured as an application-specific integrated circuit (ASIC).
Meanwhile, various embodiments of the disclosure may be implemented in software, including instructions stored on machine-readable storage media readable by a machine (e.g., a computer). A machine is an apparatus that may call instructions from the storage medium and execute the called instructions, and may include an electronic apparatus (for example, electronic apparatus A) according to the disclosed embodiments. When the instructions are executed by a processor, the processor may perform a function corresponding to the instructions directly, or by using other components under the control of the processor. The instructions may include a code generated by a compiler or a code executable by an interpreter. A machine-readable storage medium may be provided in the form of a non-transitory storage medium. Herein, the term "non-transitory" only denotes that a storage medium does not include a signal but is tangible, and does not distinguish the case in which data is semi-permanently stored in a storage medium from the case in which data is temporarily stored in a storage medium. For example, a "non-transitory storage medium" may include a buffer in which data is temporarily stored.
According to an embodiment, the method according to the above-described embodiments may be provided as being included in a computer program product. The computer program product may be traded as a product between a seller and a consumer. The computer program product may be distributed in the form of machine-readable storage media (e.g., a compact disc read only memory (CD-ROM)), through an application store (e.g., Play Store TM and App Store TM), or online directly between two user devices. In the case of online distribution, at least a portion of the computer program product (e.g., a downloadable app) may be at least temporarily stored in, or temporarily generated in, a server of the manufacturer, a server of the application store, or a machine-readable storage medium such as the memory of a relay server.
According to the embodiments, each of the elements mentioned above (e.g., a module or a program) may include a single entity or a plurality of entities. According to the embodiments, at least one element or operation from among the corresponding elements mentioned above may be omitted, or at least one other element or operation may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be combined to form a single entity. In this case, the integrated entity may perform the functions of each of the plurality of elements in the same manner as, or in a similar manner to, the corresponding element from among the plurality of elements before integration. Operations executed by the module, the program module, or other elements according to the variety of embodiments may be executed consecutively, in parallel, repeatedly, or heuristically, or at least some operations may be executed in a different order or omitted, or another operation may be added thereto.

Claims (15)

  1. An electronic device comprising:
    a visual sensor;
    a memory configured to store at least one instruction; and
    a processor, connected to the visual sensor and the memory, configured to control the electronic device,
    wherein the processor, by executing the at least one instruction, is configured to:
    identify a static object from a plurality of image frames obtained through the visual sensor and obtain a reference image comprising the identified static object,
    based on a fall down event being detected through a trained neural network model, identify a static object from at least one image frame obtained through the visual sensor after the fall down event is detected,
    obtain an event image comprising the identified static object from the at least one image frame, and
    identify whether a person falls down by comparing the reference image and the event image.
  2. The electronic device of claim 1, wherein:
    the visual sensor is a dynamic vision sensor, and
    the processor is configured to:
    obtain a boundary of an object by adjusting a threshold value of a change in light detectable by the dynamic vision sensor, and identify an object included in each of the plurality of image frames based on the boundary of the object, and
    identify a commonly included object among objects included in the plurality of image frames as a static object.
  3. The electronic device of claim 2, wherein the threshold value of the change in light is changed according to illuminance around the electronic device.
  4. The electronic device of claim 1, further comprising:
    an infrared (IR) light source unit,
    wherein the visual sensor is a dynamic vision sensor, and
    wherein the processor is configured to:
    emit IR while changing intensity of IR emitted by the IR light source unit,
    identify an object included in each of the plurality of image frames by detecting, through the dynamic vision sensor, the IR with the changed intensity, and
    identify a commonly included object among objects included in the plurality of image frames as a static object.
  5. The electronic device of claim 1, wherein:
    the visual sensor is an image sensor, and
    the processor is configured to obtain a pixel value of a plurality of images obtained through the image sensor, and identify the static object based on a fixed pixel value among the obtained pixel values.
  6. The electronic device of claim 1, wherein the trained neural network model is an artificial intelligence model trained to detect a person's fall down presence by inputting a plurality of image frames obtained through the visual sensor.
  7. The electronic device of claim 1, wherein the processor is configured to:
    identify a similarity between the reference image and the event image, and
    based on the similarity being less than a threshold value, identify that the fall down event of a person is true positive, and based on the similarity being greater than or equal to the threshold value, identify that the fall down event of a person is false positive.
  8. The electronic device of claim 1, further comprising:
    a communicator comprising a circuitry,
    wherein the processor is configured to, based on identifying that the fall down event of the person is true positive, transmit, to a user terminal through the communicator, at least one of a message comprising information on the fall down event and an image frame after the fall down event.
  9. The electronic device of claim 1, further comprising:
    a communicator comprising a circuitry,
    wherein the processor is configured to, based on a moving object not being detected for a threshold time from an image obtained through the visual sensor, transmit, to a user terminal through the communicator, at least one of a message comprising information that the moving object is not detected and an image frame obtained after the threshold time.
  10. A method of controlling an electronic device, the method comprising:
    identifying a static object from a plurality of image frames obtained through a visual sensor, and obtaining a reference image comprising the identified static object;
    based on a fall down event being detected through a trained neural network model, identifying a static object from at least one image frame obtained through the visual sensor after the fall down event is detected;
    obtaining an event image comprising the identified static object from the at least one image frame; and
    identifying whether a person falls down by comparing the reference image and the event image.
  11. The method of claim 10, wherein:
    the visual sensor is a dynamic vision sensor, and
    the identifying a static object from the plurality of image frames comprises adjusting a threshold value of a change in light detectable by the dynamic vision sensor and obtaining a boundary of an object, identifying an object included in each of the plurality of image frames based on the boundary of the object, and identifying a commonly included object among objects included in the plurality of image frames as a static object.
  12. The method of claim 11, wherein the threshold value of the change in light is changed according to illuminance around the electronic device.
  13. The method of claim 10, wherein the electronic device further comprises an infrared (IR) light source unit,
    wherein the visual sensor is a dynamic vision sensor,
    wherein the identifying a static object from the plurality of image frames comprises:
    emitting IR while changing intensity of IR emitted by the IR light source unit,
    identifying an object included in each of the plurality of image frames by detecting, through the dynamic vision sensor, the IR with the changed intensity, and
    identifying a commonly included object among objects included in the plurality of image frames as a static object.
  14. The method of claim 10, wherein:
    the visual sensor is an image sensor, and
    the identifying a static object from the plurality of image frames comprises obtaining a pixel value of a plurality of images obtained through the image sensor, and identifying the static object based on a fixed pixel value among the obtained pixel values.
  15. The method of claim 10, wherein the trained neural network model is an artificial intelligence model trained to detect a person's fall down presence by inputting a plurality of image frames obtained through the visual sensor.
PCT/KR2020/015300 2019-12-20 2020-11-04 Electronic device and method for controlling the electronic device. WO2021125550A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0171990 2019-12-20
KR1020190171990A KR20210079823A (en) 2019-12-20 2019-12-20 Electronic device and Method for controlling the electronic device thereof

Publications (1)

Publication Number Publication Date
WO2021125550A1 true WO2021125550A1 (en) 2021-06-24

Family

ID=76478662

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/015300 WO2021125550A1 (en) 2019-12-20 2020-11-04 Electronic device and method for controlling the electronic device.

Country Status (2)

Country Link
KR (1) KR20210079823A (en)
WO (1) WO2021125550A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102410286B1 (en) * 2021-11-19 2022-06-22 주식회사 씨앤에이아이 Method for detecting a falling accident based on deep learning and electronic device thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160189501A1 (en) * 2012-12-17 2016-06-30 Boly Media Communications (Shenzhen) Co., Ltd. Security monitoring system and corresponding alarm triggering method
US9896022B1 (en) * 2015-04-20 2018-02-20 Ambarella, Inc. Automatic beam-shaping using an on-car camera system
US20180295337A1 (en) * 2017-04-10 2018-10-11 Intel Corporation Using dynamic vision sensors for motion detection in head mounted displays
US20190090786A1 (en) * 2017-09-27 2019-03-28 Samsung Electronics Co., Ltd. Method and device for detecting dangerous situation
KR20190095200A (en) * 2019-07-26 2019-08-14 엘지전자 주식회사 Apparatus and method for recognizing object in image

Also Published As

Publication number Publication date
KR20210079823A (en) 2021-06-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20902507; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20902507; Country of ref document: EP; Kind code of ref document: A1)