WO2024080647A1 - Method, device, computer program, and computer-readable recording medium for generating and providing sleep content based on user sleep information


Info

Publication number
WO2024080647A1
Authority
WO
WIPO (PCT)
Prior art keywords
sleep
information
user
providing
present
Application number
PCT/KR2023/014988
Other languages
English (en)
Korean (ko)
Inventor
이동헌
홍준기
박혜아
김형국
강소라
배재현
김성연
이태영
김대우
김승훈
Original Assignee
주식회사 에이슬립
Application filed by 주식회사 에이슬립
Priority to KR1020247000275A (published as KR20240052740A)
Priority to KR1020237041711A (published as KR20240052723A)
Publication of WO2024080647A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61M - DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M 21/00 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M 21/02 - Other devices or methods to cause a change in the state of consciousness, for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/20 - Natural language analysis
    • G06F 40/30 - Semantic analysis
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/70 - ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • the present invention relates to a method, device, computer program, and computer-readable recording medium for generating and providing sleep content based on a user's sleep information. More specifically, it relates to creating sleep-related content based on sleep information acquired in the user's sleep environment and providing that content to the user.
  • the number of patients with sleep disorders in Korea increased by about 8% on average per year from 2014 to 2018, and the number of patients treated for sleep disorders in Korea in 2018 reached approximately 570,000.
  • Republic of Korea Patent Publication No. 10-2003-0032529 discloses receiving the user's physical information and outputting vibration and/or ultrasonic waves in a frequency band learned through repetitive learning according to the user's physical condition during sleep, thereby inducing optimal sleep.
  • Republic of Korea Patent Publication No. 10-2022-0015835 relates to an electronic device for evaluating sleep quality and a method of operating the electronic device; it identifies sleep cycles based on sleep-related information acquired by a wearable device during sleep and suggests a method of evaluating sleep quality accordingly.
  • however, sleep analysis methods using conventional wearable devices had the problem that sleep analysis was impossible when the wearable device was not in proper contact with the user's body or when the user was not wearing it. Additionally, when multiple users sleep in the same space, the movement of a user who is not wearing a device interferes with the sleep analysis of the user wearing one, and sleep analysis for the user without a device is impossible.
  • the user can easily obtain sound information related to the sleep environment through a user terminal (e.g., a mobile terminal) that the user carries, and the user's sleep can be analyzed based on the acquired sound information and other sleep environment information.
  • a sleep report containing the user's sleep information is provided through an application (or app) installed on the user's terminal.
  • most of the presented sleep reports contain quantitative information. Such quantitative sleep reports can cause psychological discomfort in users, and a low sleep score in particular can make users uncomfortable.
  • GPT (Generative Pre-trained Transformer)
  • Patent Document 1: Republic of Korea Patent Publication No. 2003-0032529 (published on April 26, 2003)
  • Patent Document 2: Republic of Korea Patent Publication No. 2022-0015835 (published on February 8, 2022)
  • the present invention was conceived in consideration of the problems of the prior art described above, and one purpose of the present invention is to convert the user's mood or feeling about last night's sleep into a sleep image or sleep video and provide that sleep image or sleep video to the user.
  • another object is to provide a method, device, and computer-readable recording medium for generating and providing such a sleep video.
  • the purpose of the present invention is to generate and provide sleep content for the user's sleep using generative artificial intelligence based on information about the user's sleep that can be obtained through a sleep sensor.
  • the purpose of the present invention is to use sleep information to increase the accuracy of sleep analysis, provide useful sleep-related content to users, and improve sleep quality.
  • the purpose of the present invention is to provide a method of generating and providing imagery-inducing information that can induce sleep or improve the quality of sleep.
  • a method for generating and providing a sleep image or a sleep video includes: providing a text input window on a display screen of a user terminal; a text transmission step of transmitting the input text to an external terminal when text about the user's sleeping mood, feeling, or dream memory is input through the text input window; a receiving step of receiving a sleep image or a sleep video corresponding to the text from the external terminal; and a sleep image or sleep video providing step of providing the received sleep image or sleep video on the display screen.
  • example text may be provided in the text input window.
  • the sleep image or sleep video further includes at least one of the quantitative indicators of the user's sleep or a one-line review, and the one-line review may have been derived using a lookup table in which reviews are mapped in advance according to the quantitative indicators.
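A minimal sketch of such a pre-mapped lookup table, assuming a 0-100 sleep score as the quantitative indicator; the thresholds and review texts below are hypothetical, not taken from the patent:

```python
# Hypothetical lookup table mapping a quantitative sleep score (0-100)
# to a pre-written one-line review, as the claim above describes.
ONE_LINE_REVIEWS = [
    (90, "A deep, restorative night. Keep it up!"),
    (70, "A solid night's sleep with room to improve."),
    (50, "A restless night. Try winding down earlier."),
    (0,  "A difficult night. Consider a calmer sleep environment."),
]

def one_line_review(sleep_score: int) -> str:
    """Return the first review whose threshold the score meets."""
    for threshold, review in ONE_LINE_REVIEWS:
        if sleep_score >= threshold:
            return review
    return ONE_LINE_REVIEWS[-1][1]  # fallback for out-of-range input

print(one_line_review(82))  # "A solid night's sleep with room to improve."
```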
  • the entire sleep image or sleep video may be displayed on the display screen, or the sleep image or sleep video may be displayed on a portion of the display screen while quantitative indicators of the user's sleep or a calendar are displayed on the remaining portion.
  • the step of providing the text input window further includes a candidate option providing step of providing candidate options corresponding to the user's sleeping mood on the display screen, wherein the text input window providing step may be performed when at least one of the candidate options is selected.
  • an example image corresponding to the selected option is displayed on the display screen, and in the step of providing the text input window, the text input window may be displayed on the example image.
  • a sleep image or sleep video setting step for setting at least one of the style, color, sketch, or partially blank picture of the sleep image or sleep video may be further included.
  • a method for generating and providing a sleep image or sleep video includes a candidate group providing step of providing keyword candidates for the user's sleeping mood, feeling, or memory of a dream on a display screen of a user terminal; When at least one of the keyword candidates is selected, a text transmission step of transmitting the text of the selected keyword candidate group to an external terminal; A receiving step of receiving a sleep image or a sleep video corresponding to the text from the external terminal; and a sleep image or sleep video providing step of providing the received sleep image or sleep video to the display screen.
  • the sleep image or sleep video further includes at least one of the quantitative indicators of the user's sleep or a one-line review, and the one-line review may have been derived using a lookup table in which reviews are mapped in advance according to the quantitative indicators.
  • the entire sleep image or sleep video may be displayed on the display screen, or the sleep image or sleep video may be displayed on a portion of the display screen while quantitative indicators of the user's sleep or a calendar are displayed on the remaining portion.
  • a method for generating and providing a sleep image or sleep video includes a receiving step of receiving a text about the user's sleeping mood, feeling, or memory of a dream from a user terminal; A sleep image or sleep video output step of inputting the text into a learning model stored in a memory and outputting a sleep image or sleep video corresponding to the text from the learning model; and a sleep image or sleep video transmission step of transmitting the output sleep image or sleep video to the user terminal or computing device.
  • the receiving step further includes receiving environmental sensing information from the user terminal and classifying and measuring quantitative indicators of the user's sleep from the environmental sensing information, and the sleep image or sleep video output step may change the style or color of the output sleep image or sleep video based on the quantitative indicators.
  • the sleep image or sleep video output step may calculate a sleep score for the quantitative indicators, and change the style or color of the sleep image or sleep video according to the calculated sleep score.
  • a computer-readable recording medium stores a computer program that performs the above-described method of generating and providing a sleep image or a sleep video.
  • a user terminal includes a display unit; a wireless communication unit; a control unit; and a memory that stores program instructions executed by the control unit to perform operations, wherein the operations include: providing a text input window on the display screen of the display unit; transmitting the input text to an external terminal through the wireless communication unit when text about the user's sleeping mood, feeling, or dream memory is input through the text input window; receiving a sleep image or sleep video corresponding to the text from the external terminal through the wireless communication unit; and providing the received sleep image or sleep video on the display screen.
  • a user terminal includes a display unit; a wireless communication unit; a control unit; and a memory that stores program instructions executed by the control unit to perform operations, wherein the operations include: providing keyword candidates for the user's sleeping mood, feelings, or memory of dreams on the display screen of the display unit; transmitting the text of the selected keyword candidate to an external terminal through the wireless communication unit when at least one of the keyword candidates is selected; receiving a sleep image or sleep video corresponding to the text from the external terminal through the wireless communication unit; and providing the received sleep image or sleep video on the display screen.
  • an external terminal includes a communication module; a processor; and a memory that stores program instructions executed by the processor to perform operations and stores a machine-learned learning model that outputs a predetermined sleep image or sleep video in response to input text, wherein the operations include: receiving text about the user's sleeping mood, feeling, or dream memory from the user terminal through the communication module; inputting the text into the learning model and outputting a sleep image or sleep video corresponding to the text from the learning model; and transmitting the output sleep image or sleep video to the user terminal or a computing device through the communication module.
  • a method for generating and providing imagery-inducing information includes a preparation step of preparing imagery-inducing information; a preparation information providing step of providing the prepared imagery-inducing information to a user; an acquisition step of acquiring sleep state information from the user; an extraction step of extracting the user's features based on the imagery-inducing information provided to the user and the sleep state information obtained from the user; and a generation step of generating feature-based imagery-inducing information based on the extracted user features.
  • the preparation step may include preparing the imagery-inducing information based on a lookup table.
  • the preparation step may include preparing the imagery-inducing information based on the feature-based imagery-inducing information.
  • the preparation information providing step may include providing one or more of prepared imagery-inducing sound information, prepared imagery-inducing visual information, prepared imagery-inducing text information, and prepared imagery-inducing text sound information, or a combination of two or more of these.
  • the preparation information providing step may include providing one or more of prepared time-series imagery-inducing sound information with an imagery-inducing scenario, prepared time-series imagery-inducing visual information, prepared time-series imagery-inducing text sound information, and prepared time-series imagery-inducing text information, or a combination of two or more of these.
  • a method of generating and providing imagery-inducing information may be provided that further includes a generation information providing step of providing the generated feature-based imagery-inducing information to a user.
  • the generation information providing step may include providing one or more of feature-based imagery-inducing sound information, feature-based imagery-inducing visual information, feature-based imagery-inducing text information, and feature-based imagery-inducing text sound information, or a combination of two or more of these.
  • the generation information providing step may include providing one or more of feature-based time-series imagery-inducing sound information with an imagery-inducing scenario, feature-based time-series imagery-inducing visual information, feature-based time-series imagery-inducing text sound information, and feature-based time-series imagery-inducing text information, or a combination of two or more of these.
  • a method for generating and providing imagery-inducing information includes an information preparation step of preparing information related to a user; an extraction step of extracting the user's features based on the prepared information; and a generation step of generating feature-based imagery-inducing information based on the extracted user features.
  • the information preparation step may include an input step of receiving information related to the user from the user.
  • the user-related information input in the input step may be one or more of content selected by a swipe method, text input by the user, and keywords selected by the user from among presented keywords, or a combination of two or more of these.
  • a method for generating and providing imagery-inducing information may be provided that further includes a generation information providing step of providing the generated feature-based imagery-inducing information to a user, or an acquisition step of acquiring sleep state information from the user.
  • the feature-based imagery-inducing information provided in the generation information providing step may be one or more of feature-based imagery-inducing sound information, feature-based imagery-inducing visual information, feature-based imagery-inducing text information, and feature-based imagery-inducing text sound information, or a combination of two or more of these.
  • the feature-based imagery-inducing information provided in the generation information providing step may be one or more of feature-based time-series imagery-inducing sound information with an imagery-inducing scenario, feature-based time-series imagery-inducing visual information, feature-based time-series imagery-inducing text information, and feature-based time-series imagery-inducing text sound information, or a combination of two or more of these.
  • the extraction step may include extracting the user's features based on one or more of the provided feature-based imagery-inducing information and the obtained sleep state information, or a combination of the two.
  • an electronic device may be provided that includes a memory in which imagery-inducing information is recorded; an output unit that outputs the recorded imagery-inducing information; an acquisition unit that acquires sleep state information from the user; and a processor that extracts the user's features based on the output imagery-inducing information and the acquired sleep state information, wherein the processor generates feature-based imagery-inducing information based on the extracted user features.
  • when sleep state information is acquired from the user by another electronic device, an electronic device may be provided in which the acquisition unit receives the acquired sleep state information from the other electronic device.
  • an electronic device may be provided in which the imagery-inducing information recorded in the memory is based on a lookup table.
  • an electronic device may be provided in which the imagery-inducing information recorded in the memory is based on the feature-based imagery-inducing information.
  • an electronic device may be provided in which the recorded imagery-inducing information output from the output unit is one or more of recorded imagery-inducing sound information, recorded imagery-inducing visual information, recorded imagery-inducing text information, and recorded imagery-inducing text sound information, or a combination of two or more of these.
  • an electronic device may be provided in which the recorded imagery-inducing information output from the output unit is one or more of recorded time-series imagery-inducing sound information with an imagery-inducing scenario, recorded time-series imagery-inducing visual information, recorded time-series imagery-inducing text sound information, and recorded time-series imagery-inducing text information, or a combination of two or more of these.
  • an electronic device may be provided in which the output unit outputs the generated feature-based imagery-inducing information.
  • an electronic device may be provided in which the feature-based imagery-inducing information output from the output unit is one or more of feature-based imagery-inducing sound information, feature-based imagery-inducing visual information, feature-based imagery-inducing text information, and feature-based imagery-inducing text sound information, or a combination of two or more of these.
  • an electronic device may be provided in which the feature-based imagery-inducing information output from the output unit is one or more of feature-based time-series imagery-inducing sound information with an imagery-inducing scenario, feature-based time-series imagery-inducing visual information, feature-based time-series imagery-inducing text sound information, and feature-based time-series imagery-inducing text information, or a combination of two or more of these.
  • an electronic device may be provided that includes a memory in which user-related information is recorded; and a processor that extracts the user's features based on the recorded information, wherein the processor generates feature-based imagery-inducing information based on the extracted user features.
  • an electronic device may be provided that further includes an input unit that receives information related to the user from the user.
  • the information related to the user input to the input unit may be one or more of content selected by a swipe method, text input by the user, and keywords selected by the user from among presented keywords, or a combination of two or more of these.
  • an electronic device may be provided that further includes an output unit that outputs the generated feature-based imagery-inducing information, or an acquisition unit that acquires sleep state information from the user.
  • when sleep state information is acquired from the user by another electronic device, an electronic device may be provided in which the acquisition unit receives the acquired sleep state information from the other electronic device.
  • an electronic device may be provided in which the feature-based imagery-inducing information output from the output unit is one or more of feature-based imagery-inducing sound information, feature-based imagery-inducing visual information, feature-based imagery-inducing text information, and feature-based imagery-inducing text sound information, or a combination of two or more of these.
  • an electronic device may be provided in which the feature-based imagery-inducing information output from the output unit is one or more of feature-based time-series imagery-inducing sound information with an imagery-inducing scenario, feature-based time-series imagery-inducing visual information, feature-based time-series imagery-inducing text information, and feature-based time-series imagery-inducing text sound information, or a combination of two or more of these.
  • an electronic device may be provided in which the processor extracts the user's features based on one or more of the output feature-based imagery-inducing information and the obtained sleep state information, or a combination of the two.
  • an electronic device includes a memory in which imagery-inducing information is recorded; an output unit that outputs the recorded imagery-inducing information; an acquisition unit that obtains sleep state information from the user; means for transmitting the output imagery-inducing information and the obtained sleep state information to a server; means for receiving the user's features extracted by the server based on the transmitted imagery-inducing information and sleep state information; and means for generating feature-based imagery-inducing information based on the received user features.
  • an electronic device may be provided that includes a memory in which user-related information is recorded; means for transmitting the recorded information to a server; means for receiving the user's features extracted by the server based on the transmitted information; and means for generating feature-based imagery-inducing information based on the received user features.
  • an electronic device may be provided that includes a memory in which imagery-inducing information is recorded; an output unit that outputs the recorded imagery-inducing information; an acquisition unit that obtains sleep state information from the user; means for transmitting the output imagery-inducing information and the sleep state information to a server; and means for receiving the feature-based imagery-inducing information that the server generates after extracting the user's features based on the transmitted imagery-inducing information and sleep state information.
  • an electronic device may be provided that includes a memory in which user-related information is recorded; means for transmitting the recorded information to a server; and means for receiving the feature-based imagery-inducing information that the server generates after extracting the user's features based on the transmitted information.
  • an electronic device includes a memory in which imagery-inducing information is recorded; an output unit that outputs the recorded imagery-inducing information; an acquisition unit that obtains sleep state information from the user; a processor that extracts the user's features based on the output imagery-inducing information and the acquired sleep state information; means for transmitting the extracted user features to a server; and means for receiving the feature-based imagery-inducing information generated by the server based on the transmitted user features.
  • an electronic device may be provided that includes a memory in which user-related information is recorded; a processor that extracts the user's features based on the recorded information; means for transmitting the extracted user features to a server; and means for receiving the feature-based imagery-inducing information generated by the server based on the transmitted user features.
  • a server device may be provided that is equipped with an imagery-inducing information generation and provision model that extracts the user's features based on the user's sleep state information obtained through an acquisition unit of an electronic device and the imagery-inducing information output through the output unit of the electronic device, and generates feature-based imagery-inducing information based on the extracted user features.
  • a server device may be provided that is equipped with an imagery-inducing information generation and provision model that extracts the user's features based on information related to the user recorded in the memory of an electronic device, and generates feature-based imagery-inducing information based on the extracted user features.
  • a method of generating and providing sleep content based on user sleep information using generative artificial intelligence may be provided, including a sleep information acquisition step of acquiring sleep information from one or more sleep information sensor devices, the sleep information including the user's sleep sound information; generating one or more data arrays related to the user's sleep based on the acquired sleep information; inputting the generated features about the user's sleep into a content creation artificial intelligence; and generating user sleep content based on the output of the content creation artificial intelligence.
  • the sleep information acquisition step may include converting the user's sleep sound information into information that includes changes in frequency components along the time axis and performing analysis on the converted information.
  • the converted information may visualize the changes along the time axis of the frequency components of the sleep sound information.
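As an illustration of this conversion step, the sketch below turns a sleep audio recording into a mel spectrogram, a standard representation of frequency-component changes along the time axis. The file name and all parameter values (sample rate, FFT size, number of mel bands) are assumptions; the patent does not specify them.

```python
import librosa
import numpy as np

# Load the night's recording as a mono waveform (assumed 16 kHz sample rate).
y, sr = librosa.load("sleep_recording.wav", sr=16000, mono=True)

# Mel spectrogram: rows are mel frequency bands, columns are time frames,
# so each column shows the frequency content at one moment of the night.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=512, n_mels=64)
mel_db = librosa.power_to_db(mel, ref=np.max)  # log scale for visualization

print(mel_db.shape)  # (64 mel bands, number of time frames)
```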
  • the sleep information acquisition step may include a sleep information inference step of inferring information about sleep by using the user's sleep sound information as input to a sleep information inference deep learning model.
  • the sleep information acquisition step may include outputting the inferred sleep information as a hypnogram in the time domain.
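A minimal sketch of rendering an inferred stage sequence as a hypnogram; the stage set, the 30-second epoch length, and the example sequence are assumptions for illustration only.

```python
import matplotlib.pyplot as plt

STAGES = {"Wake": 3, "REM": 2, "Light": 1, "Deep": 0}  # y-axis ordering
inferred = ["Wake", "Light", "Light", "Deep", "Deep", "Light", "REM", "Wake"]

epoch_sec = 30
times = [i * epoch_sec / 60 for i in range(len(inferred))]  # minutes
levels = [STAGES[s] for s in inferred]

# Step plot so each stage holds flat across its epoch, like a clinical hypnogram.
plt.step(times, levels, where="post")
plt.yticks(list(STAGES.values()), list(STAGES.keys()))
plt.xlabel("Time (minutes)")
plt.ylabel("Sleep stage")
plt.title("Hypnogram (illustrative)")
plt.show()
```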
  • the step of generating one or more data arrays about the user's sleep may include generating the one or more data arrays based on the inferred sleep information.
  • the one or more data arrays about the user's sleep may be tensors.
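For illustration, a data array of this kind could be a tensor that stacks per-epoch model outputs; the fields, values, and shapes below are assumptions, not the patent's specification.

```python
import numpy as np

# Per-epoch stage probabilities from a hypothetical inference model:
# shape (num_epochs, num_stages), one probability vector per 30 s epoch.
stage_probs = np.array([
    [0.7, 0.2, 0.1, 0.0],   # epoch 1: mostly Wake
    [0.1, 0.6, 0.2, 0.1],   # epoch 2: mostly Light
    [0.0, 0.2, 0.7, 0.1],   # epoch 3: mostly Deep
])

# Concatenate other per-epoch signals (e.g., a snoring probability) into one
# data array that downstream content-generation models can consume.
snore_prob = np.array([[0.0], [0.1], [0.4]])
data_array = np.concatenate([stage_probs, snore_prob], axis=1)

print(data_array.shape)  # (3 epochs, 5 features)
```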
  • the step of generating one or more data arrays about the user's sleep may include generating the one or more data arrays by inputting the inferred sleep information into a Large Language Model.
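A hedged sketch of this step: serializing inferred sleep information into a prompt for a large language model. `call_llm` is a hypothetical stand-in for whatever LLM client is actually used, and the prompt wording and metric names are invented for illustration.

```python
def build_sleep_prompt(metrics: dict) -> str:
    """Serialize inferred sleep metrics into an LLM prompt."""
    lines = [f"- {name}: {value}" for name, value in metrics.items()]
    return ("Summarize last night's sleep warmly, without raw numbers:\n"
            + "\n".join(lines))

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real LLM client here.
    raise NotImplementedError("replace with a real LLM API call")

metrics = {"total_sleep_hours": 6.5, "deep_sleep_pct": 18, "awakenings": 3}
prompt = build_sleep_prompt(metrics)
# summary = call_llm(prompt)
```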
  • the Large Language Model may be a generative artificial intelligence model based on the GPT model.
  • the Large Language Model may be a generative artificial intelligence model based on the BERT model.
  • the step of generating one or more data arrays about the user's sleep may include generating the one or more data arrays from a lookup table based on the inferred sleep information.
  • the one or more data arrays about the user's sleep may be generated based on preference information related to the user's sleep.
  • the one or more data arrays about the user's sleep may be generated based on the user's sleep indicator information.
  • the one or more data arrays about the user's sleep may be generated based on the user's sleep score.
  • the step of inputting into the content creation artificial intelligence may include a data array processing step for inputting the generated one or more data arrays about the user's sleep into the content creation artificial intelligence.
  • the data array processing step may further include receiving a user keyword input to be input into the content creation artificial intelligence.
  • the step of inputting into the content creation artificial intelligence may include generating one or more keywords about the user's sleep based on the one or more data arrays about the user's sleep.
  • the step of generating one or more keywords related to the user's sleep may include generating the one or more keywords using a lookup table corresponding to the one or more data arrays related to the user's sleep.
  • the step of generating one or more keywords related to the user's sleep may include generating the one or more keywords by using the one or more data arrays related to the user's sleep as input to a large-scale language model.
  • the data array processing step may include inputting, into the content creation artificial intelligence, a method for interpreting the one or more data arrays regarding the user's sleep.
  • the content creation artificial intelligence may be an artificial intelligence model based on a large-scale language model.
  • the large-scale language model-based content creation artificial intelligence may be a GPT model-based generative artificial intelligence model.
  • the large-scale language model-based content creation artificial intelligence may be a BERT model-based generative artificial intelligence model.
  • the step of generating the user sleep content may include generating sleep text content based on the output of the content creation artificial intelligence.
  • the step of generating sleep text content may include generating a basic sleep sentence based on the output of the content creation artificial intelligence.
  • the step of generating sleep text content may further include extracting core sleep keywords based on the basic sleep sentences and receiving user-input keywords.
  • the step of generating the user sleep content may include generating sleep sound source content based on the output of the content creation artificial intelligence.
  • the step of generating sleep sound source content may include measuring the similarity between the generated sleep sentence and sound source samples, and generating the sleep sound source content by combining one or more sound source samples based on the similarity.
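One way to realize this similarity measurement is sketched below, using TF-IDF cosine similarity between the generated sleep sentence and sound source sample titles; the patent does not specify the similarity measure, and the sentence and titles here are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sleep_sentence = "a calm walk along a moonlit beach with gentle waves"
sample_titles = ["gentle ocean waves", "rain on a tin roof", "forest night birds"]

# Fit TF-IDF over the sentence and all sample titles, then compare the
# sentence (row 0) against every title (rows 1..n).
vec = TfidfVectorizer()
tfidf = vec.fit_transform([sleep_sentence] + sample_titles)
sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()

# Combine the top-scoring samples into the sleep sound source content.
ranked = sorted(zip(sample_titles, sims), key=lambda t: t[1], reverse=True)
print(ranked)  # "gentle ocean waves" should rank first here
```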
  • the step of generating the user sleep content may include generating sleep visual content based on the output of the content creation artificial intelligence.
  • the step of generating the user sleep content may include providing the generated sleep text content and the generated sleep sound source content to the user.
  • the step of generating the user sleep content may further include providing the generated sleep visual content to the user.
  • a method of generating and providing sleep content based on user sleep information using generative artificial intelligence may be provided in which the generated sleep sound source content and the keywords and titles of the generated sleep text content are matched in time series.
  • after the user keyword input step, a method of generating and providing sleep content based on user sleep information using generative artificial intelligence may be provided, including: generating a basic sentence by using the input user keyword as input to a large-scale language model; a sleep sentence keyword refining step of selecting sleep sentence keywords based on the generated basic sentences; selecting a sleep content theme based on the selected sentence keywords; and generating sleep content based on the selected sleep content theme.
  • the user keyword input step may include receiving the user keyword directly from the user.
  • the user keyword input step may include receiving the user keyword from user information.
  • the sleep sentence keyword refining step may include extracting sleep sentence keywords by using the generated basic sentence as input to a large-scale language model.
  • the sleep content theme selection step may include measuring a first similarity, which is the similarity between a sound source sample title in the sound source sample list, the selected sentence keyword, and the input user keyword; and selecting a sleep content theme based on the measured first similarity.
  • the step of selecting the sleep content theme may further include selecting one or more sleep content events based on the selected sleep sentence keyword and the input user keyword.
  • selecting the one or more sleep content events may include removing adjectives from the input user keyword; a second similarity measurement step of measuring a second similarity, which is the similarity between the user keyword with the adjectives removed and the sound source sample titles in the sound source sample list; and selecting a sleep content event based on the measured second similarity.
  • the step of measuring the second similarity may include determining that the second similarity exists when the sound source sample title and the user keyword with the adjectives removed share a common word.
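A minimal sketch of this common-word rule, assuming a toy adjective list and whitespace tokenization; a real system would presumably use proper part-of-speech tagging rather than a fixed word set.

```python
ADJECTIVES = {"calm", "gentle", "cozy", "quiet", "warm"}  # hypothetical list

def strip_adjectives(keyword: str) -> set[str]:
    """Tokenize the keyword and drop known adjectives."""
    return {w for w in keyword.lower().split() if w not in ADJECTIVES}

def second_similarity(keyword: str, sample_title: str) -> bool:
    """True when the de-adjectived keyword shares a word with the title."""
    return bool(strip_adjectives(keyword) & set(sample_title.lower().split()))

print(second_similarity("gentle ocean waves", "ocean waves at dawn"))  # True
print(second_similarity("cozy fireplace", "rain on a tin roof"))       # False
```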
  • the step of selecting one or more sleep content events may include measuring a third similarity, which is the similarity between a sound source sample title in the sound source sample list, the selected sentence keyword, and the input user keyword; and, based on the measured third similarity, selecting as a sleep content event a sound source sample title from the sound source sample list excluding the selected sleep content theme.
  • the step of generating sleep content based on the selected sleep content theme and the selected content event may include generating a sleep content sentence based on the generated basic sentence, the selected sleep content theme, and the selected sleep content event, and generating sleep sound source content based on the generated sleep content sentence.
  • a method may be provided in which the generated sleep sound source content corresponds to the order of the generated sleep content sentences.
  • a non-transitory computer-readable storage medium may be provided that stores one or more programs configured to be executed by one or more processors for generating and providing sleep content based on user sleep information using generative artificial intelligence, wherein the one or more programs include instructions for performing one or more of the methods described above.
  • a device that generates and provides sleep content based on a user's sleep information may be provided, the device including: a display unit; one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, wherein the one or more programs include instructions for performing one or more of the methods described above.
  • according to the present invention, the quality of last night's sleep is not provided to the user directly as a quantitative value, which has the advantage that the user does not feel aversion or discomfort toward a quantitative value.
  • one or more data arrays are generated based on sleep information and input into a content creation artificial intelligence, thereby generating sleep-related content; providing such customized sleep content to the user can improve the user's intuitive understanding of sleep and contribute to improving the quality of the user's sleep.
  • according to the present invention, it is possible to induce sleep or improve a user's sleep quality by generating and providing imagery-inducing information.
  • FIGS. 1A and 1B are conceptual diagrams showing a system in which various aspects of a sleep content generating device or a sleep content providing device based on user sleep information can be implemented according to an embodiment of the present invention.
  • FIG. 1C is a conceptual diagram illustrating a system in which the creation and provision of sleep content based on user sleep information is implemented in the user terminal 300 according to an embodiment of the present invention.
  • FIG. 1D is a conceptual diagram illustrating a system of an apparatus 100a that generates imagery induction information based on sleep state information according to an embodiment of the present invention.
  • FIG. 1E is a conceptual diagram illustrating a system of a device 200a that provides imagery induction information based on sleep state information according to an embodiment of the present invention.
  • FIG. 1F shows a conceptual diagram illustrating a system in which various aspects of a computing device for creating a sleep environment based on sleep state information can be implemented according to an embodiment of the present invention.
  • Figure 1g shows a conceptual diagram showing a system in which various aspects of a sleep environment control device can be implemented according to another embodiment of the present invention.
  • FIG. 1H is a conceptual diagram showing a system in which various aspects of various electronic devices can be implemented according to another embodiment of the present invention.
  • Figure 1i is a diagram for explaining a system for generating and providing a sleep image according to an embodiment of the present invention.
  • FIG. 2A is a block diagram showing the configuration of a sleep content generating device 700/providing device 800 based on user sleep information according to an embodiment of the present invention.
  • Figure 2b is a block diagram showing the configuration of an electronic device 600 according to the present invention.
  • Figure 2c is an exemplary diagram illustrating a work space related to a user's sleeping environment according to an embodiment of the present invention.
  • FIG. 2D is a block diagram for explaining the computing device 100 according to an embodiment of the present invention.
  • Figure 2e is a block diagram for explaining an external terminal 200 according to an embodiment of the present invention.
  • Figure 2f is a block diagram for explaining the user terminal 300 according to an embodiment of the present invention.
  • Figure 2g shows an exemplary block diagram of a sleep environment control device according to an embodiment of the present invention.
  • Figure 2h shows an example block diagram of a receiving module and a transmitting module according to an embodiment of the present invention.
  • Figure 2i is an example diagram for explaining a second sensor unit that detects whether the user is located in an area (or sleep detection area) 11a where environmental sensing information can be obtained according to an embodiment of the present invention.
  • FIGS. 3A and 3B are graphs verifying the performance of the sleep analysis method according to the present invention, comparing polysomnography results (PSG results) with the analysis results (AI results) obtained using the AI algorithm according to the present invention.
  • Figure 3c is a graph verifying the performance of the sleep analysis method according to the present invention, comparing polysomnography (PSG) results with the analysis results (AI results) obtained using the AI algorithm according to the present invention, in relation to sleep apnea and hypopnea.
  • Figure 4 is a diagram showing an experimental process for verifying the performance of the sleep analysis method according to the present invention.
  • Figure 5 is an exemplary diagram illustrating a process for obtaining sleep sound information from environmental sensing information according to an embodiment of the present invention.
  • FIG. 6A is an exemplary diagram illustrating a method of obtaining a spectrogram corresponding to sleeping sound information according to an embodiment of the present invention.
  • Figure 6b is a diagram for explaining sleep stage analysis using a spectrogram in the sleep analysis method according to the present invention.
  • Figure 6c is a diagram for explaining sleep disorder determination using a spectrogram in the sleep analysis method according to the present invention.
  • Figure 7 is an exemplary diagram illustrating environmental composition information for each time point generated based on the user's sleeping state, according to an embodiment of the present invention.
  • Figure 8 shows an exemplary flowchart for providing a method for creating a sleep environment based on sleep state information, according to an embodiment of the present invention.
  • Figure 9 is a schematic diagram showing one or more network functions according to an embodiment of the present invention.
  • Figure 10 is a diagram illustrating the structure of a sleep analysis model using deep learning to analyze a user's sleep, according to an embodiment of the present invention.
  • Figure 11 is a conceptual diagram for explaining the operation of the environment creation device according to the present invention.
  • Figure 12 is a block diagram showing the configuration of an environment creation device according to the present invention.
  • Figure 13 is a flowchart illustrating a process for obtaining sleep state information through a sleep measurement mode of an environment creation device according to an embodiment of the present invention.
  • Figure 14 is a flowchart illustrating a process for creating an environment that induces the user to enter sleep according to an embodiment of the present invention.
  • Figure 15 is a flow chart illustrating a process for changing the user's sleep environment during sleep and immediately before waking up according to an embodiment of the present invention.
  • Figure 16 is a diagram for explaining another example of a hypnogram displaying a sleep stage within a user's sleep period according to an embodiment of the present invention.
  • Figure 17a is a diagram for explaining a non-numerical evaluation of a user's sleep according to an embodiment of the present invention.
  • Figure 17b is a diagram to explain the sleep score, which is a numerical evaluation of the user's sleep.
  • Figure 18 is a diagram to explain the structure of the Transformer model, which is the basis of a large language model.
  • Figure 19 is a diagram for explaining the reverse (denoising) model of the diffusion model in content creation artificial intelligence according to an embodiment of the present invention.
  • Figure 20 is a diagram for explaining the generator and discriminator of a GAN (Generative Adversarial Network) in content generation artificial intelligence according to an embodiment of the present invention.
  • Figure 21 is a flow chart to explain the steps for generating a sleep plot.
  • Figure 22 is a diagram for explaining sleep sound source content generated by content generation artificial intelligence according to an embodiment of the present invention.
  • Figure 23 is a diagram for explaining sleep content generated by content generation artificial intelligence according to an embodiment of the present invention.
  • Figure 24 is a diagram for explaining the matching of keywords of sleep sound source content and sleep text content generated by content creation artificial intelligence according to an embodiment of the present invention.
  • FIG. 25 is a diagram illustrating an embodiment of a user terminal 300 that provides imagery induction information based on sleep state information according to an embodiment of the present invention.
  • Figure 26a is a diagram for explaining a method in which a user checks certain content by swiping among the emotional modeling methods according to the present invention.
  • Figure 26b is a diagram for explaining a method of inputting a user's preferred text among the emotional modeling methods according to the present invention.
  • Figure 26c is a diagram for explaining a method of selecting a keyword for a user's preferred content among the emotional modeling methods according to the present invention.
  • Figure 27 is a flow chart to explain an example of a method for generating and providing a sleep image according to an embodiment of the present invention.
  • FIGS. 28 to 36 and FIG. 41 are actual example screens in which the flowchart of FIG. 27 is specifically implemented in the user terminal 300.
  • Figure 37 is a diagram showing examples of sleep images according to embodiments of the present invention.
  • Figures 38 to 40 are diagrams for explaining sleep images according to other embodiments.
  • Figure 42 is a flowchart for explaining a sleep image providing method according to another embodiment of the present invention.
  • Figure 43 is a flowchart for explaining a sleep image providing method according to another embodiment of the present invention.
  • Figure 44 is a flowchart for explaining a sleep image providing method according to another embodiment of the present invention.
  • Figure 45 is a diagram for explaining consistency training according to an embodiment of the present invention.
  • Figure 46 is a flowchart illustrating a method for analyzing sleep state information including the process of combining sleep sound information and sleep environment information into multimodal data according to an embodiment of the present invention.
  • Figure 47 is a flowchart illustrating a method for analyzing sleep state information including the step of combining the inferred sleep sound information and sleep environment information into multimodal data according to an embodiment of the present invention.
  • Figure 48 is a flowchart illustrating a method for analyzing sleep state information including the step of combining inferred sleep sound information with sleep environment information into multimodal data according to an embodiment of the present invention.
  • Figure 49 is a diagram illustrating a linear regression analysis function used to analyze AHI, a sleep apnea occurrence index, through sleep events that occur during sleep, according to an embodiment of the present invention.
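  • Figure 49 concerns estimating the apnea-hypopnea index (AHI) from sleep events via linear regression. As a hedged illustration only, the sketch below fits such a regression with ordinary least squares; the features, data, and resulting coefficients are hypothetical and do not come from the specification.

```python
import numpy as np

# Hypothetical per-night features: [breathing pauses/hour, arousals/hour]
X = np.array([[3.0, 1.0], [10.0, 4.0], [22.0, 9.0], [35.0, 15.0]])
y = np.array([4.0, 11.0, 24.0, 38.0])  # reference AHI values (illustrative)

# Fit y ~ X @ w + b by ordinary least squares with an appended bias column.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:-1], coef[-1]

new_night = np.array([18.0, 7.0])  # features detected for a new night
print("estimated AHI:", new_night @ w + b)
```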
  • the term “unit” or “module” refers to software or a hardware component such as an FPGA or an ASIC, and a “unit” or “module” performs certain roles.
  • however, a “unit” or “module” is not limited to software or hardware.
  • a “unit” or “module” may be configured to reside on an addressable storage medium and may be configured to run on one or more processors.
  • a “unit” or “module” includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, properties, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • the functionality provided within components and “units” or “modules” may be combined into a smaller number of components and “units” or “modules”, or further separated into additional components and “units” or “modules”.
  • a computer refers to all types of hardware devices including at least one processor, and depending on the embodiment, it may be understood as encompassing software configurations that operate on the hardware device.
  • a computer can be understood to include, but is not limited to, a smartphone, tablet PC, desktop, laptop, and user clients and applications running on each device.
  • the device may include a portable communication device (e.g., a mobile phone or a smart watch) that includes other functions, such as PDA and music player functions.
  • each step described in this specification is described as being performed by a computer, but the subject of each step is not limited thereto, and depending on the embodiment, at least some of the steps may be performed by different devices.
  • FIGS. 1A and 1B are conceptual diagrams showing a system in which various aspects of a sleep content generating device or a sleep content providing device based on user sleep information can be implemented according to an embodiment of the present invention.
  • a system may include a computing device 100, a user terminal 300, an external server 20, and a network.
  • the sleep content generating device 700 and/or the sleep content providing device 800 based on user sleep information may be implemented as the computing device 100.
  • the imagery inducing information generating device 100a and/or the imagery inducing information providing device 200a may be implemented as the computing device 100 .
  • a sleep image generating device and/or a sleep image providing device may be implemented as the computing device 100.
  • the system in which the device for generating and/or providing sleep content based on user sleep information shown in FIG. 1A is implemented is according to one embodiment, and its components are not limited to the embodiment shown in FIG. 1A; components may be added, changed, or deleted as needed.
  • the system according to embodiments of the present invention may include a user terminal 300, an external server 20, and a network.
  • the sleep content generating device 700 or the sleep content providing device 800 based on user sleep information may be implemented as the user terminal 300 or the external server 20.
  • the device for generating sleep content based on user sleep information may be implemented as the user terminal 300, and the device for providing sleep content based on user sleep information may be implemented as the external server 20.
  • the device for generating sleep content based on user sleep information may be implemented as the external server 20, and the device for providing sleep content based on user sleep information may be implemented as the user terminal 300.
  • both a sleep content generating device based on user sleep information and a sleep content providing device based on user sleep information may be implemented as the user terminal 300.
  • both the device for generating sleep content based on user sleep information and the device for providing sleep content based on user sleep information according to an embodiment of the present invention may be implemented as an external server 20.
  • the imagery inducing information generating device 100a and/or the imagery inducing information providing device 200a may be implemented as a user terminal 300 or an external server 20.
  • the sleep image generating device and/or the sleep image providing device may be implemented as the user terminal 300 or the external server 20.
  • the system in which the device for generating and/or providing sleep content based on user sleep information shown in FIG. 1B is implemented is according to one embodiment, and its components are not limited to the embodiment shown in FIG. 1B; components may be added, changed, or deleted as needed.
  • the external server 20 may be composed of a single server or a plurality of servers. Additionally, according to an embodiment of the present invention, the external server 20 may be implemented as an external terminal 200.
  • a sleep content generating device based on user sleep information can generate sleep content using generative artificial intelligence.
  • a sleep content providing device based on user sleep information can provide sleep content generated using generative artificial intelligence.
  • FIG. 1C is a conceptual diagram illustrating a system in which the creation and provision of sleep content based on user sleep information is implemented in the user terminal 300 according to an embodiment of the present invention.
  • sleep content may be generated and provided based on user sleep information in the user terminal 300 without a separate generating device 700 and/or a separate providing device 800. .
  • when the device for generating and/or providing sleep content based on user sleep information according to embodiments of the present invention is implemented as the computing device 100, the computing device 100 can mutually transmit and receive data for the system according to embodiments of the present invention with the user terminal 300 through the network.
  • when the device for generating and/or providing sleep content based on user sleep information according to embodiments of the present invention is implemented as the computing device 100, the computing device 100 can mutually transmit and receive data for the system according to embodiments of the present invention with the user terminal 300 and/or the external server 20 through the network.
  • the user terminal 300 can perform, through the network, the role of the sleep content generating device 700 and/or the sleep content providing device 800 based on user sleep information, and mutually transmit and receive data for the system according to embodiments of the present invention.
  • the user terminal 300 can perform, through the network, the role of the imagery inducing information generating device 100a and/or the imagery inducing information providing device 200a, and mutually transmit and receive data for the system according to embodiments of the present invention.
  • the user terminal 300 can perform, through the network, the role of a sleep image generating device and/or a sleep image providing device, and mutually transmit and receive data for the system according to embodiments of the present invention.
  • FIG. 2A is a block diagram showing the configuration of a device 700/800 that generates/provides sleep content based on user sleep information according to an embodiment of the present invention.
  • the device 700 for generating sleep content based on user sleep information using generative artificial intelligence may include a display 720, a memory 740 that stores one or more programs configured to be executed by one or more processors, and one or more processors 760.
  • the device 800 that provides sleep content based on user sleep information using generative artificial intelligence may include a display 820, a memory 840 that stores one or more programs configured to be executed by one or more processors, and one or more processors 860.
  • the memory 740 or memory 840 that stores one or more programs may include high-speed random access memory such as DRAM, SRAM, or DDR RAM, or other random access solid state memory devices, and may include non-volatile memory such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Additionally, the memory may store instructions for performing a method of providing one or more graphical user interfaces representing information about the user's sleep.
  • the processor 760 or processor 860 may be composed of one or more processor means. Additionally, the processor may execute the one or more programs stored in the memory.
  • the device 700/device 800 for generating/providing sleep content based on user sleep information can perform the operation of generating or providing sleep content based on user sleep information by utilizing generative artificial intelligence.
  • FIG. 1D is a conceptual diagram illustrating a system of an apparatus 100a that generates imagery induction information based on sleep state information according to an embodiment of the present invention.
  • the system of the present invention may include a device 100a that generates imagery induction information, a user terminal 300, and a network.
  • FIG. 1E is a conceptual diagram showing a system of a device 200a that provides imagery induction information based on sleep state information according to an embodiment of the present invention. As shown in FIG. 1E, the system may include the device 200a that provides imagery induction information of the present invention, a user terminal 300, and a network.
  • the device 100a for generating the imagery induction information of the present invention and the device 200a for providing the imagery induction information of the present invention can transmit and receive data for systems according to embodiments through the user terminal 300 and the network.
  • Figure 1C is a conceptual diagram showing a system for generating and providing imagery induction information based on sleep state information according to an embodiment of the present invention.
  • the user terminal 300 can perform, through the network, the role of generating and providing the imagery induction information, and transmit and receive data for the system according to embodiments of the present invention.
  • the user terminal 300 can perform, through the network, the role of the sleep content generating device 700/providing device 800, and transmit and receive data for the system according to embodiments of the present invention.
  • the user terminal 300 can perform, through the network, the role of the sleep image generating device and/or the sleep image providing device, and transmit and receive data for the system according to embodiments of the present invention.
  • FIG. 1F shows a conceptual diagram illustrating a system in which various aspects of a computing device for creating a sleep environment based on sleep state information can be implemented according to an embodiment of the present invention.
  • a system according to embodiments of the present invention may include a computing device 100, a user terminal 300, an external server 20, an environment creation device 30, and a network.
  • in the system according to embodiments of the present invention, the computing device 100, the user terminal 300, the external server 20, and the environment creation device 30 are connected through a network and can mutually transmit and receive data.
  • the system for implementing a method for creating a sleep environment based on the sleep state information shown in FIG. 1F is according to one embodiment, and its components are not limited to the embodiment shown in FIG. 1F; components may be added, changed, or deleted as needed.
  • Figure 1g shows a conceptual diagram showing a system in which various aspects of a sleep environment control device can be implemented according to another embodiment of the present invention.
  • the system according to embodiments of the present invention may include a sleep environment control device 400, a user terminal 300, an external server 20, and a network.
  • the system for implementing a method for creating a sleep environment based on the sleep state information shown in FIG. 1G is according to one embodiment, and its components are not limited to the embodiment shown in FIG. 1G; components may be added, changed, or deleted as needed.
  • FIG. 1H shows a conceptual diagram illustrating a system in which various aspects of various electronic devices can be implemented according to another embodiment of the present invention. The electronic devices shown in FIG. 1H may perform at least one of the operations performed by various devices according to embodiments of the present invention.
  • operations performed by various electronic devices include obtaining environmental sensing information and sleep information, performing learning about sleep analysis, and performing inference about sleep analysis.
  • the operations performed by various devices may include an operation of generating imagery inducing information, an operation of providing imagery inducing information, an operation of acquiring environmental sensing information, an operation of learning a sleep analysis model, an operation of learning sleep state information, an operation of inferring sleep state information, and an operation of displaying sleep state information.
  • the operations may also include: receiving information related to the user's sleep; transmitting or receiving at least one of environmental sensing information and sleep information; performing preprocessing on environmental sensing information; determining environmental sensing information and sleep information; extracting acoustic information from environmental sensing information and sleep information; processing data; processing or providing services; constructing a learning data set based on environmental sensing information or the user's sleep information; acquiring data; storing a plurality of data input to a neural network; transmitting or receiving various information; mutually transmitting and receiving data for systems according to embodiments of the present invention through a network; generating or providing sleep content based on user sleep information; generating or providing imagery-inducing information; generating or providing sleep images; and generating or providing sleep content using generative artificial intelligence based on the user's sleep information.
  • the electronic devices shown in FIG. 1H may individually perform the operations performed by various electronic devices according to embodiments of the present invention, but may also perform one or more operations simultaneously or in time series.
  • the electronic devices 1a to 1d shown in FIG. 1H may be electronic devices within the range of an area (or sleep detection area) 11a that can obtain environmental sensing information.
  • the electronic devices 1a and 1d may each be a device composed of a combination of two or more electronic devices.
  • electronic devices 1a and 1b may be electronic devices connected to a network within area 11a.
  • the electronic devices 1c and 1d may be electronic devices not connected to the network within the area 11a.
  • electronic devices 2a to 2b may be electronic devices outside the range of area 11a.
  • FIG. 1H there may be a network that interacts with electronic devices within the scope of area 11a, and there may be a network that interacts with electronic devices outside the scope of area 11a.
  • a network that interacts with electronic devices within the scope of area 11a may serve to transmit and receive information for controlling smart home appliances.
  • the network interacting with electronic devices within the scope of area 11a may be, for example, a local area network or a local network.
  • the network interacting with electronic devices outside the scope of area 11a may be, for example, a remote network or a global network.
  • FIG. 1H there may be one or more electronic devices connected through a network outside the range of area 11a, and in this case, the electronic devices may distribute data to each other or perform one or more operations separately.
  • electronic devices connected through a network outside the scope of area 11a may include server devices.
  • the electronic devices may perform various operations independently of each other.
  • various electronic devices according to the present invention can transmit and receive data for the system according to embodiments of the present invention through a network.
  • Figure 1i is a diagram for explaining a system for generating and providing a sleep image according to an embodiment of the present invention.
  • the system includes a computing device 100, an external terminal 200, and a user terminal 300 connected to a network.
  • the basic hardware structures of the computing device 100, the external terminal 200, and the user terminal 300 will be described.
  • FIG. 2D is a block diagram for explaining the computing device 100 according to an embodiment of the present invention.
  • the computing device 100 may include at least one processor 110, a memory 120, an output device 130, an input device 140, an input/output interface 150, a sensor module 160, and a communication module 170.
  • the computing device 100 may obtain sleep state information of the user and adjust the user's sleep environment based on the sleep state information. Specifically, the computing device 100 may obtain sleep state information related to whether the user is before, during, or after sleep based on environmental sensing information, and adjust the sleep environment of the space where the user is located according to the sleep state information.
  • when sleep state information indicating that the user is before sleep is obtained, the computing device 100 can generate, based on the sleep state information, environment creation information related to the intensity and illuminance of light for inducing sleep (e.g., white light of 3000 K at an illuminance of 30 lux) and to air quality (fine dust concentration, harmful gas concentration, air humidity, air temperature, etc.).
  • the computing device 100 may transmit environment creation information related to the intensity and illuminance of light and air quality for inducing sleep to the environment creation device 30 .
  • the environment creation device 30 can adjust the light intensity and illuminance of the space where the user is located to an intensity and illuminance appropriate for inducing sleep (for example, white light of 3000 K at an illuminance of 30 lux), based on the environment creation information received from the computing device 100. That is, the environment creation information generated by the computing device 100 is transmitted to a lighting device, which is an embodiment of the environment creation device 30, so that the illuminance in the sleeping space can be adjusted.
  • the computing device 100 can generate environment creation information such as various information related to fine dust removal, harmful gas removal, allergy care operation, deodorization/sterilization operation, dehumidification/humidification control, blowing intensity adjustment, air purifier operation noise control, and LED lighting.
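  • as a hedged illustration, the sketch below models environment creation information as a simple data structure; the field names and the rule are assumptions for illustration, and only the 3000 K / 30 lux sleep-induction values come from the description above.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentCreationInfo:
    color_temperature_k: int   # light color temperature
    illuminance_lux: int       # light intensity in the sleeping space
    target_humidity_pct: int   # air quality: relative humidity target
    fan_level: int             # air purifier blowing intensity

def for_state(sleep_state: str) -> EnvironmentCreationInfo:
    if sleep_state == "before_sleep":
        # warm, dim light to induce sleep (values from the text above)
        return EnvironmentCreationInfo(3000, 30, 50, 1)
    # during sleep: lights effectively off, quiet air-quality control stays on
    return EnvironmentCreationInfo(2700, 0, 50, 1)

print(for_state("before_sleep"))  # would be sent to the environment creation device
```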
  • the environment creation information generated by the computing device 100 is transmitted to an air purifier, which is an embodiment of the environment creation device 30, so that the air quality in the room, in the vehicle, or in the sleeping space can be adjusted.
  • the environmental sensing information used by the computing device 100 to analyze the sleeping state may include acoustic information acquired in a non-invasive manner during the user's activities in the work space or during sleep.
  • environmental sensing information may include sounds generated as the user tosses and turns during sleep, sounds related to muscle movements, or sounds related to the user's breathing during sleep.
  • the environmental sensing information may include sleep sound information, and the sleep sound information may mean sound information related to movement patterns and breathing patterns that occur during the user's sleep.
  • environmental sensing information may be obtained through the user terminal 300 carried by the user.
  • environmental sensing information related to the user's activities in the work space may be obtained through a microphone module provided in the user terminal 300.
  • the microphone module provided in the user terminal 300 carried by the user may be configured as a MEMS (Micro-Electro-Mechanical System) microphone, since it must fit in the relatively small user terminal 300.
  • such microphone modules can be manufactured very small, but may have a lower signal-to-noise ratio (SNR) than condenser microphones or dynamic microphones.
  • a low signal-to-noise ratio may mean that the ratio of noise (the sound that is not to be identified) to the sound that is to be identified is high, making the target sound difficult to identify (i.e., unclear).
  • Environmental sensing information that is the subject of analysis in the present invention may include sound information related to the user's breathing and movement acquired during sleep, that is, sleep sound information.
  • this sleep sound information concerns very small sounds (i.e., sounds that are difficult to distinguish) such as the user's breathing and movement, and it is acquired together with other ambient sounds in the sleep environment; therefore, if it is acquired through a microphone module with a low signal-to-noise ratio as described above, its detection and analysis can be very difficult.
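  • to make the signal-to-noise ratio concrete, the sketch below computes the SNR in decibels as the ratio of the power of the sound to be identified (e.g., breathing) to the power of the background noise; the signal shapes and amplitudes are illustrative assumptions.

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    p_signal = np.mean(signal ** 2)   # average power of the target sound
    p_noise = np.mean(noise ** 2)     # average power of the background
    return 10.0 * np.log10(p_signal / p_noise)

rng = np.random.default_rng(0)
breath = 0.01 * np.sin(2 * np.pi * 0.3 * np.arange(16000) / 16000.0)
room_noise = 0.02 * rng.standard_normal(16000)
print(f"SNR: {snr_db(breath, room_noise):.1f} dB")  # negative: noise dominates
```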
  • the computing device 100 may obtain sleep state information based on environmental sensing information obtained through a microphone module made of MEMS.
  • the computing device 100 can convert and/or adjust ambiguously acquired environmental sensing information containing a lot of noise into data that can be analyzed, and can utilize the converted and/or adjusted data for learning an artificial neural network.
  • the learned neural network (e.g., an acoustic analysis model) can obtain the user's sleep state information based on the converted and/or adjusted data corresponding to the sleep sound information.
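  • the specification does not fix a particular transform, but one plausible form of this conversion step, sketched below under that assumption, is turning raw sleep sound into a log-magnitude spectrogram that a neural network can analyze; the STFT parameters are illustrative.

```python
import numpy as np

def spectrogram(audio: np.ndarray, n_fft: int = 512, hop: int = 256) -> np.ndarray:
    window = np.hanning(n_fft)
    frames = [
        np.abs(np.fft.rfft(window * audio[i:i + n_fft]))
        for i in range(0, len(audio) - n_fft, hop)
    ]
    mags = np.stack(frames)   # (time, frequency) magnitude matrix
    return np.log1p(mags)     # log compression tames the dynamic range

audio = np.random.default_rng(1).standard_normal(16000)  # 1 s at 16 kHz
print(spectrogram(audio).shape)  # (61, 257)
```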
  • the sleep state information may include sleep stage information related to changes in the user's sleep stage during sleep, as well as information related to whether the user is sleeping.
  • the sleep state information may include sleep stage information indicating that the user was in REM sleep at a first time point and in light sleep at a second time point different from the first time point. In this case, the information that the user was in a relatively deep sleep at the first time point and in a lighter sleep at the second time point can be obtained through the corresponding sleep state information.
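  • a minimal sketch of such sleep stage information is a timeline of (time point, stage) entries, as below; the type names and the stage labels listed are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SleepStageEntry:
    at: datetime   # time point within the sleep period
    stage: str     # e.g., "wake", "light", "deep", or "REM"

hypnogram = [
    SleepStageEntry(datetime(2024, 1, 1, 1, 30), "REM"),   # first time point
    SleepStageEntry(datetime(2024, 1, 1, 3, 0), "light"),  # second time point
]
for entry in hypnogram:
    print(entry.at.isoformat(), entry.stage)
```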
  • the computing device 100 acquires sleep sound information with a low signal-to-noise ratio through a user terminal that is widely used to collect sound (e.g., an artificial intelligence speaker, a bedroom IoT device, a mobile phone, etc.), and can provide sleep state information related to changes in sleep stages based on it. This eliminates the need for a contact microphone on the user's body to obtain clear sound, and allows sleep status to be monitored in a typical home environment with just a software update, without purchasing an additional device with a high signal-to-noise ratio, thereby increasing convenience.
  • the computing device 100 and the environment creation device 30 are represented as separate entities, but according to an embodiment of the present invention, the environment creation device 30 may be included in the computing device 100, so that the sleep state measurement and environment adjustment functions can be performed in one integrated device.
  • computing device 100 may be a terminal or a server, and may include any type of device.
  • the computing device 100 may be any digital device equipped with a processor, memory, and computing power, such as a laptop computer, a notebook computer, a desktop computer, a web pad, or a mobile phone.
  • Computing device 100 may be a web server that processes services.
  • the types of servers described above are merely examples and the present invention is not limited thereto.
  • the computing device 100 may be a server that provides cloud computing services. More specifically, the computing device 100 is a type of Internet-based computing and may be a server that provides a cloud computing service that processes information not on the user's computer but on another computer connected to the Internet.
  • the cloud computing service may be a service that stores data on the Internet and can be used anytime, anywhere through Internet access without the user having to install necessary data or programs on his or her computer.
  • the cloud computing service may be a service that allows data stored on the Internet to be easily shared and forwarded with a simple click.
  • cloud computing services not only allow data to be simply stored on a server on the Internet, but may also allow desired tasks to be performed using the functions of applications provided on the web, without installing a separate program, and allow multiple people to work while sharing documents at the same time. Additionally, cloud computing services may be implemented in at least one of the following forms: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), a virtual machine-based cloud server, and a container-based cloud server. That is, the computing device 100 of the present invention may be implemented in at least one of the cloud computing service forms described above. The specific description of the cloud computing service above is merely an example, and any platform for constructing the cloud computing environment of the present invention may be included.
  • the computing device 100 may perform an operation of generating sleep content based on user sleep information and/or providing sleep content based on user sleep information.
  • a sleep content creation and provision system equipped with the computing device 100 may be implemented without separately providing the device 700 for generating sleep content based on user sleep information and/or the device 800 for providing sleep content based on user sleep information.
  • the computing device 100 may perform an operation of generating and/or providing a sleep image.
  • a sleep content generating and providing system equipped with the computing device 100 may be implemented without a separate device for generating and/or providing a sleep image.
  • the computing device 100 may perform an operation of generating and/or providing imagery induction information.
  • a sleep content creation and provision system equipped with the computing device 100 may be implemented without separately providing the device 100a for generating imagery induction information based on sleep state information and/or the device 200a for providing imagery induction information based on sleep state information.
  • the processor 110 may include one or more application processors (AP), one or more communication processors (CP), or at least one artificial intelligence processor (AI processor).
  • the application processor, communication processor, or AI processor may each be contained within different integrated circuit (IC) packages or may be contained within one IC package.
  • the application processor runs an operating system or application program, controls multiple hardware or software components connected to the application processor, and can perform various data processing/computations, including multimedia data.
  • the application processor may be implemented as a system on chip (SoC).
  • the processor 110 may further include a graphic processing unit (GPU) (not shown).
  • the communication processor may perform a function of managing a data link and converting a communication protocol in communication between the computing device 100 and other computing devices connected to a network.
  • a communications processor may be implemented as a SoC.
  • the communications processor may perform at least some of the multimedia control functions.
  • the communication processor may control data transmission and reception of the communication module 170.
  • the communication processor may be implemented to be included as at least part of the application processor.
  • the application processor or the communication processor may load commands or data received from at least one of the non-volatile memory or the other components into the volatile memory and process them. Additionally, the application processor or communication processor may store data received from or generated by at least one of the other components in the non-volatile memory.
  • when loaded into the memory 120, the computer program may include one or more instructions that cause the processor 110 to perform methods/operations according to various embodiments of the present invention. That is, the processor 110 can perform methods/operations according to various embodiments of the present invention by executing the one or more instructions.
  • the computer program may include one or more instructions for performing a method of creating a sleep environment according to sleep state information, comprising obtaining sleep state information of a user, generating environment creation information based on the sleep state information, and transmitting the environment creation information to an environment creation device.
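  • as a hedged sketch only, the three instructions named above can be read as the following pipeline; every function body is a placeholder for the behavior described in the specification, not an actual implementation.

```python
def obtain_sleep_state(sensing: dict) -> str:
    # in the specification this is inferred by a sleep analysis model
    return "before_sleep"

def generate_environment_info(state: str) -> dict:
    # illustrative rule; real environment creation information is richer
    return {"illuminance_lux": 30 if state == "before_sleep" else 0}

def transmit(info: dict) -> None:
    print("-> environment creation device:", info)  # stand-in for the network send

transmit(generate_environment_info(obtain_sleep_state({"audio": []})))
```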
  • the processor 110 may be composed of one or more cores, and may include processors for data analysis and deep learning, such as a central processing unit (CPU), a general purpose graphics processing unit (GPGPU), and a tensor processing unit (TPU) of a computing device.
  • the processor 110 may read a computer program stored in the memory 120 and perform data processing for machine learning according to an embodiment of the present invention. According to one embodiment of the present invention, the processor 110 may perform calculations for learning a neural network.
  • the processor 110 can perform calculations for learning a neural network, such as processing input data for learning in deep learning (DL), extracting features from the input data, calculating errors, and updating the weights of the neural network using backpropagation.
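  • a minimal PyTorch sketch of those calculations appears below: a forward pass, an error (loss) computation, backpropagation, and a weight update; the layer sizes and data are illustrative assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(257, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(32, 257)       # e.g., spectrogram frames
labels = torch.randint(0, 4, (32,))   # e.g., 4 sleep stages as targets

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)  # calculate the error
    loss.backward()                          # backpropagate
    optimizer.step()                         # update the weights
```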
  • CPU, GPGPU, and TPU of the processor 110 may process learning of the network function.
  • CPU and GPGPU can work together to process learning of network functions and data classification using network functions.
  • the processors of a plurality of computing devices can be used together to process learning of network functions and data classification using network functions.
  • a computer program executed in a computing device according to an embodiment of the present invention may be a CPU, GPGPU, or TPU executable program.
  • network function may be used interchangeably with artificial neural network or neural network.
  • a network function may include one or more neural networks, and in this case, the output of the network function may be an ensemble of the outputs of one or more neural networks.
  • the model may include a network function.
  • a model may include one or more network functions, in which case the output of the model may be an ensemble of the outputs of one or more network functions.
  • the processor 110 may read the computer program stored in the memory 120 and provide a sleep analysis model according to an embodiment of the present invention. According to an embodiment of the present invention, the processor 110 may perform calculations to calculate environmental composition information based on sleep state information. According to one embodiment of the present invention, the processor 110 may perform calculations to learn a sleep analysis model.
  • the sleep analysis model will be explained in more detail below. Based on the sleep analysis model, sleep information related to the user's sleep quality can be inferred: environmental sensing information acquired from the user in real time or periodically is input to the sleep analysis model, and data related to the user's sleep is output.
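  • a hedged sketch of that inference path is shown below, assuming a trained model like the earlier training sketch; `model` stands in for the trained sleep analysis model, and the feature size is illustrative.

```python
import torch

@torch.no_grad()
def infer_sleep_stage(model, sensing_features: torch.Tensor) -> int:
    logits = model(sensing_features)   # shape (1, number of stages)
    return int(logits.argmax(dim=-1))  # index of the predicted stage

# usage: stage = infer_sleep_stage(model, torch.randn(1, 257))
```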
  • Learning of such a sleep analysis model and inference based thereon may be performed by the computing device 100.
  • both learning and inference can be designed to be performed by the computing device 100.
  • learning may be performed in the computing device 100, but inference may be performed in the user terminal 300.
  • learning may be performed in the computing device 100, but inference may be performed in the environment creation device 30 implemented with smart home appliances (TV, lighting, refrigerator, air purifier), etc.
  • learning and inference may also be performed by the sleep environment control device 400 of FIG. 1G. That is, both learning and inference can be performed by the sleep environment control device 400.
  • learning may be performed in the computing device 100, but inference may be performed in the external terminal 200.
  • the processor 110 may typically process the overall operation of the computing device 100.
  • the processor 110 can provide or process appropriate information or functions to the user terminal by processing signals, data, information, etc. input or output through the components discussed above, or by running an application program stored in the memory 120.
  • the processor 110 may obtain information on the user's sleep state.
  • acquiring sleep state information may mean acquiring or loading sleep state information stored in the memory 120. Additionally, acquiring sleep sound information may mean receiving or loading data from another storage medium, another computing device, or a separate processing module within the same computing device, based on wired/wireless communication means.
  • FIG. 2C shows a block diagram of a computing device for creating a sleep environment based on sleep state information related to an embodiment of the present invention.
  • the computing device 100 may include a network unit 180, a memory 120, and a processor 110. However, the computing device 100 is not limited to the components described above; depending on the implementation of embodiments of the present invention, additional components may be included or some of the above-described components may be omitted.
  • the computing device 100 may include a network unit 180 that transmits and receives data with the user terminal 300, the external server 20, and the environment creation device 30.
  • the network unit 180 may transmit and receive data for performing a method of creating a sleep environment according to sleep state information according to an embodiment of the present invention, to other computing devices, servers, etc. That is, the network unit 180 may provide a communication function between the computing device 100, the user terminal 300, the external server 20, and the environment creation device 30.
  • the network unit 180 may receive sleep checkup records and electronic health records for multiple users from a hospital server.
  • the network unit 180 may receive environmental sensing information related to the space in which the user operates from the user terminal 300.
  • the network unit 180 may transmit environment creation information for adjusting the environment of the space where the user is located to the environment creation device 30. Additionally, the network unit 180 may allow information to be transferred between the computing device 100, the user terminal 300, and the external server 20 by calling a procedure with the computing device 100.
  • the computing device 100 may be configured to include both the network unit 180 and the communication module 170, or may be configured to include only one of the network unit 180 and the communication module 170. Additionally, the communication module 170 may perform the operations of the network unit 180 described above.
  • Memory 120 may include internal memory or external memory.
  • built-in memory may include volatile memory (e.g., dynamic RAM (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), etc.) or non-volatile memory (e.g., one-time programmable ROM (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, flash ROM, NAND flash memory, NOR flash memory, etc.).
  • the internal memory may take the form of a solid state drive (SSD).
  • the external memory may be a flash drive, for example, compact flash (CF), secure digital (SD), micro secure digital (Micro-SD), mini secure digital (Mini-SD), extreme digital (xD), or a memory stick.
  • the memory 120 may store a computer program for performing a method of creating a sleep environment according to sleep state information according to an embodiment of the present invention, and the stored computer program may be read and run by the processor 110. Additionally, the memory 120 may store any type of information generated or determined by the processor 110 and any type of information received by the network unit 180. Additionally, the memory 120 may store data related to the user's sleep. For example, the memory 120 may temporarily or permanently store input/output data (e.g., environmental sensing information related to the user's sleep environment, sleep state information corresponding to the environmental sensing information, or environment creation information according to the sleep state information).
  • the memory 120 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), random access memory (RAM), read-only memory (ROM), magnetic memory, a magnetic disk, and an optical disk.
  • the computing device 100 may operate in connection with web storage that performs a storage function of the memory 120 on the Internet.
  • the description of the memory described above is only an example, and the present invention is not limited thereto.
  • the output device 130 may include at least one of a display module and/or a speaker.
  • the output device 130 can display various data including multimedia data, text data, voice data, etc. to the user or output them as sound.
  • the input device 140 may include a touch panel, a digital pen sensor, a key, or an ultrasonic input device.
  • the input device 140 may be the input/output interface 150.
  • the touch panel may recognize touch input in at least one of capacitive, resistive, infrared, or ultrasonic methods. Additionally, the touch panel may further include a controller (not shown). In the case of capacitive type, not only direct touch but also proximity recognition is possible.
  • the touch panel may further include a tactile layer. In this case, the touch panel can provide a tactile response to the user.
  • the digital pen sensor may be implemented using the same or similar method for receiving a user's touch input, or using a separate recognition layer.
  • the key may be a keypad or touch key.
  • An ultrasonic input device is a device that can check data by detecting micro sound waves in a terminal through a pen that generates ultrasonic signals, and enables wireless recognition.
  • the computing device 100 may receive user input from an external device (e.g., a network, computer, or server) connected thereto using the communication module 170.
  • the input device 140 may further include a camera module or/and a microphone.
  • a camera module is a device that can capture images and videos, and may include one or more image sensors, an image signal processor (ISP), or a flash LED.
  • a microphone can receive voice signals and convert them into electrical signals.
  • the input/output interface 150 can transfer commands or data input from the user through the input device 140 or the output device 130 to the processor 110, the memory 120, the communication module 170, etc. through a bus (not shown). As an example, the input/output interface 150 may provide data about a user's touch input through a touch panel to the processor 110. As an example, the input/output interface 150 may output commands or data received from the processor 110, memory 120, communication module 170, etc. through the bus via the output device 130. As an example, the input/output interface 150 may output voice data processed through the processor 110 to the user through a speaker.
  • the sensor module 160 may include at least one of a gesture sensor, a gyro sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, an RGB (red, green, blue) sensor, a biometric sensor, a temperature/humidity sensor, an illuminance sensor, and an ultraviolet (UV) sensor.
  • the sensor module 160 may measure a physical quantity or detect the operating state of the computing device 100 and convert the measured or sensed information into an electrical signal. Additionally or alternatively, the sensor module 160 may include an olfactory sensor (E-nose sensor), an electromyography sensor (EMG sensor), an electroencephalogram sensor (EEG sensor) (not shown), an electrocardiogram sensor (ECG sensor), a photoplethysmography sensor (PPG sensor), a heart rate monitor sensor (HRM sensor), a perspiration sensor, or a fingerprint sensor.
  • the sensor module 160 may further include a control circuit for controlling at least one sensor included therein.
  • the communication module 170 may include a wireless communication module or an RF module.
  • the wireless communication module may include, for example, Wi-Fi, BT, GPS or NFC.
  • a wireless communication module may provide a wireless communication function using radio frequencies. Additionally or alternatively, the wireless communication module may include a network interface or modem for connecting the computing device 100 to a network (e.g., Internet, LAN, WAN, telecommunication network, cellular network, satellite network, POTS or 5G network, etc.). It can be included.
  • the RF module may be responsible for transmitting and receiving data, for example, RF signals or so-called electronic signals.
  • the RF module may include a transceiver, a power amp module (PAM), a frequency filter, or a low noise amplifier (LNA).
  • the RF module may further include components for transmitting and receiving electromagnetic waves in free space in wireless communication, for example, conductors or wires.
  • the computing device 100 may include at least one of a server, TV, smart TV, refrigerator, oven, clothes styler, robot vacuum cleaner, drone, air conditioner, air purifier, PC, speaker, home CCTV, lighting, washing machine, and smart plug. Since the components of the computing device 100 described in FIG. 2D are examples of components generally provided in a computing device, the computing device 100 is not limited to the components described above, and certain components may be omitted and/or added as necessary.
  • the network unit 180 can use a variety of wired communication systems, such as Public Switched Telephone Network (PSTN), x Digital Subscriber Line (xDSL), Rate Adaptive DSL (RADSL), Multi Rate DSL (MDSL), Very High Speed DSL (VDSL), Universal Asymmetric DSL (UADSL), High Bit Rate DSL (HDSL), and Local Area Network (LAN).
  • the network unit 180 presented in this specification can also use various wireless communication systems realizable now and in the future, such as mobile communication systems like 4G and 5G (LTE), and satellite communication systems like Starlink.
  • the network unit 180 can be configured regardless of the communication mode, wired or wireless, and may be composed of various communication networks such as a personal area network (PAN) and a wide area network (WAN). Additionally, the network may be the well-known World Wide Web (WWW), and may also use wireless transmission technology used for short-range communication, such as Infrared Data Association (IrDA) or Bluetooth. The techniques described herein can be used in the networks mentioned above, as well as in other networks.
  • the computing device 100 implemented in the system according to embodiments of the present invention may be configured to include both the communication module 170 and the network unit 180, or may be configured to include only one of the communication module 170 and the network unit 180. In addition, the operation of the communication module 170 described above may be performed by the network unit 180, or the operation of the network unit 180 described above may be performed by the communication module 170.
  • the network can use a variety of wired communication systems, such as Public Switched Telephone Network (PSTN), x Digital Subscriber Line (xDSL), Rate Adaptive DSL (RADSL), Multi Rate DSL (MDSL), Very High Speed DSL (VDSL), Universal Asymmetric DSL (UADSL), High Bit Rate DSL (HDSL), and Local Area Network (LAN).
  • the network can also use various wireless communication systems, such as Code Division Multi Access (CDMA), Time Division Multi Access (TDMA), Frequency Division Multi Access (FDMA), Orthogonal Frequency Division Multi Access (OFDMA), and Single Carrier-FDMA (SC-FDMA).
  • the network according to embodiments of the present invention can be configured regardless of the communication mode, wired or wireless, and may be composed of various communication networks such as a personal area network (PAN) and a wide area network (WAN). Additionally, the network may be the well-known World Wide Web (WWW), and may also use wireless transmission technology used for short-range communication, such as Infrared Data Association (IrDA) or Bluetooth.
  • Figure 2e is a block diagram for explaining an external terminal 200 according to an embodiment of the present invention.
  • the external terminal 200 may include a processor 210, a memory 220, and a communication module 270.
  • the external terminal 200 may be an external server 20 or a cloud server.
  • the external server 20 may be any digital device equipped with a processor, memory, and computing power, such as a laptop computer, a notebook computer, a desktop computer, a web pad, or a mobile phone.
  • the external server 20 may be a web server that processes services.
  • the types of servers described above are merely examples and the present invention is not limited thereto.
  • the processor 210 generally controls the external terminal 200.
  • Processor 210 may include AI processor 215.
  • the AI processor 215 can learn a neural network using a program stored in the memory 220.
  • the AI processor 215 can learn a neural network for recognizing data related to the operation of the user terminal 300.
  • the neural network may be designed to simulate the human brain structure (e.g., the neuron structure of a human neural network) on a computer.
  • a neural network may include an input layer, an output layer, and at least one hidden layer. Each layer includes at least one neuron with a weight, and the neural network may include synapses connecting neurons.
  • each neuron can output the input signal received through a synapse as the function value of an activation function applied to the weighted input plus a bias.
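  • written out, that is output = activation(weighted sum of inputs + bias); the tiny sketch below illustrates it, with ReLU chosen only as an example activation.

```python
import numpy as np

def neuron(x: np.ndarray, w: np.ndarray, b: float) -> float:
    z = float(np.dot(w, x) + b)   # weighted inputs through the synapses, plus bias
    return max(0.0, z)            # ReLU as an example activation function

print(neuron(np.array([0.5, -1.0]), np.array([0.8, 0.2]), 0.1))  # ~0.3
```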
  • the neural network may include a deep learning model developed from a neural network model.
  • in a deep learning model, multiple network nodes are located in different layers and can exchange data according to convolutional connection relationships.
  • examples of neural network models include various deep learning techniques such as deep neural networks (DNN), convolutional neural networks (CNN), recurrent neural networks (RNN), restricted Boltzmann machines (RBM), deep belief networks (DBN), and deep Q-networks, and they can be applied in fields such as vision recognition, speech recognition, natural language processing, and voice/signal processing.
  • processor 210 that performs the functions described above may be a general-purpose processor (e.g., CPU), or may be an AI-specific processor (e.g., GPU, TPU) for artificial intelligence learning.
  • the memory 220 may store various programs and data necessary for the operation of the user terminal 300 and/or the external terminal 200.
  • the memory 220 is accessed by the AI processor 215, and reading/writing/modifying/deleting/updating data by the AI processor 215 can be performed.
  • the memory 220 may store a neural network model (eg, deep learning model) generated through a learning algorithm for data classification/recognition.
  • the memory 220 may store not only the learning model 221 but also input data, learning data, and learning history.
  • the AI processor 215 may include a data learning unit 215a that learns a neural network for data classification/recognition.
  • the data learning unit 215a can learn standards regarding which learning data to use to determine data classification/recognition and how to classify and recognize data using the learning data.
  • the data learning unit 215a can learn a deep learning model by acquiring learning data to be used for learning and applying the acquired learning data to the deep learning model.
  • the data learning unit 215a may be manufactured in the form of at least one hardware chip and mounted on the external terminal 200.
  • the data learning unit 215a may be manufactured in the form of a dedicated hardware chip for artificial intelligence, or may be manufactured as part of a general-purpose processor (CPU) or graphics processor (GPU) and mounted on the external terminal 200.
  • the data learning unit 215a may be implemented as a software module.
  • the software module When implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer readable recording medium that can be read by a computer. In this case, at least one software module may be provided to an operating system (OS) or may be provided by an application.
  • the data learning unit 215a can use the acquired training data to train the neural network model to have a judgment standard on how to classify/recognize certain data.
  • the learning method by the model learning unit can be classified into supervised learning, unsupervised learning, and reinforcement learning.
  • supervised learning refers to a method of training an artificial neural network with labels given for the learning data, where a label may mean the correct answer (or result value) that the artificial neural network must infer when the learning data is input to it.
  • Unsupervised learning can refer to a method of training an artificial neural network in a state where no labels for training data are given.
  • Reinforcement learning can refer to a method of training an agent defined within a specific environment to select an action or action sequence that maximizes the cumulative reward in each state. Additionally, the model learning unit may learn a neural network model using a learning algorithm including error backpropagation or gradient descent. When a neural network model is learned, the learned neural network model may be referred to as a learning model 221.
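  • the gradient descent named above can be illustrated in miniature: the sketch below repeatedly updates a single weight against the gradient of a squared-error loss for one hypothetical labeled example.

```python
w, lr = 0.0, 0.1                        # initial weight and learning rate
x, target = 2.0, 3.0                    # one labeled training example
for _ in range(25):
    pred = w * x
    grad = 2 * (pred - target) * x      # d/dw of (pred - target) ** 2
    w -= lr * grad                      # step against the gradient
print(round(w, 3))                      # approaches 1.5, since 1.5 * 2 = 3
```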
  • the learning model 221 is stored in the memory 220 and can be used to infer results for new input data other than training data.
• the AI processor 215 may further include a data preprocessing unit 215b and/or a data selection unit 215c to improve the analysis results using the learning model 221, or to save the resources or time required for generating the learning model 221.
  • the data preprocessing unit 215b may preprocess the acquired data so that the acquired data can be used for learning/inference for situational judgment.
• the data preprocessor 215b may extract feature information as preprocessing for input data received through the communication module 270; the feature information may be extracted in formats such as a feature vector, a feature point, or a feature map.
  • the data selection unit 215c may select data required for learning from among the training data or the training data preprocessed in the preprocessor.
  • the selected learning data may be provided to the model learning unit.
  • the data selection unit 215c may detect a specific area in an image acquired through a camera of a computing device and select only data about objects included in the specific area as learning data. Additionally, the data selection unit 215c may select data required for inference among input data obtained through an input device or input data preprocessed in a preprocessor.
  • the AI processor 215 may further include a model evaluation unit 215d to improve the analysis results of the neural network model.
• the model evaluation unit 215d inputs evaluation data into the neural network model and, when the analysis result output for the evaluation data does not meet a predetermined standard, can cause the model learning unit to learn again.
  • the evaluation data may be preset data for evaluating the learning model 221.
• when, among the analysis results of the learned neural network model for the evaluation data, the number or ratio of evaluation data for which the analysis result is inaccurate exceeds a preset threshold, the model evaluation unit 215d can evaluate the analysis result as not meeting the predetermined standard.
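• A minimal sketch of this evaluation logic, assuming a simple classification setting; the threshold value and function names are hypothetical, not part of the patent.

```python
def needs_retraining(predictions, labels, error_ratio_threshold=0.2):
    """Stand-in for the model evaluation unit 215d: signal that the model
    learning unit should retrain when the ratio of inaccurate evaluation
    results exceeds a preset threshold (0.2 is an assumed value)."""
    errors = sum(p != t for p, t in zip(predictions, labels))
    return errors / len(labels) > error_ratio_threshold
```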
  • the communication module 270 may transmit the result of AI processing by the AI processor 215 to the user terminal 300. Additionally, it can also be transmitted to the computing device 100 shown in FIG. 1I.
• Figure 2F is a block diagram for explaining the user terminal 300 according to an embodiment of the present invention.
  • the user terminal 300 is a terminal that can receive information related to the user's sleep through information exchange with the computing device 100, and may refer to a terminal owned by the user.
  • the user terminal 300 may be a terminal related to a user who wants to improve his or her health through information related to his or her sleeping habits.
  • the user can obtain monitoring information related to his or her sleep through the user terminal 300.
  • Monitoring information related to sleep may include, for example, sleep state information related to when the user fell asleep, time spent sleeping, or when the user woke up, or sleep stage information related to changes in sleep stage during sleep.
  • sleep stage information may refer to information on changes in the user's sleep to light sleep, normal sleep, deep sleep, or REM sleep at each time point during the user's 8 hours of sleep last night.
  • the detailed description of the above-described sleep stage information is only an example, and the present invention is not limited thereto.
• the user terminal 300 may be a mobile phone, a smart phone, a laptop computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a tablet PC, an ultrabook, a wearable device (e.g., a watch-type terminal (smartwatch), a glass-type terminal (smart glasses), a head mounted display (HMD)), etc.
• the user terminal 300 may include a wireless communication unit 310, an input unit 320, a sensing unit 340, an output unit 350, an interface unit 360, a memory 370, a control unit 380, a power supply unit 390, etc.
  • the components shown in FIG. 2F are not essential for implementing the user terminal, so the user terminal described herein may have more or fewer components than the components listed above.
  • the wireless communication unit 310 enables wireless communication between the user terminal 300 and the wireless communication system, the user terminal 300 and the computing device 100, or the user terminal 300 and the external terminal 200. It may contain one or more modules. Additionally, the wireless communication unit 310 may include one or more modules that connect the user terminal 300 to one or more networks.
• This wireless communication unit 310 may include at least one of a broadcast reception module 311, a mobile communication module 312, a wireless Internet module 313, a short-range communication module 314, and a location information module 315.
• the input unit 320 may include a camera 321 or an image input unit for inputting an image signal, a microphone or an audio input unit for inputting an audio signal, and a user input unit 323 (e.g., a touch key, a push key (mechanical key), etc.) for receiving information from a user. Voice data or image data collected by the input unit 320 may be analyzed and processed as a user's control command.
  • the sensing unit 340 may include one or more sensors for sensing at least one of information within the user terminal, information on the surrounding environment surrounding the user terminal, and user information.
• the sensing unit 340 may include a proximity sensor 341, an illuminance sensor 342, a touch sensor, an acceleration sensor, a magnetic sensor, and a gravity sensor.
  • the user terminal disclosed in this specification can utilize information sensed by at least two of these sensors by combining them.
• the output unit 350 is for generating output related to vision, hearing, or touch, and may include at least one of a display unit 351, an audio output unit 352, a haptic module 353, and an optical output unit 354.
  • the display unit 351 can implement a touch screen by forming a layered structure or being integrated with the touch sensor.
• This touch screen can function as a user input unit 323 that provides an input interface between the user terminal 300 and the user (U), and can simultaneously provide an output interface between the user terminal 300 and the user (U).
  • the interface unit 360 serves as a passageway for various types of external devices connected to the user terminal 300.
• This interface unit 360 may include at least one of a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device equipped with an identification module, an audio input/output (I/O) port, a video input/output (I/O) port, and an earphone port.
• in response to an external device (e.g., the computing device 100) being connected to the interface unit 360, the user terminal 300 may perform appropriate control related to the connected external device.
  • the memory 370 stores data supporting various functions of the user terminal 300.
• the memory 370 may store a plurality of application programs (applications) running on the user terminal 300, and data, commands, or instructions for operating the user terminal 300. At least some of these applications may be downloaded from an external server through wireless communication. Additionally, at least some of these applications may be present on the user terminal 300 from the time of shipment for the basic functions of the user terminal 300 (e.g., receiving and sending calls and messages). Meanwhile, an application program may be stored in the memory 370, installed on the user terminal 300, and driven by the control unit 380 to perform an operation (or function) of the user terminal.
  • the memory 370 may store instructions for the operation of the control unit 380.
• In addition to operations related to the application programs, the control unit 380 typically controls the overall operation of the user terminal 300.
  • the control unit 380 can provide or process appropriate information or functions to the user by processing signals, data, information, etc. input or output through the components discussed above, or by running an application program stored in the memory 370.
  • the control unit 380 may control at least some of the components examined with FIG. 2F in order to run the application program stored in the memory 370. Furthermore, the control unit 380 may operate at least two of the components included in the user terminal 300 in combination with each other in order to run the application program.
  • the power supply unit 390 receives external power and internal power under the control of the control unit 380 and supplies power to each component included in the user terminal 300.
  • This power supply unit 390 includes a battery, and the battery may be a built-in battery or a replaceable battery.
  • At least some of the above components may cooperate with each other to implement the operation, control, or control method of the user terminal according to various embodiments described below. Additionally, the operation, control, or control method of the user terminal may be implemented on the user terminal by running at least one application program stored in the memory 370.
  • the environment creation device 30 can adjust the user's sleeping environment.
• the environment creation device 30 may include one or more environment creation modules, and can adjust the user's sleeping environment by operating an environment creation module related to at least one of air quality, illuminance, temperature, wind direction, humidity, and sound.
• the environment creation device 30 can be implemented with an air purifier capable of controlling air quality, a lighting device capable of controlling the amount of light (illuminance), an air conditioner capable of controlling temperature, a humidifier/dehumidifier capable of controlling humidity, an audio device/speaker capable of controlling sound, etc.
  • Figure 8 shows an exemplary flowchart for providing a method of creating a sleep environment according to sleep state information related to an embodiment of the present invention.
  • the method may include acquiring sleep state information of the user (S1000).
  • the method may include generating environment composition information based on sleep state information (S2000).
  • the method may include transmitting environment creation information to the environment creation device 30 (S3000).
  • the processor 110 may generate environment composition information based on sleep state information and/or sleep stage information.
• Sleep state information is information related to whether the user is sleeping, and may include at least one of first sleep state information indicating that the user is before sleep, second sleep state information indicating that the user is sleeping, and third sleep state information indicating that the user is after sleep.
• the processor 110 may generate first environment creation information based on the first sleep state information. Specifically, when first sleep state information indicating that the user is before sleep is obtained, the processor 110 may generate first environment creation information based on it.
  • the first environment composition information may be information about the intensity and illuminance of light that naturally induces sleep.
  • the first environment creation information may be control information that supplies 3000K white light at an illumination intensity of 30 lux from the time of sleep induction until the time the second sleep state information is acquired.
  • the time to induce sleep may be determined by the processor 110.
  • the processor 110 may determine the time to induce sleep through information exchange with the user's user terminal 300.
  • the user may set the time at which he or she wants to sleep through the user terminal 300 and transmit the time to the processor 110.
  • the processor 110 may determine the time to induce sleep based on the time when the user wants to sleep from the user terminal 300.
  • the processor 110 may determine a point in time 20 minutes prior to when the user wants to sleep as the time to induce sleep.
  • the processor 110 may determine 10:40 as the time to induce sleep.
  • the specific numerical description of the above-mentioned time points is only an example, and the present invention is not limited thereto.
  • the processor 110 may obtain the user's sleep intention information based on environmental sensing information and determine the time to induce sleep based on the sleep intention information.
  • Sleep intention information may be information that represents the user's intention to sleep in quantitative numbers. For example, as the user's sleep intention is higher, sleep intention information closer to 10 may be calculated, and as the sleep intention is lower, sleep intention information closer to 0 may be calculated.
  • the detailed numerical description of the above-described sleep intention information is only an example, and the present invention is not limited thereto.
  • the environmental composition information may be a signal generated by the computing device 100 based on determination of the user's sleep state information.
• environment creation information may include information about lowering or increasing illumination, etc. If the environment creation device 30 is a lighting device, the environment creation information may include control information to gradually increase the illuminance of 3000K white light from 0 lux to 250 lux starting 30 minutes before the predicted wake-up time.
• the environment creation information may include, based on the user's real-time sleep state, various information related to removing fine dust (including ultrafine dust), removing harmful gases, driving allergy care, deodorization/sterilization operation, dehumidification/humidification control, ventilation intensity control, air purifier operation noise control, LED lighting, management of smog-causing substances (SO2, NO2), removal of household odors, etc.
  • the environmental composition information may include control information for adjusting at least one of temperature, humidity, wind direction, or sound.
  • One or more environment creation modules included in the environment creation device 30 may include, for example, at least one of an illumination control module, a temperature control module, a wind direction control module, a humidity control module, and a sound control module. However, it is not limited thereto, and the one or more environment creation modules may further include various environment creation modules that can bring about changes in the user's sleeping environment. That is, the environment creation device 30 may adjust the user's sleeping environment by operating one or more environment creation modules based on the environment control signal of the computing device 100.
  • the processor 110 may determine the time to induce sleep based on sleep intention information. Specifically, the processor 110 may identify the time when sleep intention information exceeds a predetermined threshold score as the sleep induction time. That is, when high sleep intention information is obtained, the processor 110 may identify this as a time appropriate for sleep induction, that is, a sleep induction time.
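• The two ways of determining the sleep induction time described above (a user-set bedtime minus a fixed offset, or a sleep intention score crossing a threshold) can be sketched as follows; the 20-minute offset comes from the text, while the threshold value and all names are assumptions.

```python
from datetime import datetime, timedelta
from typing import Optional

LEAD_MINUTES = 20           # offset before the desired bedtime (from the text)
INTENTION_THRESHOLD = 7.0   # assumed threshold on the 0-10 intention scale

def sleep_induction_time(desired_bedtime: Optional[datetime],
                         intention_score: float,
                         now: datetime) -> Optional[datetime]:
    # Case 1: the user set a desired bedtime through the user terminal 300.
    if desired_bedtime is not None:
        return desired_bedtime - timedelta(minutes=LEAD_MINUTES)
    # Case 2: sleep intention information exceeds the predetermined score.
    if intention_score > INTENTION_THRESHOLD:
        return now
    return None
```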
• the processor 110 may determine the timing of inducing the user to sleep. Accordingly, when the first sleep state information indicating that the user is before sleep is acquired, the processor 110 can generate first environment creation information (e.g., supplying 3000K white light at an illuminance of 30 lux).
• the processor 110 can generate first environment creation information to adjust the light from the time when the user is predicted to be preparing for sleep (e.g., the sleep induction time) to the time when the user falls asleep (i.e., when the second sleep state information is obtained), and can decide to transmit the first environment creation information to the environment creation device 30. Accordingly, 3000K white light can be supplied at an illuminance of 30 lux from 20 minutes before the user falls asleep (e.g., from the sleep induction time) until the moment the user falls asleep. This light is well suited to melatonin secretion before the user falls asleep, and can improve the user's sleep efficiency by encouraging the user to fall asleep naturally.
• in addition, the processor 110 can generate first environment creation information for controlling the air purifier from the time when the user is predicted to be preparing for sleep (e.g., the sleep induction time) to the time when the user falls asleep (i.e., when the second sleep state information is obtained).
• for example, first environment creation information may be generated that controls the air purifier to remove fine dust and harmful gases in advance until a predetermined time (e.g., 20 minutes before the user's sleep).
• the first environment creation information may include information such as controlling the air purifier to generate noise (white noise) at a level that can induce sleep just before sleep, adjusting the blowing intensity below a preset intensity, or lowering the intensity of the LED.
  • the first environment creation information may include information for controlling the air purifier to perform dehumidification/humidification based on temperature and humidity information in the sleeping space. Additionally, the first environment creation information may include control information to adjust personalized blowing intensity and noise according to the operation history of the air purifier and the acquired sleep state (quality of sleep).
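• Illustratively, first environment creation information targeting the air purifier might be represented as a structure like the one below; every field name and value is an assumption for the sketch, not a format defined in the patent.

```python
# Hypothetical payload for first environment creation information
# sent to an air purifier before sleep.
first_env_info = {
    "pre_clean_until_min_before_sleep": 20,  # remove dust/gas in advance
    "white_noise": True,                     # sleep-inducing noise just before sleep
    "max_fan_level": 1,                      # blowing intensity below preset level
    "led_brightness": 0.1,                   # lowered LED intensity
    "dehumidify": "auto",                    # based on room temperature/humidity
}
```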
  • the processor 110 may generate second environment composition information based on second sleep state information.
• the second environment creation information may be control information that minimizes illuminance to create a dark room environment without light. For example, if there is interference from light during sleep, the likelihood of fragmented sleep increases, making it difficult to get a good night's sleep.
• based on the second sleep state information, the processor 110 can generate second environment creation information to control the air purifier to turn off its LED, operate with noise below a preset level, adjust the blowing intensity below a preset intensity, set the blowing temperature within a preset range, or maintain the humidity of the sleeping space at a predetermined level.
• the second environment creation information may include control information for increasing the blowing intensity to improve air quality while the user is in deep sleep, since in that sleep stage there is less concern about waking the user.
• when the processor 110 detects that the user has entered sleep (or a given sleep stage), i.e., when the second sleep state information is obtained, it can generate second environment creation information, i.e., control information to prevent light from being supplied or to minimize the operation of the air purifier. Accordingly, the probability of the user having a deep sleep increases, and the quality of sleep can be improved.
  • the processor 110 may generate third environment composition information based on the wake-up induction point.
  • the third environment creation information may be characterized as control information to supply 3000K white light by gradually increasing the illuminance from 0 lux to 250 lux from the time of inducing wake-up to the time of waking up.
  • the third environment creation information may be control information related to gradually increasing the illumination intensity starting 30 minutes before the user wakes up (i.e., the time of inducing the user to wake up).
• the wake-up induction time may be determined based on the predicted wake-up time.
• the predicted wake-up time may be information about the time when the user is expected to wake up.
• for example, the predicted wake-up time may be 7 o'clock for a first user.
• the detailed description of the above-mentioned predicted wake-up time is only an example, and the present invention is not limited thereto.
• the third environment creation information may include information for controlling the air purifier to induce waking up by lowering the blowing intensity and noise at the wake-up time. Additionally, the third environment creation information may include control information for controlling the air purifier to generate white noise to gradually induce waking up, and control information for keeping the noise of the air purifier below a preset level after waking up. The third environment creation information may also include control information for controlling the air purifier in conjunction with the predicted wake-up time and the recommended wake-up time. The recommended wake-up time may be a time automatically extracted according to the user's sleep pattern, and the predicted wake-up time is as described below.
• the predicted wake-up time may be determined in advance through information exchange with the user's user terminal 300.
• the user may set the time at which he or she wishes to wake up and transmit it to the processor 110 through the user terminal 300. That is, the processor 110 may obtain the predicted wake-up time based on the time set by the user of the user terminal 300. For example, when the user sets an alarm time through the user terminal 300, the processor 110 may determine the set alarm time as the predicted wake-up time.
  • the wake-up prediction time may be determined based on the sleep entry time identified through the second sleep state information.
  • the processor 110 may determine the time at which the user enters sleep through second sleep state information indicating that the user is sleeping.
• the processor 110 may determine the predicted wake-up time based on the sleep entry time determined through the second sleep state information. For example, the processor 110 may determine the time point 8 hours after the sleep entry time, 8 hours being an appropriate sleep duration, as the predicted wake-up time. For a specific example, if the sleep onset time is 11 o'clock, the processor 110 may determine the predicted wake-up time to be 7 o'clock.
  • the specific numerical description for each time point described above is only an example, and the present invention is not limited thereto. That is, the processor 110 may determine the wake-up prediction time based on the time when the user falls asleep.
• the recommended wake-up time may be determined based on the user's sleep stage information. For example, if a user wakes up in the REM stage, there is a high possibility that he or she will wake up feeling refreshed. During one night's sleep, the user may go through sleep cycles in the order of light sleep, deep sleep, light sleep, and REM sleep, and can wake up refreshed when waking up in the REM sleep stage. Preferably, in consideration of the user's appropriate or desired sleep time, the recommended wake-up time can be determined so as to at least satisfy that appropriate or desired sleep time.
• the processor 110 may determine the user's predicted wake-up time through sleep stage information related to the user's sleep stages. As a specific example, the processor 110 may determine, through the sleep stage information, the time when the user changes from the REM stage to another sleep stage (preferably, the time immediately before transitioning out of the REM stage) as the recommended wake-up time. That is, the processor 110 may determine the predicted wake-up time based on information on the sleep stage in which the user can wake up most refreshed (i.e., the REM sleep stage).
• the processor 110 may determine the user's predicted wake-up time based on at least one of user settings, sleep entry time, and sleep stage information. Additionally, when the processor 110 determines the predicted wake-up time, which is the time when the user wants to wake up, it may determine the wake-up induction time based on that predicted wake-up time. For example, the processor 110 may determine a point in time 30 minutes before the time the user wants to wake up as the wake-up induction time. For a specific example, if the time at which the user wants to wake up (i.e., the predicted wake-up time) is 7:00, the processor 110 may determine 6:30 as the wake-up induction time.
  • the specific description of the above-mentioned time points is only an example, and the present invention is not limited thereto.
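• The predicted wake-up time and wake-up induction time described above can be computed as in the following sketch; the 8-hour sleep duration and the 30-minute offset come from the text, while everything else is an assumption.

```python
from datetime import datetime, timedelta
from typing import Optional, Tuple

APPROPRIATE_SLEEP_HOURS = 8   # appropriate sleep duration (from the text)
WAKE_LEAD_MINUTES = 30        # induction precedes the predicted time (from the text)

def wake_times(sleep_entry: datetime,
               alarm: Optional[datetime] = None) -> Tuple[datetime, datetime]:
    """Predicted wake-up time from the user-set alarm if available,
    otherwise sleep entry + 8 hours; induction time is 30 minutes earlier."""
    predicted = alarm or sleep_entry + timedelta(hours=APPROPRIATE_SLEEP_HOURS)
    induction = predicted - timedelta(minutes=WAKE_LEAD_MINUTES)
    return predicted, induction

# Example from the text: sleep entry at 23:00 -> predicted 07:00, induction 06:30.
```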
• the processor 110 determines the wake-up induction time by determining the predicted wake-up time at which the user's wake-up is expected, and can generate third environment creation information to gradually increase 3000K white light from an illuminance of 0 lux to 250 lux from the wake-up induction time until the wake-up time (e.g., until the user actually wakes up).
• the processor 110 may decide to transmit the corresponding third environment creation information to the environment creation device 30, and accordingly, the environment creation device 30 may perform light-related adjustment operations in the space where the user is located based on the third environment creation information.
  • the environment creation device 30 can control the light supply module to gradually increase the illuminance of 3000K white light from 0 lux to 250 lux starting 30 minutes before waking up.
  • the processor 110 may obtain fourth environment creation information based on third sleep state information. Specifically, the processor 110 may obtain the user's sleep disease information.
  • sleep disorder information may include delayed sleep phase syndrome. Delayed sleep phase syndrome can be a symptom of a sleep disorder in which one is unable to fall asleep at the desired time and the ideal sleep time is pushed back.
• blue-light therapy is one of the treatment methods for delayed sleep phase syndrome, and may be a treatment that supplies blue light for about 30 minutes after the user wakes up at the desired wake-up time. If this supply of blue light is repeated every morning, the circadian rhythm can be restored to its original state, preventing the user from falling asleep later at night than normal.
• the processor 110 may generate the fourth environment creation information based on the sleep disease information and the third sleep state information. For example, when sleep disease information indicating that the user corresponds to delayed sleep phase syndrome and third sleep state information indicating that the user is post-sleep (i.e., has woken up) are obtained through the user terminal 300, the processor 110 can generate the fourth environment creation information.
  • the fourth environment creation information may be control information to supply blue light with an illumination intensity of 300 lux, a hue of 221 degrees, a saturation of 100%, and a brightness of 56% for a preset time from the time of waking up.
  • blue light with an illuminance of 300 lux, a hue of 221 degrees, 100% saturation, and 56% brightness may refer to blue light for treating delayed sleep phase syndrome.
• for example, the processor 110 may determine the wake-up time as 7 o'clock based on the third sleep state information, and generate fourth environment creation information to supply blue light with an illuminance of 300 lux, a hue of 221 degrees, 100% saturation, and 56% brightness from 7 o'clock until a preset time point (e.g., 7:30).
  • the user's circadian rhythm can be adjusted to a normal range (for example, to fall asleep around 12 midnight and wake up around 7 am).
  • the quality of sleep of users with specific sleep disorders can be improved through the creation of fourth environment creation information.
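• The blue light specified above (hue 221 degrees, 100% saturation, 56% brightness, at 300 lux) can be converted into an RGB drive value as in this sketch; the dictionary layout for the control information is an assumption.

```python
import colorsys

# HSV values from the text: hue 221 deg, saturation 100%, brightness 56%.
r, g, b = colorsys.hsv_to_rgb(221 / 360.0, 1.00, 0.56)
rgb_8bit = tuple(round(c * 255) for c in (r, g, b))  # about (0, 45, 143)

# Hypothetical fourth environment creation information payload.
fourth_env_info = {
    "illuminance_lux": 300,
    "rgb": rgb_8bit,
    "duration_min": 30,   # about 30 minutes after wake-up, per the text
}
```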
• the processor 110 may decide to transmit environment creation information to the environment creation device. Specifically, the processor 110 may generate environment creation information related to illuminance adjustment and decide to transmit it to the environment creation device 30, thereby controlling the illuminance adjustment operation of the environment creation device 30.
• light may be one of the representative factors that affect sleep quality. For example, depending on the light intensity, color, exposure level, etc., it can have a good or bad effect on the quality of sleep. Accordingly, the processor 110 can improve the user's sleep quality by adjusting the illuminance. For example, the processor 110 may monitor the situation before or after falling asleep and adjust the illumination to effectively wake the user up accordingly. That is, the processor 110 can determine the sleep state (e.g., sleep stage) and automatically adjust the light intensity to maximize the quality of sleep.
  • the processor 110 may receive sleep plan information from the user terminal 300.
  • Sleep plan information is information generated by the user through the user terminal 300 and may include, for example, information about bedtime and wake-up time.
  • the processor 110 may generate external environment creation information based on sleep plan information.
  • the processor 110 may identify the user's bedtime through sleep plan information and generate external environment creation information based on the corresponding bedtime.
• for example, as shown in FIG. 7, the processor 110 can generate first environment creation information to provide 3000K white light at an illuminance of 30 lux, based on the bed position, 20 minutes before bedtime. In other words, it is possible to create an illumination level that induces the user to fall asleep naturally in relation to bedtime.
• the processor 110 may determine the point in time at which the user enters sleep through the second sleep state information and generate second environment creation information based on it. For example, as shown in FIG. 7, the processor 110 can generate second environment creation information to create an atmosphere such as a quiet dark room by minimizing light from the time of sleep onset or by switching the air purifier to sleep mode. This second environment creation information has the effect of improving the quality of sleep by allowing the user to fall into a deep sleep.
  • processor 110 may generate external environment composition information based on sleep stage information.
  • the sleep stage information may include information about changes in the user's sleep stage acquired in time series through analysis of sleep sound information.
• for example, when the processor 110 identifies through the user's sleep stage information that the user has entered a sleep stage (e.g., light sleep), it can generate external environment creation information that minimizes illumination to create a dark room environment without light and that, for a good night's sleep, removes fine dust/harmful gases, controls air temperature and humidity, and adjusts LED lighting, driving-noise level, and blowing volume.
  • the user's sleep efficiency can be improved by creating an optimal sleep environment, that is, optimal illumination for each user's sleep stage.
• the processor 110 may generate environment creation information to provide appropriate illumination or adjust air quality according to changes in the user's sleep stage during sleep. For example, soft red light may be supplied when the user changes from light sleep to deep sleep, or the illumination level may be lowered or blue light supplied when the user changes from REM sleep to light sleep; in this way, more diverse external environment creation information can be generated depending on changes in sleep stage. By automatically considering the situation during sleep as well as before sleep or immediately after waking up, this can have the effect of maximizing the user's sleep quality across the entire sleep experience, not just a part of it.
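• These stage-transition examples can be expressed as a simple lookup, sketched below; the action values are assumptions that only mirror the examples given in the text.

```python
# Assumed mapping from sleep-stage transitions to lighting actions.
TRANSITION_ACTIONS = {
    ("light", "deep"): {"light": "soft_red"},
    ("REM", "light"): {"light": "blue", "lower_illuminance": True},
}

def lighting_action(prev_stage: str, new_stage: str):
    """Return the lighting adjustment for a stage transition, if any."""
    return TRANSITION_ACTIONS.get((prev_stage, new_stage))
```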
• the processor 110 may identify the user's wake-up time through the sleep plan information, generate a predicted wake-up time based on that wake-up time, and generate environment creation information accordingly. For example, as shown in FIG. 7, the processor 110 can generate third environment creation information to gradually increase 3000K white light, based on the bed location, from 0 lux to 250 lux starting 30 minutes before the predicted wake-up time. This third environment creation information can induce the user to wake up naturally and refreshed at the desired wake-up time.
• the processor 110 may decide to transmit the environment creation information to the environment creation device 30. That is, the processor 110 can improve the user's sleep quality by generating external environment creation information that allows the user to easily fall asleep or wake up naturally at bedtime or wake-up time based on the sleep plan information.
• the processor 110 may generate recommended sleep plan information based on sleep stage information. Specifically, the processor 110 can obtain information about changes in the user's sleep stages (e.g., sleep cycles) through the sleep stage information and set the expected wake-up time based on it. For example, a typical sleep cycle may go through light sleep, deep sleep, light sleep, and REM sleep stages. The processor 110 can determine that the time after REM sleep is when the user can wake up most refreshed, determine the wake-up time after REM, and thereby generate recommended sleep plan information. Additionally, the processor 110 may decide to generate environment creation information according to the recommended sleep plan information and transmit it to the environment creation device 30. Accordingly, the user can wake up naturally according to the recommended sleep plan information recommended by the processor 110. In other words, the processor 110 recommends the user's wake-up time according to changes in the user's sleep stages; this may be a time at which the user's fatigue is minimized, which can have the advantage of improving the user's sleep efficiency.
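• A minimal sketch of deriving a recommended wake-up time from a hypnogram, waking the user at the end of a REM run once the minimum desired sleep duration is satisfied; the 30-second epoch convention and all names are assumptions.

```python
EPOCH_SEC = 30  # common hypnogram scoring interval (assumption)

def recommended_wake_epoch(hypnogram, min_sleep_epochs):
    """hypnogram: list of stage labels ('light', 'deep', 'REM', ...), one
    per epoch; epoch index * EPOCH_SEC gives seconds after sleep entry.
    Return the last REM epoch after the minimum sleep duration, i.e., the
    moment just before transitioning out of REM."""
    for i in range(min_sleep_epochs, len(hypnogram) - 1):
        if hypnogram[i] == "REM" and hypnogram[i + 1] != "REM":
            return i
    return None
```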
• the sleep environment control device 400, the user terminal 300, and the external server 20 can mutually transmit and receive data for the system according to embodiments of the present invention through a network.
• the user terminal 300 is a terminal that can receive information related to the user's sleep through information exchange with the sleep environment control device 400, and may refer to a terminal owned by the user.
  • the general configuration and functions of the user terminal 300 may be as described above.
  • the user terminal 300 can acquire sound information related to the space where the user is located.
  • sound information may mean sound information obtained in the space where the user is located.
  • Acoustic information can be obtained in relation to the user's activity or sleep in a non-contact manner.
  • acoustic information may be acquired in the space while the user is sleeping.
  • the sound information acquired through the user terminal 300 may be the basis for obtaining the user's sleep state information in the present invention.
  • sleep state information related to whether the user is before, during, or after sleep may be obtained through sound information obtained in relation to the user's movement or breathing.
  • information about changes in the user's sleep stage during sleep time may be obtained through sound information.
  • the sleep environment control device 400 of the present invention can receive health checkup information or sleep checkup information from the external server 20 and construct a learning data set based on the corresponding information.
  • the description regarding the external server 20 has been described in detail above, and the description will be omitted here.
• the acoustic information used by the sleep environment control device 400 to analyze the sleep state may be acquired in a non-invasive manner during the user's activities or sleep in the relevant space.
  • the sound information may include sounds generated as the user tosses and turns during sleep, sounds related to muscle movements, or sounds related to the user's breathing during sleep.
  • the environmental sensing information may include sleep sound information, and the sleep sound information may mean sounds related to movement patterns and breathing patterns that occur during the user's sleep.
  • sound information may be obtained through at least one of the user terminal 300 and the sound collection unit 414 carried by the user.
  • environmental sensing information related to the user's activities in a space may be obtained through a microphone module provided in the user terminal 300 and the sound collection unit 414.
  • the configuration of the microphone module provided in the user terminal 300 or the sound collection unit 414 is the same as described above.
• the acoustic information that is the subject of analysis in the present invention relates to the user's breathing and movements acquired during sleep; it consists of very small sounds (i.e., sounds that are difficult to distinguish) and is acquired together with other sounds in the sleep environment. Therefore, when it is acquired with a low signal-to-noise ratio through a microphone module as described above, detection and analysis may be very difficult.
• the sleep environment control device 400 may obtain sleep state information based on acoustic information acquired through a microphone module composed of MEMS. Specifically, the sleep environment control device 400 can convert and/or adjust ambiguously acquired acoustic information containing a lot of noise into data that can be analyzed, and can perform learning of the artificial neural network using the converted and/or adjusted data. When pre-training of the artificial neural network is completed, the learned neural network (e.g., a sound analysis model) can obtain the user's sleep state information based on the data acquired (e.g., converted and/or adjusted) in response to the acoustic information.
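• One plausible form of the conversion/adjustment step described above is a log-magnitude spectrogram, sketched below with NumPy; the frame and hop sizes are assumptions, and this is not the patent's actual preprocessing.

```python
import numpy as np

def to_log_spectrogram(audio: np.ndarray, frame: int = 1024, hop: int = 256):
    """Convert noisy 1-D sleep audio into a log-magnitude spectrogram
    that a sound-analysis neural network could consume."""
    windows = np.lib.stride_tricks.sliding_window_view(audio, frame)[::hop]
    spectrum = np.abs(np.fft.rfft(windows * np.hanning(frame), axis=1))
    return np.log1p(spectrum)  # compress the dynamic range of faint sounds
```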
  • the sleep state information may include sleep stage information related to changes in the user's sleep stage during sleep, as well as information related to whether the user is sleeping.
  • the sleep state information may include sleep stage information indicating that the user was in REM sleep at a first time point, and that the user was in light sleep at a second time point different from the first time point. In this case, through the corresponding sleep state information, information may be obtained that the user fell into a relatively deep sleep at the first time and had a lighter sleep at the second time.
• the sleep environment control device 400 can collect sound with a low signal-to-noise ratio through a commonly used user terminal (e.g., an artificial intelligence speaker, a bedroom IoT device, a mobile phone, etc.) or the sound collection unit 414, and can process the collected data to provide sleep state information on whether the user is before, during, or after sleep and on changes in sleep stage.
  • the sleep environment control device 400 may be a terminal or a server, and may include any type of device.
• the sleep environment control device 400 may be a digital device equipped with a processor and memory and having computing power, such as a laptop computer, a notebook computer, a desktop computer, a web pad, or a mobile phone.
  • the sleep environment control device 400 may be a web server that processes services.
  • the types of servers described above are merely examples and the present invention is not limited thereto.
  • the sleep environment control device 400 may be a server that provides cloud computing services.
  • the server in question has been described in detail above, so the description will be omitted here.
• Figure 2G shows an exemplary block diagram of a sleep environment control device related to one embodiment of the present invention.
  • the sleep environment control device 400 may include a receiving module 410 and a transmitting module 420.
  • the sleep environment control device 400 may include a transmission module 420 that transmits a wireless signal and a reception module 410 that receives the transmitted wireless signal.
• the wireless signal may refer to an orthogonal frequency division multiplexing (OFDM) signal.
  • the wireless signal may be a WiFi-based OFDM sensing signal.
  • the transmitting module 420 of the present invention can be implemented through a laptop, smartphone, tablet PC, smart speaker (AI speaker), etc.
• the receiving module 410 can be implemented through a Wi-Fi receiver.
  • the receiving module 410 may be implemented through various computing devices such as laptops, smartphones, and tablet PCs.
  • the transmitting module 420 and the receiving module 410 may be equipped with wireless chips that comply with Wi-Fi 802.11n, 802.11ac, or other standards that support OFDM. That is, the sleep environment control device 400 that acquires object state information with high reliability can be implemented using relatively inexpensive equipment.
• the transmitting module 420 can transmit a wireless signal in the direction where the object is located, and the receiving module 410, provided at a predetermined distance from the transmitting module 420, can receive the wireless signal transmitted from the transmitting module 420. Since these wireless signals are orthogonal frequency division multiplexed signals, they can be transmitted or received through a plurality of subcarriers.
  • the transmitting module 420 and the receiving module 410 may be provided to have a predetermined separation distance.
  • the predetermined separation distance may mean the space in which the object is active or located.
  • the transmitting module 420 and the receiving module 410 may be positioned opposite to each other based on a preset area.
• the preset area 11a is an area related to the position where the user sleeps, as shown in FIG. 2I, and may be, for example, an area where a bed is located.
  • the transmitting module 420 and the receiving module 410 may be provided on both sides of the bed where the user sleeps.
• the sleep environment control device 400 of the present invention can obtain, based on the WiFi-based OFDM signal transmitted and received through the transmitting module 420 and the receiving module 410, object state information, which is information about whether the user is located in a preset area and about the user's movement or breathing.
  • the transmitting module 420 and the receiving module 410 may transmit and receive wireless signals (eg, OFDM signals) through one or more antennas.
• for example, when each of the transmitting module 420 and the receiving module 410 is equipped with 3 antennas, channel state information related to a total of 192 antenna-subcarrier combinations can be obtained every frame.
  • the detailed numerical description of the antenna and subcarrier described above is only an example, and the present disclosure is not limited thereto.
  • a plurality of transmitting modules 420 and receiving modules 410 may be provided.
  • each of three transmitting modules and four receiving modules may be provided with a predetermined separation distance.
  • wireless signals transmitted and received by each of the plurality of transmitting modules and receiving modules may be different from each other.
  • the wireless signal received through the receiving module 410 is a wireless signal that passes through a channel corresponding to a preset area and may include information indicating the characteristics of the channel.
  • the reception module 410 may obtain channel state information from a wireless signal.
• the channel state information is information indicating characteristics of the channel related to the space where the user is located, and may be calculated based on the wireless signal transmitted from the transmitting module 420 and the wireless signal received through the receiving module 410.
  • the wireless signal transmitted from the transmission module 420 may pass through a specific channel (i.e., the space where the user is located) and be received through the reception module 410.
  • the wireless signal may be transmitted through a plurality of subcarriers corresponding to each multi-path.
  • the wireless signal received through the receiving module 410 may be a signal reflecting the user's movement in the preset area 11a.
  • the processor can obtain channel state information related to channel characteristics experienced as the wireless signal passes through the channel (i.e., the space where the user is located). This channel state information may consist of amplitude and phase.
• the sleep environment control device 400 can obtain, based on the wireless signal transmitted from the transmitting module 420 and the wireless signal received through the receiving module 410 (i.e., a signal reflecting the movement of the object), channel state information related to the characteristics of the space (e.g., a preset area) between the transmitting module 420 and the receiving module 410.
• when the receiving module 410 receives a wireless signal transmitted from the transmitting module 420, it may detect the user's movement based on the received wireless signal.
  • the receiving module 410 may obtain information regarding whether the user is located in a preset area through a change in channel state information.
• the channel state information obtained when the user is located between the transmitting module 420 and the receiving module 410 may differ from that obtained when the user is not located there.
• the transmitting module 420 and the receiving module 410 may be arranged so as to maximize the difference between the channel state information obtained when the user is located within the area (e.g., a preset area) between them and the channel state information obtained when the user is not.
  • a directional patch antenna may be provided corresponding to each of the transmitting module 420 and the receiving module 410.
• the directional patch antenna may be an antenna module composed of m × n patches (i.e., m horizontal patches by n vertical patches).
• the antenna beam may be preset so as to increase the difference in signals between when the user is located between the transmitting module 420 and the receiving module 410 and when the user is not.
• the beam width of the antenna is preset to be optimal, and the transmitting module 420 and the receiving module 410 can be placed so that the direction in which signals are transmitted and received through the directional patch antennas passes over the position where the user lies down. That is, a direct line-of-sight wireless link can be formed between the directional patch antennas of the transmitting module 420 and the receiving module 410.
• the antenna of each module can operate as a directional antenna to form a wireless link corresponding to a narrower area (e.g., a preset area).
• a wireless link can be formed between the antennas of the transmitting module 420 and the receiving module 410, and when a user is located within this wireless link, the user's body blocks the link and the link is distorted.
• a change in signal level (i.e., channel state information) can be detected through changes in the Received Signal Strength Indicator (RSSI) and the Channel State Information (CSI); accordingly, the receiving module 410 can determine through these changes whether the user is located in the preset area 11a.
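• A minimal sketch of presence detection from CSI amplitude, following the idea above that the user's body perturbs the link; the variance statistic and threshold are assumptions, not the patent's detection rule.

```python
import numpy as np

def user_in_area(csi_frames: np.ndarray, var_threshold: float = 0.05) -> bool:
    """csi_frames: complex array of shape (frames, subcarriers) collected
    over a short window. CSI consists of amplitude and phase; here the
    amplitude's variability across frames rises when a body disturbs the
    link between transmitter and receiver."""
    amplitude = np.abs(csi_frames)              # per-subcarrier amplitude
    variability = amplitude.std(axis=0).mean()  # averaged over subcarriers
    return variability > var_threshold
```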
  • information about whether the user is located in the preset area 11a may be used to determine whether to operate the environment creation unit 415 or to determine the user's intention to sleep.
• the receiving module 410 may calculate the user's sleep state information and adjust the user's sleep environment based on it. Specifically, the receiving module 410 can acquire sleep state information related to whether the user is before, during, or after sleep based on the acquired sensing information, and can adjust the environment of the space where the user is located according to the sleep state information. For a specific example, when sleep state information indicating that the user is before sleep is acquired, the receiving module 410 can generate environment creation information related to the intensity and illuminance of light for inducing sleep (e.g., 3000K white light at 30 lux) based on that sleep state information.
• the receiving module 410 can adjust the light intensity and illuminance of the space where the user is located, based on the environment creation information related to the intensity and illuminance of light for inducing sleep, to values appropriate for inducing sleep (e.g., 3000K white light at an illuminance of 30 lux).
• when the receiving module 410 acquires sleep state information indicating that the user is before sleep, it can generate environment creation information for controlling the air purifier from the time when the user is predicted to be preparing for sleep (e.g., the sleep induction time) to the time when the user falls asleep (i.e., when the second sleep state information is acquired). Specifically, environment creation information can be generated to control the air purifier to remove fine dust and harmful gases in advance until a predetermined time (e.g., 20 minutes before the user's sleep).
• the environment creation information may include information such as controlling the air purifier to generate noise (white noise) at a level that can induce sleep just before sleep, adjusting the blowing intensity below a preset intensity, or lowering the intensity of the LED.
  • the first environment creation information may include information for controlling the air purifier to perform dehumidification/humidification based on temperature and humidity information in the sleeping space. Additionally, the first environment creation information may include control information to adjust personalized blowing intensity and noise according to the operation history of the air purifier and the acquired sleep state (quality of sleep).
• Figure 2H shows an exemplary block diagram of a receiving module and a transmitting module related to one embodiment of the present invention.
• the reception module 410 may include a network unit 411, a memory 412, a sensor unit 413, a sound collection unit 414, an environment creation unit 415, and a reception control unit 416.
  • the receiving module 410 is not limited to the components described above. That is, depending on the implementation aspect of the embodiments of the present invention, additional components may be included or some of the above-described components may be omitted.
• as shown in FIG. 2H, the transmission module 420 may include a transmission unit 421 that transmits a wireless signal and a transmission control unit 422 that controls the wireless signal transmission operation of the transmission unit 421.
  • the transmission control unit 422 may determine when a wireless signal is transmitted through the transmission unit 421.
  • the transmission control unit 422 may control the transmission unit 421 in response to the time when the sleep measurement mode is started, thereby allowing a wireless signal to be transmitted.
• the reception module 410 may include a network unit 411 that transmits and receives data with the transmission module 420, the user terminal 300, and the external server 20.
• the network unit 411 may transmit and receive, to and from other computing devices, servers, etc., data for performing the sleep environment adjustment method according to sleep state information according to an embodiment of the present invention. That is, the network unit 411 may provide a communication function between the receiving module 410 and the transmitting module 420, the user terminal 300, and the external server 20. For example, the network unit 411 can receive sleep checkup records and electronic health records for multiple users from a hospital server. For another example, the network unit 411 may receive sound information related to the space in which the user is active from the user terminal 300. For another example, the network unit 411 may transmit environment creation information for adjusting the environment of the space where the user is located to the environment creation unit 415. Additionally, the network unit 411 may allow information to be transferred between the sleep environment control device 400, the user terminal 300, and the external server 20 by calling procedures with the sleep environment control device 400.
  • the network unit 411 may be composed of any one or a combination of the various wired and wireless communication systems described above.
• the memory 412 may store a computer program for performing the sleep environment control method based on sleep state information according to an embodiment of the present invention, and the stored computer program can be read and driven by the reception control unit 416. Additionally, the memory 412 may store any type of information generated or determined by the reception control unit 416 and any type of information received by the network unit 411. Additionally, the memory 412 may store data related to the user's sleep. For example, the memory 412 may temporarily or permanently store input/output data (e.g., sound information related to the user's sleep environment, sleep state information corresponding to the sound information, or environment creation information according to the sleep state information).
• the memory 412 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card-type memory (e.g., SD or XD memory), ROM (Read-Only Memory), magnetic memory, a magnetic disk, and an optical disk.
  • the sleep environment control device 400 may operate in connection with web storage that performs a storage function of the memory 412 on the Internet.
  • the description of the memory described above is only an example, and the present invention is not limited thereto.
• When loaded into the memory 412, the computer program may include one or more instructions that cause the reception control unit 416 to perform methods/operations according to various embodiments of the present invention. That is, the reception control unit 416 can perform methods/operations according to various embodiments of the present invention by executing the one or more instructions.
  • the receiving module 410 may include a sensor unit 413 that acquires one or more sensing information related to a space.
• a space refers to the space where the user lives and may, for example, refer to a bedroom where the user sleeps.
  • the sensor unit 413 may include a first sensor unit that detects the user's movement in a space.
  • the first sensor unit may include at least one of a PIR sensor (Passive Infrared Sensor) and an ultrasonic sensor.
  • the PIR sensor can detect the user's movement within the detection range by detecting changes in the amount of infrared rays emitted from the user's body.
• the PIR sensor can detect the user's movements in the bedroom by identifying infrared rays of 8 μm to 14 μm emitted by the user's body.
  • An ultrasonic sensor generates sound waves and can detect the movement of an object by detecting a signal that is reflected and returned from a specific object.
  • an ultrasonic sensor generates sound waves in a bedroom space, and as the user enters the bedroom, it can detect the user's movement inside the bedroom through sound waves reflected by the user's body.
• the sensor unit 413 may include a second sensor unit that detects, based on a wireless signal, whether the user is located in a preset area of the space.
  • the second sensor unit may receive a wireless signal transmitted from the transmission module 420 and detect whether the user is located in a preset area based on the received wireless signal.
• the preset area relates to an area, among the areas in the space, where the user lies down to sleep, and may mean, for example, an area equipped with a bed.
• for example, the space may mean the interior of the bedroom
  • the preset area may mean the space where the bed is located.
  • the second sensor unit may be provided at a position opposite to the transmission module 420 based on a preset area.
  • the transmission module 420 and the second sensor unit may be provided on both sides of the bed where the user sleeps.
• the sleep environment control device 400 of the present invention can obtain object state information, that is, information about whether the user is located in a preset area and about the user's movement or breathing, based on the WiFi-based OFDM signal transmitted and received through the transmission module 420 and the reception module 410.
  • the reception module 410 may allow the environment creation unit 415 to operate when it is determined through the second sensor unit that the user is located in a preset area.
  • the reception module 410 may allow the operation of the environment creation unit 415 only when the user is detected to be located in the preset area 11a. That is, the receiving module 410 can control the operation of the environment creation unit 415 that performs an environment adjustment operation only when the user is located in a preset area.
  • the environment creation unit 415 may not perform an operation to change the sleeping environment if the user is not located in a specific location.
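• As an illustrative sketch only (not the patent's implementation), the presence-gating logic described above can be expressed as follows; `read_pir_motion` and `read_wireless_presence` are hypothetical stand-ins for the first and second sensor units:

```python
# Minimal sketch of gating the environment creation unit on user presence.
# The two sensor-reading helpers are hypothetical placeholders.

def read_pir_motion() -> bool:
    """Hypothetical first sensor unit: True if movement is detected in the space."""
    return True  # placeholder value

def read_wireless_presence() -> bool:
    """Hypothetical second sensor unit: True if the received wireless signal
    indicates the user is lying in the preset (bed) area."""
    return True  # placeholder value

def maybe_adjust_environment(apply_settings) -> bool:
    """Run the environment adjustment operation only when the user is detected
    in the preset area; otherwise leave the sleep environment unchanged."""
    if read_pir_motion() and read_wireless_presence():
        apply_settings()
        return True
    return False
```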
  • the sensor unit 413 may perform one or more environmental sensing functions to obtain indoor environmental information related to at least one of the user's body temperature, indoor temperature, indoor airflow, indoor humidity, and indoor illumination in relation to the user's sleeping environment.
• Indoor environment information is information related to the user's sleep environment, and may serve as a basis for assessing the influence of external factors on the user's sleep in connection with changes in the user's sleep stage.
  • One or more environmental sensing modules may include, for example, at least one sensor module selected from a temperature sensor, an airflow sensor, a humidity sensor, an acoustic sensor, and an illuminance sensor. However, it is not limited to this, and may further include various sensors capable of measuring external environments that may affect the user's sleep.
  • the receiving module 410 may include a sound collection unit 414.
  • the sound collection unit 414 includes a small microphone module and can obtain information about sounds occurring in the space where the user sleeps.
  • the microphone module provided in the sound collection unit 414 may be composed of MEMS (Micro-Electro Mechanical Systems) that is relatively small in size. These microphone modules are cost-effective and can be manufactured very compactly, but may have a lower signal-to-noise ratio (SNR) than a condenser microphone or dynamic microphone.
  • a low signal-to-noise ratio may mean that the ratio of noise, which is a sound that is not to be identified, to the sound that is to be identified is high, making it difficult to identify the sound (i.e., unclear).
  • Information subject to analysis in the present invention may be acoustic information related to the user's breathing and movement acquired during sleep, that is, sleep acoustic information.
• This sleep sound information concerns very faint sounds such as the user's breathing and movement, and is acquired along with other sounds in the sleep environment, so when it is acquired through a microphone module with a low signal-to-noise ratio as described above, detection and analysis can be very difficult. Accordingly, when sleep sound information with a low signal-to-noise ratio is obtained, the reception control unit 416 may process it into data suitable for processing and/or analysis.
  • the receiving module 410 may include an environment creation unit 415.
  • the environment creation unit 415 can adjust the user's sleeping environment. Specifically, the environment creation unit 415 may adjust at least one of air quality, illumination, temperature, wind direction, humidity, and sound of the space where the user is located based on the environment creation information.
  • the environmental composition information may be a signal generated by the reception control unit 416 based on determination of the user's sleep state information.
  • environment creation information may include information about lowering or increasing illumination, etc.
  • the environmental composition information may include control information for adjusting at least one of temperature, humidity, wind direction, or sound.
• the environment creation information may include, based on the user's real-time sleep state, various information related to fine dust removal, harmful gas removal, allergy care operation, deodorization/sterilization operation, dehumidification/humidification control, blowing intensity control, air purifier operation noise control, LED lighting, and the like.
  • the detailed description of the above-described environmental composition information is only an example, and the present invention is not limited thereto.
• the environment creation unit 415 may perform at least one of illuminance control, temperature control, wind direction control, humidity control, and sound control. However, it is not limited to this, and the environment creation unit may further perform various control operations that can bring about changes in the user's sleeping environment. That is, the environment creation unit 415 can adjust the user's sleeping environment by performing various control operations based on the environment control signal from the reception control unit 416.
  • the environment creation unit 415 may be implemented through connection through the Internet of Things (IOT). Specifically, the environment creation unit 415 may be implemented through linkage with various devices that can change the indoor environment in relation to the space where the user is located.
  • the environment creation unit 415 may be implemented as a smart air conditioner, smart heater, smart boiler, smart window, smart humidifier, smart dehumidifier, and smart lighting based on connection through the Internet of Things.
• the specific description of the above-described environment creation unit is only an example, and the present invention is not limited thereto.
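• For illustration, environment creation information could be dispatched to IoT devices roughly as below; the patent specifies only that devices such as smart lighting or a smart air conditioner are linked over the Internet of Things, so the field names and device interface here are hypothetical assumptions:

```python
# Sketch of applying environment creation information over IoT. The fields and
# the FakeDevice interface are illustrative assumptions, not the patent's API.
from dataclasses import dataclass

@dataclass
class EnvironmentCreationInfo:
    illuminance_lux: float | None = None   # smart lighting
    temperature_c: float | None = None     # smart air conditioner / boiler
    humidity_pct: float | None = None      # smart humidifier / dehumidifier

class FakeDevice:
    """Stand-in for a real IoT device client."""
    def __init__(self, name: str):
        self.name = name
    def set(self, value: float) -> None:
        print(f"{self.name} -> {value}")

def apply(info: EnvironmentCreationInfo, devices: dict) -> None:
    """Dispatch each populated field to the matching IoT device."""
    if info.illuminance_lux is not None:
        devices["light"].set(info.illuminance_lux)
    if info.temperature_c is not None:
        devices["aircon"].set(info.temperature_c)
    if info.humidity_pct is not None:
        devices["humidifier"].set(info.humidity_pct)

devices = {k: FakeDevice(k) for k in ("light", "aircon", "humidifier")}
apply(EnvironmentCreationInfo(illuminance_lux=30, temperature_c=22.5), devices)
```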
• the reception control unit 416 may be composed of one or more cores, such as a central processing unit (CPU) or general-purpose graphics processing unit (GPGPU) of a computing device, and may include processors for data analysis and deep learning, such as a tensor processing unit (TPU).
  • the reception control unit 416 may read the computer program stored in the memory 412 and perform data processing for machine learning according to an embodiment of the present invention.
  • the reception control unit 416 can perform calculations for learning a neural network.
• the reception control unit 416 can perform calculations for neural network learning, such as processing input data for learning in deep learning (DL), extracting features from input data, calculating errors, and updating the weights of the neural network using backpropagation.
  • At least one of the CPU, GPGPU, and TPU of the reception control unit 416 may process learning of the network function.
  • CPU and GPGPU can work together to process learning of network functions and data classification using network functions.
  • the processors of a plurality of computing devices can be used together to process learning of network functions and data classification using network functions.
  • a computer program executed in a computing device according to an embodiment of the present invention may be a CPU, GPGPU, or TPU executable program.
  • the reception control unit 416 may read the computer program stored in the memory 412 and provide a sleep analysis model according to an embodiment of the present invention. According to an embodiment of the present invention, the reception control unit 416 may perform calculations to calculate environmental composition information based on sleep state information. According to an embodiment of the present invention, the reception control unit 416 may perform calculations to learn a sleep analysis model.
  • the reception control unit 416 can typically process the overall operation of the sleep environment control device 400.
• the reception control unit 416 can process signals, data, information, etc. input or output through the components discussed above, or run an application program stored in the memory 412, to provide or process appropriate information or functions for the user terminal.
  • the reception control unit 416 may obtain sound information related to the space where the user sleeps. Acquisition of sound information according to an embodiment of the present invention may be acquiring or loading sound information stored in the memory 412. Additionally, acquisition of acoustic information may involve receiving or loading data from another storage medium, another computing device, or a separate processing module within the same computing device based on wired/wireless communication means.
  • the reception control unit 416 may obtain sleep sound information from environmental sensing information.
  • the environmental sensing information may be acoustic information obtained during the user's daily life.
  • environmental sensing information may include various sound information acquired according to the user's life, such as sound information related to cleaning, sound information related to cooking food, and sound information related to watching TV.
  • the reception control unit 416 may identify a singularity in which information of a preset pattern is sensed in environmental sensing information.
  • the preset pattern information may be related to breathing and movement patterns related to sleep. For example, in the awake state, all nervous systems are activated, so breathing patterns may be irregular and body movements may be frequent. Additionally, breathing sounds may be very low because the neck muscles are not relaxed. On the other hand, when the user sleeps, the autonomic nervous system stabilizes, breathing changes regularly, body movements may decrease, and breathing sounds may become louder.
  • the reception control unit 416 may identify the point in time at which sound information of a preset pattern related to regular breathing, small body movement, or small breathing sounds, etc., is detected as a singular point in the environmental sensing information. Additionally, the reception control unit 416 may acquire sleep sound information based on environmental sensing information obtained based on the identified singularity. The reception control unit 416 may identify a singularity related to the user's sleep time from the environmental sensing information acquired in time series and obtain sleep sound information based on the singularity.
  • the reception control unit 416 may identify a singularity related to the point in time when a preset pattern is identified from environmental sensing information. Additionally, the reception control unit 416 may acquire sleep sound information based on the identified singular point and sound information acquired after the singular point.
  • the reception control unit 416 can extract and obtain only sleep sound information from a vast amount of sound information by identifying peculiarities related to the user's sleep from environmental sensing information.
  • This provides convenience by allowing users to automate the process of recording their sleep time, and can also contribute to improving the accuracy of acquired sleep sound information.
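• One plausible way to detect such a singular point is sketched below under assumed window lengths and thresholds (the patent does not specify numerical values): look for the onset of a regular breathing-rate periodicity in the audio energy envelope.

```python
# Sketch: flag the singular point as the first time the energy envelope shows
# strong periodicity in the typical breathing range (10-30 breaths per minute).
# All window sizes and the 0.5 threshold are illustrative assumptions.
import numpy as np

HOP_S = 0.1  # envelope resolution in seconds

def energy_envelope(audio: np.ndarray, sr: int) -> np.ndarray:
    hop = int(sr * HOP_S)
    frames = audio[: len(audio) // hop * hop].reshape(-1, hop)
    return (frames ** 2).mean(axis=1)

def breathing_regularity(env: np.ndarray) -> float:
    """Peak normalized autocorrelation at lags of 2-6 s (10-30 breaths/min)."""
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    ac /= ac[0] + 1e-12
    return float(ac[int(2 / HOP_S): int(6 / HOP_S)].max())

def find_singular_point(audio: np.ndarray, sr: int,
                        win_s: float = 60.0, threshold: float = 0.5):
    """Return the first time (s) whose window shows regular breathing, else None."""
    env = energy_envelope(audio, sr)
    win = int(win_s / HOP_S)
    for start in range(0, len(env) - win + 1, win):
        if breathing_regularity(env[start:start + win]) > threshold:
            return start * HOP_S
    return None  # no singular point identified: user presumed still awake
```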
  • the reception control unit 416 may calculate sleep state information based on sound information. Specifically, the reception control unit 416 may calculate sleep state information based on the user's sleeping sound information obtained through the sound collection unit 414.
  • sleep state information may include information related to whether the user is sleeping.
  • the sleep state information may include at least one of first sleep state information indicating that the user is before sleep, second sleep state information indicating that the user is sleeping, and third sleep state information indicating that the user is after sleep.
  • This sleep state information may be obtained based on sleep sound information.
  • Sleep sound information may include sound information acquired during the user's sleep in a space where the user is located in a non-contact manner.
• the reception control unit 416 may obtain sleep state information related to whether the user is before sleep or in sleep based on a singular point identified from the sound information. Specifically, if a singular point is not identified, the reception control unit 416 may determine that the user is before sleep, and if a singular point is identified, it may determine that the user is sleeping after the singular point. In addition, after the singular point is identified, the reception control unit 416 can identify a time point (e.g., a waking-up time) at which the preset pattern is no longer observed, and when that time point is identified, determine that the user has woken up after sleeping.
• that is, the reception control unit 416 can obtain sleep state information related to whether the user is before sleep, in sleep, or awake, based on whether a singular point is identified in the sound information and whether the preset pattern continues to be detected after the singular point is identified.
  • the reception control unit 416 may generate environment creation information based on sensing information and sleep state information. Specifically, the reception control unit 416 may generate environmental composition information based on sensing information obtained through the sensor unit 413 and sleep state information obtained as a result of sound analysis. The reception control unit 416 generates environment creation information based on the sensing information and sleep state information, and transmits the generated environment creation information to the environment creation unit 415, thereby controlling the sleep environment change operation of the environment creation unit 415. You can.
  • the reception control unit 416 may generate environmental composition information based on sleep state information.
• the reception control unit 416 may generate first environment creation information based on the first sleep state information. Specifically, when first sleep state information indicating that the user is before sleep is obtained, the reception control unit 416 may generate first environment creation information based on it. That is, when the user's sleep state is before sleep, the reception control unit 416 may generate first environment creation information to supply a preset white light for a certain period of time.
  • the first environment creation information may be control information for controlling the air purifier to remove fine dust and harmful gases in advance until a predetermined time (e.g., 20 minutes before) before the user sleeps.
• the first environment creation information may include information such as controlling the air purifier to generate noise (white noise) at a level that can induce sleep just before sleep, adjusting the blowing intensity below a preset intensity, or lowering the intensity of the LED lighting.
  • the first environment creation information may include information for controlling the air purifier to perform dehumidification/humidification based on temperature and humidity information in the sleeping space.
  • the time to induce sleep may be determined by the reception control unit 416.
  • the reception control unit 416 may determine the time to induce sleep through information exchange with the user's user terminal 300.
  • the user can create sleep plan information by setting the time he or she wants to sleep and the time he or she wants to wake up through the user terminal 300, and transmit the generated sleep plan information to the reception control unit 416.
  • the sleep plan information may include desired bedtime information and desired wake-up time information.
  • the reception control unit 416 may identify the time to induce sleep based on desired bedtime information.
  • the reception control unit 416 may obtain the user's sleep intention information based on environmental sensing information and determine the time to induce sleep based on the sleep intention information.
  • the reception control unit 416 may obtain sleep intention information based on environmental sensing information. According to one embodiment, the reception control unit 416 may identify the type of sound included in the environmental sensing information. Additionally, the reception control unit 416 may calculate sleep intention information based on the number of types of identified sounds. The reception control unit 416 can calculate the sleep intention information at a lower level as the number of types of sounds increases, and can calculate the sleep intention information higher as the number of types of sounds decreases.
  • the reception control unit 416 can obtain sleep intention information related to how much the user intends to sleep according to the number of types of sounds included in the environmental sensing information. For example, as more types of sounds are identified, sleep intention information indicating that the user's sleep intention is lower (i.e., sleep intention information with a lower score) may be output.
  • the reception control unit 416 may generate an intent score table by pre-matching different intent scores to each of a plurality of sound information.
  • the reception control unit 416 may obtain sleep intention information based on environmental sensing information and an intention score table.
  • the reception control unit 416 may record an intention score matched to the identified sound in response to a time when at least one of the plurality of sounds included in the intention score table is identified in the environmental sensing information.
  • the reception control unit 416 may obtain sleep intention information based on the sum of intention scores obtained during a predetermined period of time (eg, 10 minutes). That is, the reception control unit 416 can obtain sleep intention information related to how much the user intends to sleep according to the characteristics of the sound included in the environmental sensing information. For example, as sounds related to the user's activity are identified, sleep intention information indicating that the user's sleep intention is low (i.e., sleep intention information with a low score) may be output.
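• A minimal sketch of this scoring scheme follows; the sound classes and score values are made-up placeholders, since the patent states only that different intention scores are pre-matched to different sounds:

```python
# Sketch of an intention score table and the 10-minute sliding-window sum.
# Classes and scores below are illustrative placeholders.

INTENT_SCORE_TABLE = {
    "tv": -3,               # activity sounds lower the sleep intention
    "dish_washing": -2,
    "footsteps": -1,
    "bedding_rustle": 2,    # sounds near the bed raise the sleep intention
    "lights_switch_off": 3,
}

def sleep_intention(sound_events: list[tuple[float, str]],
                    now_s: float, window_s: float = 600.0) -> int:
    """Sum the intention scores of sounds identified in the last window_s seconds."""
    return sum(INTENT_SCORE_TABLE.get(label, 0)
               for t, label in sound_events
               if now_s - window_s <= t <= now_s)

# e.g. TV earlier in the window, then rustling and switching off the lights:
events = [(10.0, "tv"), (500.0, "bedding_rustle"), (550.0, "lights_switch_off")]
print(sleep_intention(events, now_s=600.0))  # -> 2 (-3 + 2 + 3)
```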
  • the reception control unit 416 may determine the time to induce sleep based on sleep intention information. Specifically, the reception control unit 416 may identify the time when sleep intention information exceeds a predetermined threshold score as the sleep induction time. That is, when high sleep intention information is obtained, the reception control unit 416 can identify this as an appropriate time for sleep induction, that is, a sleep induction time.
• the reception control unit 416 may calculate sleep intention weighting information based on sensing information acquired through the sensor unit 413. Specifically, when the reception control unit 416 identifies through the second sensor unit that the user is located in the preset area after the user's movement has occurred in the space according to the first sensor unit, it can determine that the user has a high intention to sleep and, correspondingly, calculate sleep intention weighting information related to 1. When the reception control unit 416 detects through the first and second sensor units that no movement of the user occurs within the space or preset area and that the user is not located there, it can determine that the user does not intend to sleep and, correspondingly, calculate sleep intention weighting information related to 0.
• in other words, when the reception control unit 416 detects through the sensor unit 413 that the user is located in a specific space (e.g., the bed space), it calculates sleep intention weighting information related to 1, and when it detects that the user is not located in the specific space, it calculates sleep intention weighting information related to 0. That is, the reception control unit 416 may calculate sleep intention weighting information of 0 or 1 depending on whether the user is located in the space and the preset area.
  • the reception control unit 416 may determine the time to induce sleep based on sensing information and sleep state information. Specifically, the reception control unit 416 may determine the time to induce sleep based on the sensing information obtained through the sensor unit 413 and the sleep state information obtained as a result of sound analysis.
  • the reception control unit 416 may determine the time to induce sleep based on sleep intention information calculated based on environmental sensing information and sleep intention weighting information calculated through sensing information. For example, final sleep intention information may be obtained through sleep intention information and sleep intention weighting information, and the time when the final sleep intention information exceeds a certain threshold may be determined as the sleep induction time point.
• the reception control unit 416 may calculate the final sleep intention information by multiplying the sleep intention information and the sleep intention weighting information. For a specific example, if the sleep intention information calculated based on environmental sensing information is '9' and the sleep intention weighting information calculated based on the sensing information is '0', the final sleep intention information is calculated as 0, and the reception control unit 416 may determine that it does not exceed a predetermined threshold (e.g., 8).
• Conversely, if the sleep intention information is '9' and the sleep intention weighting information is '1', the final sleep intention information is calculated as 9, and the reception control unit 416 may determine that it exceeds the predetermined threshold (e.g., 8) and determine that point in time as the time to induce sleep.
• the final sleep intention information may change depending on whether the user is located in a certain location. For example, even if high sleep intention information (e.g., 10) is calculated based on environmental sensing information, the final sleep intention information becomes 0 if the user is not located in that location, and it may ultimately be determined that the user's sleep intention is low.
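• The worked example above reduces to the following check (a sketch; the threshold 8 and the scores 9, 0, and 1 come from the text):

```python
# Final sleep intention = sleep intention x weighting (0 or 1), compared
# against the predetermined threshold from the example above.
THRESHOLD = 8

def final_sleep_intention(intention: float, in_preset_area: bool) -> float:
    weight = 1 if in_preset_area else 0  # from the second sensor unit
    return intention * weight

assert final_sleep_intention(9, in_preset_area=False) == 0         # no induction
assert final_sleep_intention(9, in_preset_area=True) > THRESHOLD   # induce sleep
```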
• the reception control unit 416 can determine the timing of inducing the user to sleep. Accordingly, when the first sleep state information indicating that the user is before sleep is acquired, the reception control unit 416 can generate first environment creation information for adjusting the light (3000K white light supplied at an illuminance of 30 lux) from the sleep induction point to the point in time when the second sleep state information is acquired.
• the reception control unit 416 can generate first environment creation information for adjusting the light from the time when the user is predicted to be preparing for sleep (e.g., the sleep induction time) to the time when the user falls asleep (i.e., the point at which the second sleep state information is acquired), and decide to transmit the first environment creation information to the environment creation unit 415. Accordingly, 3000K white light can be supplied at an illuminance of 30 lux from 20 minutes before the user falls asleep (e.g., from the sleep induction time) until the moment the user falls asleep. This light is favorable to melatonin secretion before the user falls asleep and can improve the user's sleep efficiency by encouraging the user to fall asleep naturally.
  • the reception control unit 416 may generate second environment creation information based on second sleep state information.
• the second environment creation information may be control information that minimizes the illuminance to create a dark room environment without light. That is, when the user's sleep state is in-sleep, the reception control unit 416 can minimize the illuminance to create a dark room environment without light. This is because, if there is interference from light during sleep, the likelihood of fragmented sleep increases, making it difficult to get a good night's sleep.
• the second environment creation information may be, based on the second sleep state information, control information for controlling the air purifier to turn off its LED, operate with noise below a preset level, adjust the blowing intensity below a preset intensity, set the blowing temperature within a preset range, or maintain the humidity of the sleeping space at a predetermined level. Users can be induced to sleep by airflow and white noise in a sleeping space from which fine dust and harmful gases have been removed just before sleep, and after falling asleep, they can get a good night's sleep with optimally controlled temperature and humidity.
• when the reception control unit 416 detects that the user has entered sleep (or a sleep stage), that is, when it obtains the second sleep state information, it can generate control information that prevents light from being supplied, namely the second environment creation information. Accordingly, the probability of the user having a deep sleep increases and the quality of sleep can be improved.
  • the reception control unit 416 may generate third environment creation information based on the wake-up induction point.
  • the third environment creation information may be control information related to gradually increasing the illumination intensity starting 30 minutes before the user wakes up (i.e., the time of inducing the user to wake up).
• the wake-up induction time may be determined based on the predicted wake-up time.
  • the wake-up induction time may be determined based on desired wake-up time information.
  • the desired wake-up time information may be information about the user's desired wake-up time.
  • the desired wake-up time information may be obtained through information exchange with the user's user terminal 300.
  • the user can set the time when he wants to sleep and when he wants to wake up through the user terminal 300 and transmit the information to the reception control unit 416.
  • the reception control unit 416 may obtain desired wake-up time information based on the wake-up time set by the user of the user terminal 300.
• the wake-up induction timing may be determined based on the predicted wake-up timing.
  • the wake-up prediction time may be determined based on the sleep entry time identified through the second sleep state information.
• the reception control unit 416 can identify the user's sleep entry point through the second sleep state information indicating that the user is sleeping, and may determine the predicted wake-up time based on the sleep entry point determined through the second sleep state information.
  • the wake-up prediction time may be determined based on the user's sleep stage information. For example, a user may wake up most refreshed if he or she wakes up in the REM stage. During one night's sleep, the user can have sleep cycles in the order of light sleep, deep sleep, light sleep, and REM sleep, and can wake up most refreshed when waking up in the REM sleep stage.
  • the reception control unit 416 can determine the predicted wake-up time of the user through sleep stage information related to the user's sleep stage. For a specific example, the reception control unit 416 may determine the time when the user changes from the REM stage to another sleep stage as the wake-up prediction time through sleep stage information. That is, the reception control unit 416 can determine the predicted wake-up time based on information on the sleep stage at which the user can wake up most refreshed (i.e., REM sleep stage).
  • the reception control unit 416 may determine the predicted wake-up time of the user based on at least one of sleep plan information, sleep entry time, and sleep stage information acquired from the user terminal.
• the reception control unit 416 may determine the wake-up induction time based on the corresponding predicted wake-up time. For example, the reception control unit 416 may determine a time point 30 minutes prior to the time the user wants to wake up as the time point for inducing the user to wake up.
• the reception control unit 416 can determine the predicted wake-up time at which the user is expected to wake up, identify the wake-up induction time, and generate third environment creation information to supply 3000K white light by gradually increasing the illuminance from 0 lux to 250 lux from the wake-up induction time to the wake-up time (e.g., until the user actually wakes up).
  • the reception control unit 416 may decide to transmit the corresponding third environment creation information to the environment creation unit 415.
• the environment creation unit 415 may perform a light-related adjustment operation in the space where the user is located based on the third environment creation information. For example, the environment creation unit 415 may gradually increase the 3000K white light from 0 lux to 250 lux starting 30 minutes before waking up.
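• A sketch of this ramp follows (the linear shape is an assumption; the patent says only that the illuminance increases gradually over the 30 minutes before waking):

```python
# Target illuminance of the 3000 K white light during the wake-up ramp:
# 0 lux at the wake-up induction point, 250 lux at the predicted wake-up time.
RAMP_MINUTES = 30
MAX_LUX = 250

def wake_ramp_lux(minutes_since_induction: float) -> float:
    frac = min(max(minutes_since_induction / RAMP_MINUTES, 0.0), 1.0)
    return MAX_LUX * frac

assert wake_ramp_lux(0) == 0.0      # wake-up induction point
assert wake_ramp_lux(15) == 125.0   # halfway through the ramp
assert wake_ramp_lux(30) == 250.0   # predicted wake-up time
```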
  • the third environment creation information may include information for controlling the air purifier to induce waking up by lowering the blowing intensity and noise at the time of waking up.
  • the third environment creation information may include control information for controlling the air purifier to generate white noise to gradually induce waking up.
  • the third environment creation information may include control information for controlling the air purifier to maintain the noise of the air purifier below a preset level after waking up.
• the third environment creation information may include control information for controlling the air purifier in conjunction with the predicted wake-up time and the recommended wake-up time.
  • sleep stage information may be obtained through a sleep analysis model that analyzes the user's sleep stage based on sound information (i.e., sleep sound information) acquired during sleep. That is, the sleep stage information of the present invention can be obtained through a sleep analysis model.
  • the reception control unit 416 can obtain environmental sensing information and obtain sleep sound information based on the corresponding sound information.
• sleep sound information is information related to sounds acquired during the user's sleep, and may include, for example, sounds generated as the user tosses and turns during sleep, sounds related to muscle movement, or sounds related to the user's breathing during sleep.
  • the reception control unit 416 may perform preprocessing on sleep sound information. Preprocessing for sleep sound information may be preprocessing for noise removal. Specifically, the reception control unit 416 may classify sleep sound information into one or more sound frames having a predetermined time unit. Additionally, the reception control unit 416 may identify the minimum sound frame with the minimum energy level based on the energy level of each of one or more sound frames. The reception control unit 416 may perform noise removal on sleep sound information based on the minimum sound frame.
• the reception control unit 416 may classify 30 seconds of sleep sound information into one or more sound frames of a very short 40 ms size. Additionally, the reception control unit 416 may identify the minimum sound frame with the minimum energy level by comparing the energy levels of each of the plurality of 40 ms sound frames. The reception control unit 416 may remove the identified minimum sound frame component from the entire sleep sound information (i.e., the 30 seconds of sleep sound information). As the minimum sound frame component is removed from the sleep sound information, preprocessed sleep sound information can be obtained. That is, the reception control unit 416 can perform preprocessing for noise removal by identifying the minimum sound frame as a background noise frame and removing it from the original signal (i.e., the sleep sound information).
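• One plausible reading of "removing the minimum sound frame component" is simple spectral subtraction, sketched below; the spectral-subtraction step itself is an assumption, since the patent describes only the frame splitting and the minimum-energy identification:

```python
# Sketch: split the 30 s signal into 40 ms frames, take the minimum-energy
# frame as a background-noise estimate, and subtract its magnitude spectrum
# from every frame.
import numpy as np

def denoise(audio: np.ndarray, sr: int, frame_s: float = 0.040) -> np.ndarray:
    n = int(sr * frame_s)
    frames = audio[: len(audio) // n * n].reshape(-1, n)
    energies = (frames ** 2).sum(axis=1)
    noise_mag = np.abs(np.fft.rfft(frames[np.argmin(energies)]))  # noise estimate
    cleaned = []
    for f in frames:
        spec = np.fft.rfft(f)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # subtract noise magnitude
        cleaned.append(np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=n))
    return np.concatenate(cleaned)
```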
  • the reception control unit 416 may generate a spectrogram (SP) in response to the sleep sound information (SS), as shown in FIG. 6A.
  • sleep sound information (SS) may mean preprocessed sleep sound information. That is, the reception control unit 416 may generate information or a spectrogram including changes in the frequency components of the sleep sound information along the time axis in response to the preprocessed sleep sound information.
  • the spectrogram that the reception control unit 416 generates in response to the sleep sound information (SS) may include a Mel spectrogram.
  • the reception control unit 416 may obtain a Mel-Spectrogram through a Mel-Filter Bank for the spectrogram.
  • the parts of the human cochlea that vibrate may differ depending on the frequency of voice data.
  • the human cochlea has the characteristic of detecting frequency changes well in low frequency bands and having difficulty detecting frequency changes in high frequency bands. Accordingly, a Mel spectrogram can be obtained from the spectrogram using a Mel filter bank so as to have a recognition ability similar to the characteristics of the human cochlea for voice data.
• the Mel filter bank may apply narrow, densely spaced filters in low frequency bands and progressively wider filters toward higher frequencies.
  • the reception control unit 416 can obtain a Mel spectrogram by applying a Mel filter bank to the spectrogram to recognize voice data similar to the characteristics of the human cochlea.
  • the Mel spectrogram may include frequency components that reflect human hearing characteristics. That is, in the present invention, the spectrogram generated in response to sleep sound information and subject to analysis using a neural network may include the Mel spectrogram described above.
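• For illustration, the spectrogram-to-Mel-spectrogram step could be implemented with librosa as below; the STFT and filter-bank parameters are common defaults, not values from the patent:

```python
# Sketch: Mel spectrogram of sleep sound information via a Mel filter bank.
import numpy as np
import librosa

def sleep_sound_to_mel(audio: np.ndarray, sr: int = 16000) -> np.ndarray:
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr,
        n_fft=1024, hop_length=256,  # STFT resolution (assumed values)
        n_mels=64,                   # number of Mel filters (assumed value)
    )
    # Log scaling, mirroring the ear's roughly logarithmic loudness perception
    return librosa.power_to_db(mel, ref=np.max)
```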
  • the reception control unit 416 may obtain sleep stage information by processing information or a spectrogram (SP) along the time axis of the frequency components of the sleep sound information as input to the sleep analysis model.
  • the sleep analysis model is a model for obtaining sleep stage information related to changes in the user's sleep stage, and can output sleep stage information by inputting sleep sound information acquired during the user's sleep.
  • the sleep analysis model may include a neural network model constructed through one or more network functions.
  • the reception control unit 416 may obtain information or a spectrogram including changes in the frequency components of the sleep sound information along the time axis based on the sleep sound information.
  • conversion of the sleep sound information into information or a spectrogram containing changes along the time axis of the frequency components may be intended to facilitate analysis of breathing or movement patterns related to relatively small sounds.
• the reception control unit 416 can generate sleep stage information by utilizing a sleep analysis model including a feature extraction model and a feature classification model, based on information including changes along the time axis of the frequency components of the acquired sleep sound information, or on a spectrogram.
• since the sleep analysis model takes as input information or spectrograms that include changes along the time axis of the frequency components of sleep sound information corresponding to multiple epochs, sleep stage prediction can be performed with both past and future information taken into account, so more accurate sleep stage information can be output.
  • the reception control unit 416 may output sleep stage information or sleep stage probability information corresponding to sleep sound information using the sleep analysis model described above.
  • sleep stage information may be information related to sleep stages that change during the user's sleep.
  • the reception control unit 416 may perform data augmentation based on preprocessed sleep sound information.
  • This data augmentation is intended to enable the sleep analysis model to robustly output sleep state information (e.g., sleep stage information) even in sounds measured in various domains (e.g., different bedrooms, different microphones, different placement locations, etc.).
  • data augmentation may include at least one of pitch shifting, gaussian noise, loudness control, dynamic range control, and spec augmentation.
  • the reception control unit 416 may perform data augmentation related to pitch shifting based on sleep sound information.
  • the reception control unit 416 may perform data augmentation by adjusting the pitch of the sound, such as raising or lowering the pitch of the sound at predetermined intervals.
• in addition to pitch shifting, the reception control unit 416 can perform Gaussian noise augmentation, which augments data through noise-related correction; loudness control, which augments data by correcting the sound so that perceived sound quality is maintained even when the volume is changed; dynamic range control, which augments data by adjusting the dynamic range, the logarithmic ratio in dB between the maximum and minimum amplitude of the sound; and SpecAugment, which augments data by masking portions of the spectrogram.
• through data augmentation of the sound information (i.e., sleep sound information) on which the analysis of the present invention is based, the reception control unit 416 enables the sleep analysis model to perform robust recognition of sleep sounds acquired in various environments, thereby improving the accuracy of sleep stage prediction.
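• The augmentations above might be sketched as follows (parameter ranges are illustrative assumptions; dynamic range control is omitted for brevity):

```python
# Sketch: waveform-level augmentations (pitch shift, Gaussian noise, loudness)
# plus a SpecAugment-style time/frequency mask on the spectrogram.
import numpy as np
import librosa

def augment_waveform(y: np.ndarray, sr: int, rng: np.random.Generator) -> np.ndarray:
    y = librosa.effects.pitch_shift(y=y, sr=sr, n_steps=rng.uniform(-1, 1))  # pitch shifting
    y = y + rng.normal(0.0, 0.005, size=y.shape)   # gaussian noise
    return y * rng.uniform(0.5, 1.5)               # loudness control

def spec_augment(spec: np.ndarray, rng: np.random.Generator,
                 max_f: int = 8, max_t: int = 16) -> np.ndarray:
    spec = spec.copy()
    f0 = int(rng.integers(0, spec.shape[0] - max_f))   # frequency mask start
    spec[f0:f0 + int(rng.integers(1, max_f)), :] = spec.mean()
    t0 = int(rng.integers(0, spec.shape[1] - max_t))   # time mask start
    spec[:, t0:t0 + int(rng.integers(1, max_t))] = spec.mean()
    return spec
```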
  • the reception control unit 416 may obtain fourth environment creation information based on third sleep state information.
• since the fourth environment creation information is the same as what was described in relation to the operation of the processor 110 in the embodiment of FIG. 1F, redundant description will be omitted.
• the reception control unit 416 may decide to transmit environment creation information to the environment creation unit 415. Specifically, the reception control unit 416 may generate environment creation information related to illuminance adjustment and decide to transmit it to the environment creation unit 415, thereby controlling the illuminance adjustment operation of the environment creation unit 415.
  • light or air quality may be one of the representative factors that can affect the quality of sleep. For example, depending on the light intensity, color, exposure level, etc., it can have a good or bad effect on the quality of sleep.
  • the quality of sleep is greatly influenced by the type/concentration of fine dust, the type/concentration of harmful gases, the presence or absence of allergic substances, and the temperature or humidity of the air.
  • the reception control unit 416 can improve the user's sleep quality by adjusting the illumination level or air quality.
  • the reception control unit 416 can monitor the situation before or after falling asleep, and adjust the illumination level accordingly to effectively wake the user. That is, the reception control unit 416 can determine the sleep state (eg, sleep stage) and automatically adjust the illumination level or air quality to maximize the quality of sleep.
  • the reception control unit 416 may receive sleep plan information from the user terminal 300.
  • the reception control unit 416 may generate external environment creation information based on the received sleep plan information.
• the reception control unit 416 receives sleep plan information from the user terminal 300 and, based on this, may generate the first environment creation information for controlling the air purifier from the time when the user is predicted to be preparing for sleep (e.g., the sleep induction time) to the time when the user falls asleep (i.e., the point at which the second sleep state information is acquired).
• the reception control unit 416 can determine the time when the user enters sleep through the second sleep state information, and generate second environment creation information based on this.
  • the reception control unit 416 may generate environmental composition information based on sleep stage information.
  • the sleep stage information may include information about changes in the user's sleep stage acquired in time series through analysis of sleep sound information.
• when the reception control unit 416 identifies through the user's sleep stage information that the user has entered a sleep stage (e.g., light sleep), it can generate environment creation information that minimizes the illuminance to create a dark room environment without light.
  • the user's sleep efficiency can be improved by creating an optimal sleep environment, that is, optimal illumination for each user's sleep stage.
• the reception control unit 416 may generate environment creation information to provide appropriate illumination according to changes in the user's sleep stage during sleep. For example, more diverse environment creation information depending on the change in sleep stage can be generated, such as supplying faint red light when changing from light sleep to deep sleep, or lowering the illuminance or supplying blue light when changing from REM sleep to light sleep.
• the reception control unit 416 may identify the user's desired wake-up time through the sleep plan information, generate a predicted wake-up time based on the desired wake-up time, and generate environment creation information accordingly.
• the reception control unit 416 may decide to transmit environment creation information to the environment creation unit 415. That is, the reception control unit 416 generates, based on the sleep plan information, environment creation information that allows the user to fall asleep easily or wake up naturally, and controls the environment composition operation of the environment creation unit 415 with this information, thereby improving the user's sleep quality.
• the reception control unit 416 may generate recommended sleep plan information based on sleep stage information. Specifically, the reception control unit 416 can obtain information about changes in the user's sleep stage (e.g., the sleep cycle) through sleep stage information, and set the expected wake-up time based on this information.
• the reception control unit 416 may determine to generate environment creation information according to the recommended sleep plan information and transmit it to the environment creation unit 415. Accordingly, the user can wake up naturally according to the recommended sleep plan information. Since the wake-up time recommended by the reception control unit 416 follows the change in the user's sleep stage, it may be the time when the user's fatigue is minimized, which has the advantage of improving the user's sleep efficiency.
• the reception control unit 416 may update the environment creation information by comparing the user's actual wake-up time with the desired wake-up time information. Specifically, the reception control unit 416 may utilize the second sensor unit to generate actual wake-up time information related to the user's actual wake-up time. For example, as the user wakes up and leaves the bed area (e.g., the preset area), the wireless link that had been altered by the user's body is restored, and the second sensor unit detects this change in signal level, so that the actual time the user leaves the bed after waking up (i.e., the actual wake-up time) can be accurately detected. That is, the second sensor unit can generate actual wake-up time information by recording the time the user leaves the preset area.
  • the reception control unit 416 may compare desired wake-up time information and actual wake-up time information, and if the information is different as a result of the comparison, the environment creation information may be updated.
  • the actual wake-up time information compared with the desired wake-up time information may include information about the actual wake-up time accumulated a certain number of times or more.
  • actual wake-up time information may include information about when the user actually woke up during the week.
  • the reception control unit 416 may update environmental composition information by analyzing the difference between the desired wake-up time and the accumulated actual wake-up time. Specifically, when the actual wake-up time is later than the desired wake-up time, the reception control unit 416 may gradually increase the maximum brightness of the white light supplied at the time of waking up to advance the user's circadian rhythm. For example, on the next day when the actual wake-up time is later than the desired wake-up time, the environment creation information can be updated so that the maximum brightness of white light is supplied higher than the previous day in response to the user's wake-up time.
  • the reception control unit 416 may reduce the maximum brightness of the white light supplied at the waking time to delay the user's waking up time. For example, on the next day when the actual wake-up time is earlier than the desired wake-up time, the environment creation information can be updated so that the maximum brightness of white light is supplied lower than the previous day in response to the user's wake-up time. That is, the reception control unit 416 can compare the user's actual waking up time and the desired waking up time, and update environmental composition information to change the user's circadian rhythm according to the comparison result. Accordingly, a sleep environment optimized for the user can be created, and sleep efficiency can be further increased.
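• A sketch of this update rule follows (the 10 % brightness step is an assumed value; the patent specifies only the direction of the adjustment):

```python
# Raise next morning's peak brightness if the user woke later than desired
# (advance the circadian rhythm), lower it if the user woke earlier (delay it).
def update_wake_brightness(max_lux: float, desired_min: int, actual_min: int,
                           step: float = 0.10) -> float:
    """Wake-up times are given as minutes since midnight."""
    if actual_min > desired_min:
        return max_lux * (1 + step)   # woke up late: brighten
    if actual_min < desired_min:
        return max_lux * (1 - step)   # woke up early: dim
    return max_lux

# e.g. desired 07:00 (420 min) but actual 07:30 (450 min): brighten next day
assert round(update_wake_brightness(250, 420, 450), 6) == 275.0
```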
• the reception control unit 416 can collect sound information by driving the sound collection unit through at least one of the manual sleep measurement mode and the automatic sleep measurement mode, and can calculate sleep state information based on the collected sound information.
  • the manual sleep measurement mode may mean that the measurement mode is manually initiated as a sleep input signal is generated by the user.
  • the user can generate a sleep input signal by applying physical pressure to the sleep input button formed on the outer surface of the sleep environment control device 400, or can generate a sleep input signal by utilizing the user terminal.
• the manual sleep measurement mode allows the user to directly determine when to start measuring his or her sleep state.
  • the automatic sleep measurement mode may mean that sleep measurement is automatically initiated without the need for a separate user action to generate a sleep input signal.
• the automatic sleep measurement mode may be characterized in that the measurement mode starts automatically when the user's movement within a space is detected through the first sensor unit and the user is identified as being located in a preset area through the second sensor unit. A detailed description of the automatic sleep measurement mode will be described later with reference to FIG. 13.
  • Figure 13 is a flowchart illustrating a process for obtaining sleep state information through a sleep measurement mode of an environment creation device according to an embodiment of the present invention.
  • the order of the steps shown in FIG. 13 described above may be changed as needed, and at least one step may be omitted or added. That is, the above-described steps are only one embodiment of the present invention, and the scope of the present invention is not limited thereto.
  • the reception control unit 416 can detect the user's movement within a space through the first sensor unit (S1100).
  • the first sensor unit may include at least one of a PIR sensor and an ultrasonic sensor.
  • the reception control unit 416 may identify that the user is located in a preset area through the second sensor unit (S1200).
  • the second sensor unit may receive a wireless signal transmitted from the transmission module 420 and detect whether the user is located in a preset area based on the received wireless signal.
  • the second sensor unit may be provided at a position opposite to the transmission module 420 based on a preset area.
  • the transmission module 420 and the second sensor unit may be provided on both sides of the bed where the user sleeps.
• the sleep environment control device 400 of the present invention can obtain object state information, that is, information about whether the user is located in a preset area and about the user's movement or breathing, based on the WiFi-based OFDM signal transmitted and received through the transmitting module 420 and the receiving module 410.
• the reception control unit 416 may collect sound information related to the space by driving the sound collection unit 414 (S1300). That is, when the reception control unit 416 detects through the first sensor unit that the user's movement occurs in the space and identifies through the second sensor unit that the user is located in the preset area, it can automatically drive the sound collection unit 414 to collect sound information related to the space.
  • the reception control unit 416 may calculate sleep state information based on the collected sound information (S1400).
  • the reception control unit 416 may obtain sleep state information related to whether the user is before sleep or in sleep based on the singularity identified from the sound information. Specifically, if a singular point is not identified, the reception control unit 416 may determine that the user is before sleeping, and if a singular point is identified, the reception control unit 416 may determine that the user is sleeping after the singular point.
• the reception control unit 416 identifies a time point (e.g., a waking-up time) at which the preset pattern is no longer observed, and when that time point is identified, it can determine that the user has woken up after sleeping.
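• The S1100-S1400 flow of FIG. 13 can be summarized in the sketch below; the four callables are hypothetical placeholders for the sensor units and the sound-analysis steps:

```python
# Sketch of the automatic sleep measurement flow (FIG. 13, S1100-S1400).
def automatic_sleep_measurement(motion_detected, in_preset_area,
                                collect_sound, analyze_sound):
    if not motion_detected():    # S1100: first sensor unit detects movement
        return None
    if not in_preset_area():     # S1200: second sensor unit confirms bed area
        return None
    sound = collect_sound()      # S1300: drive the sound collection unit
    return analyze_sound(sound)  # S1400: calculate sleep state information
```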
  • Figure 14 is a flowchart illustrating a process for creating an environment that induces the user to enter sleep related to an embodiment of the present invention.
  • the order of the steps shown in FIG. 14 described above may be changed as needed, and at least one step may be omitted or added. That is, the above-described steps are only one embodiment of the present invention, and the scope of the present invention is not limited thereto.
  • the receiving module 410 may identify the time to induce sleep based on desired bedtime information (S2100).
• For a specific example, the user can create sleep plan information by setting the time he or she wants to sleep and the time he or she wants to wake up through the user terminal 300, and transmit the generated sleep plan information to the receiving module 410.
  • the sleep plan information may include desired bedtime information and desired wake-up time information.
  • the receiving module 410 may identify the time to induce sleep based on desired bedtime information.
  • the receiving module 410 may detect whether the user is located in a preset area at the time of inducing sleep through the second sensor unit (S2200).
  • the second sensor unit may receive a wireless signal transmitted from the transmission module 420 and detect whether the user is located in a preset area based on the received wireless signal.
  • the receiving module 410 may transmit a notification to the user terminal (S2300). Specifically, when sleep induction is imminent, but the user is not located in a preset area, a notification to prepare for sleep may be transmitted to the user terminal.
  • the receiving module 410 may generate first environment composition information to supply a preset white light from the time of sleep induction to the time of sleep onset (S2400). That is, the first environment creation information can be generated only when the user is located in the preset area at the sleep induction time.
  • the receiving module 410 can control the operation of the environment creation unit 415 that performs the environment adjustment operation by generating the first environment creation information only when the user is located in a preset area. Accordingly, the environment creation unit 415 may not perform an operation to change the sleeping environment when the user is not located in a specific location.
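  • As an illustrative sketch of the Fig. 14 flow (S2100 to S2400), assuming a hypothetical 30-minute induction lead time and simplified string-valued actions (neither is specified in the original disclosure):

```python
# Illustrative sketch of the Fig. 14 flow; offset and action names are assumptions.
from datetime import datetime, timedelta

SLEEP_INDUCTION_OFFSET = timedelta(minutes=30)  # assumed lead time before desired bedtime

def run_sleep_induction(desired_bedtime: datetime,
                        user_in_preset_area: bool,
                        now: datetime) -> str:
    induction_time = desired_bedtime - SLEEP_INDUCTION_OFFSET  # S2100
    if now < induction_time:
        return "wait"
    if not user_in_preset_area:                                # S2200
        return "send_prepare_for_sleep_notification"           # S2300
    # S2400: generate first environment composition information
    return "supply_preset_white_light_until_sleep_onset"
```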
  • Figure 15 is a flowchart illustrating a process for changing the user's sleep environment during sleep and immediately before waking up related to an embodiment of the present invention.
  • the order of the steps shown in FIG. 15 described above may be changed as needed, and at least one step may be omitted or added. That is, the above-described steps are only one embodiment of the present invention, and the scope of the present invention is not limited thereto.
  • the receiving module 410 may generate second environment creation information to create a dark room environment without light by minimizing the illuminance (S3100). For example, if there is interference from light during sleep, the likelihood of fragmented sleep increases, making it difficult to get a good night's sleep.
  • when the receiving module 410 detects that the user has entered sleep (i.e., when obtaining second sleep state information), it can generate control information that prevents light from being supplied, that is, second environment creation information. Accordingly, the probability of the user having a deep sleep increases and the quality of sleep can be improved.
  • the receiving module 410 may identify a wake-up induction point based on desired wake-up time information and generate third environment creation information to gradually increase the illuminance of the supplied white light from the wake-up induction point to the desired wake-up time (S3200).
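  • For illustration, one simple realization of this gradual increase is a linear ramp; the endpoint lux values below are assumptions, not values from the original disclosure:

```python
# Illustrative sketch: linearly ramping white-light illuminance from the
# wake-up induction point to the desired wake-up time.
def ramp_illuminance(elapsed_s: float, total_s: float,
                     start_lux: float = 0.0, end_lux: float = 300.0) -> float:
    """Return the target illuminance after `elapsed_s` seconds of a ramp
    lasting `total_s` seconds (clamped to the final value)."""
    fraction = min(max(elapsed_s / total_s, 0.0), 1.0)
    return start_lux + (end_lux - start_lux) * fraction
```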
  • Figure 11 is a conceptual diagram for explaining the operation of the environment creation device according to the present invention.
  • Figure 11 (a) is a schematic diagram in which the environment creation device 30 of Figure 1F is implemented as an air purifier 500, and Figure 11 (b) is a schematic diagram of how the air purifier 500 operates in conjunction with the user terminal 300.
  • the air purifier 500 can operate in conjunction with the user terminal 300 and the computing device 100.
  • Computing device 100 may include a network unit 180, memory 120, and processor 110 (see FIG. 2C).
  • the network unit 180 transmits and receives data with the user terminal 300, the external server 20, and the air purifier 500.
  • the network unit 180 may transmit and receive data for performing a method of creating a sleep environment according to sleep state information according to an embodiment of the present invention, to other computing devices, servers, etc.
  • the network unit 180 may provide a communication function between the computing device 100, the user terminal 300, the external server 20, and the air purifier 500.
  • the network unit 180 may receive sleep checkup records and electronic health records for multiple users from a hospital server.
  • the network unit 180 may receive environmental sensing information related to the space in which the user operates from the user terminal 300.
  • the network unit 180 may transmit to the air purifier 500 air quality-related environment creation information to adjust the environment of the space where the user is located.
  • the network unit 180 may allow information to be transferred between the computing device 100, the user terminal 300, and the external server 20 by calling a procedure with the computing device 100.
  • the memory 120 may store a computer program for performing a method of creating a sleep environment according to sleep state information according to an embodiment of the present invention, and the stored computer program may be read and driven by the processor 110. Additionally, the memory 120 may store any type of information generated or determined by the processor 110 and any type of information received by the network unit 180. Additionally, the memory 120 may store data related to the user's sleep. For example, the memory 120 may temporarily or permanently store input/output data (e.g., environmental sensing information related to the user's sleeping environment (particularly related to air quality), sleep state information corresponding to the environmental sensing information, or environment creation information according to the sleep state information, etc.).
  • when loaded into the memory 120, the computer program may include one or more instructions that cause the processor 110 to perform methods/operations according to various embodiments of the present invention.
  • the computer program may include one or more instructions for performing a method of creating a sleep environment according to sleep state information, the method including obtaining sleep state information of a user, generating environment creation information based on the sleep state information, and transmitting the environment creation information to an environment creation device.
  • the operation method, hardware configuration, and software configuration of the network unit 180 and memory 120 are the same as described above.
  • the processor 110 may read the computer program stored in the memory 120 and provide a sleep analysis model according to an embodiment of the present invention. According to an embodiment of the present invention, the processor 110 may perform calculations to calculate environmental composition information based on sleep state information. According to one embodiment of the present invention, the processor 110 may perform calculations to learn a sleep analysis model. The specific details of the sleep analysis model are the same as described above.
  • the processor 110 can obtain the user's sleep state information and environmental sensing information, as described above.
  • the processor 110 may generate first environment creation information to nth environment creation information. Specifically, when the user's state is before bedtime, the processor 110 can generate first environment composition information for controlling the air purifier from the time when the user is predicted to be preparing for sleep (e.g., the sleep induction time) until the time when the user falls asleep (i.e., until the second sleep state information is obtained).
  • first environment composition information can be generated to control the air purifier to remove fine dust and harmful gases in advance until a predetermined time (e.g., 20 minutes before) before the user's sleep.
  • the first environment creation information may contain information such as controlling the air purifier to generate noise (white noise) at a level that can induce sleep just before sleep, adjusting the blower intensity below a preset intensity, or lowering the intensity of the LED.
  • the first environment creation information may include information for controlling the air purifier to perform dehumidification/humidification based on temperature and humidity information in the sleeping space.
  • the processor 110 can generate second environment creation information to control the air purifier so as to turn off the LED of the air purifier, operate the air purifier with noise below a preset level, adjust the blowing intensity to below a preset intensity, keep the blowing temperature within a preset range, or maintain the humidity of the sleeping space at a predetermined level.
  • the processor 110 may generate third environment creation information and fourth environment creation information based on the third sleep state information and fourth sleep state information.
  • the processor 110 may determine to transmit environmental composition information to the air purifier 500. That is, the processor 110 can improve the user's sleep quality by generating information on creating an external environment that allows the user to easily fall asleep or wake up naturally when going to bed or waking up.
  • the air purifier 500 according to the present invention can operate in conjunction with the user terminal 300. That is, the system of the embodiment according to (b) of FIG. 11 may include an air purifier 500, a user terminal 300, an external server 20, and a network. In this embodiment, the air purifier 500 according to the present invention includes the configuration of the computing device 100 of FIG. 11 (a) and additional components for operating as an air cleaner.
  • FIG. 12 is a block diagram showing the configuration of an environment creation device according to the present invention.
  • the air purifier 500, which is an example of an environment creation device according to the present invention, may include a network unit 510, a memory 520, a processor 530, a driving unit 540, and a measuring unit 550.
  • the air purifier 500 may be implemented as an air purifying device embedded in the ceiling or exterior wall of a building, apartment, or house, as a fixed air purifier fixed to one side of an indoor space, as a mobile air purifier that can be easily carried and moved, as an in-vehicle air purifier placed in a vehicle, or as a wearable air purifier worn on the body to purify the air quality around the user.
  • the air purifier 500 can be implemented as an air purifier of various types: a dust collection filter type that removes dust using a pretreatment filter and a HEPA filter; an adsorption filter type that adsorbs harmful gases using activated carbon; a wet type that removes dust or harmful gases using water; an electrostatic dust collection type that removes dust using high voltage; a negative ion type that removes dust by generating negative ions at high voltage and supplying them into the air; a plasma type that removes harmful gases by generating positive and negative ions with plasma; and a UV photocatalytic type that removes bad odors and harmful gases through oxidation/reduction by OH radicals and active oxygen generated by ultraviolet irradiation of a TiO2 photocatalyst. It may also be a complex air purifier that employs two or more of these methods in combination.
  • the functions, operations, hardware configuration, and software configuration of the network unit 510, memory 520, and processor 530 of the air purifier 500 are the same as described above.
  • the first to nth environment composition information generated by the processor 530 may be transmitted to the driver 540.
  • the driving unit 540 operates various hardware elements provided in the air purifier 500.
  • the measuring unit 550 may include one or more sensors for sensing air components, illuminance, and condition of air purifier components in the space.
  • the measuring unit may include a dust sensor that detects invisible floating particles such as PM1.0, PM2.5, and PM10; a gas sensor that detects indoor harmful gases or odors; an illuminance sensor that detects indoor illumination; a TVOC sensor that measures the total concentration of over 300 types of volatile organic compounds in indoor air; a CO2 sensor that measures the concentration of carbon dioxide in indoor air; a radon sensor that measures the concentration of radon; a differential pressure sensor that measures the filter differential pressure according to the life of the filter unit to determine the filter replacement time; a temperature sensor that measures the indoor temperature; and the like (one possible container for such readings is sketched below).
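  • As an illustration only, the following sketch groups the readings of the measuring unit 550 into one data structure; the field names and units are assumptions, not part of the original disclosure:

```python
# Illustrative sketch: one possible container for readings from the measuring
# unit 550 described above.
from dataclasses import dataclass

@dataclass
class AirQualityReading:
    pm1_0: float          # ug/m3, dust sensor
    pm2_5: float          # ug/m3
    pm10: float           # ug/m3
    tvoc_ppb: float       # total volatile organic compounds
    co2_ppm: float        # carbon dioxide concentration
    radon_bq_m3: float    # radon concentration
    filter_dp_pa: float   # filter differential pressure (replacement timing)
    temperature_c: float  # indoor temperature
    illuminance_lux: float
```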
  • the air purifier 500 may be composed of a housing equipped with an outlet and an inlet, a filter unit, a blowing fan, a sterilizing unit, a humidifying unit, a heating unit, a cooling unit, a measuring unit, etc.
  • the housing can be designed in various ways depending on the implementation method of the air purifier 500, such as embedded type, fixed type, mobile type, vehicle type, or wearable type.
  • the filter unit can be selected according to the air purification method, such as dust collection filter type, adsorption filter type, wet type, electric dust collection type, anion type, plasma type, and UV photocatalyst type.
  • the blowing fan may be connected to a motor that rotates by power supplied from the power supply.
  • the sterilizing unit has the function of sterilizing the inhaled air using chemical and electrical methods.
  • the humidifying part has the function of humidifying and sending out the sucked air, and the heating part and the cooling part have the function of heating or cooling the sucked air to a predetermined temperature.
  • the hardware elements of the air purifier 500 described above are only an example; some of them may be integrated into one configuration, some configurations may be omitted, and various configurations may be added to perform air cleaning functions not described above.
  • environmental sensing information can be obtained through the user terminal 300.
  • the environmental sensing information may be sleep sound information obtained in the bedroom where the user sleeps.
  • the environmental sensing information may be air quality information in the sleeping space obtained from the measurement unit 550 provided in the air purifier 500.
  • Environmental sensing information acquired through the user terminal 300 or the measurement unit 550 may be the basis for obtaining the user's sleep state information in the present invention.
  • sleep state information related to whether the user is before, during, or after sleep can be obtained through environmental sensing information obtained in relation to the user's activities. Additionally, information related to the air quality around the user before, during, and after sleep can be obtained.
  • the processor 530 may obtain sleep state information based on environmental sensing information obtained through the user terminal 300 and/or the measurement unit 550.
  • the processor 530 may identify a singularity in which information of a preset pattern is sensed in the environmental sensing information.
  • the preset pattern information may be related to breathing and movement patterns related to sleep. For example, in the awake state, all nervous systems are activated, so breathing patterns may be irregular and body movements may be frequent. Additionally, breathing sounds may be very low because the neck muscles are not relaxed.
  • on the other hand, when the user enters sleep, the autonomic nervous system stabilizes, breathing becomes regular, body movements may decrease, and breathing sounds may become louder. That is, the processor 530 may identify, as a singular point, the point in time at which sound information of a preset pattern related to regular breathing, small body movement, or small breathing sounds is detected in the environmental sensing information.
  • the processor 530 may acquire sleep sound information based on environmental sensing information obtained based on the identified singularity.
  • the processor 530 may identify a singularity related to the user's sleep timing from environmental sensing information acquired in time series and obtain sleep sound information based on the singularity.
  • the air quality measured through the measuring unit 550 has a significant impact on the user's sleep.
  • sleep disorders show a statistically significant relationship with air pollution.
  • exposure to PM10 can result in difficulty maintaining sleep, and in particular, it has been confirmed that men are most likely to experience sleep disorders when exposed to PM1.
  • women are most likely to have sleep disorders when exposed to PM1 and PM2.5.
  • wheezing and related sleep disturbances were most likely to occur when SO2 and O3 concentrations were high.
  • Various studies have been conducted on the relationship between AHI (apnea-hypopnea index) and air quality measurements, and although the results differ slightly from study to study, they agree that the correlation between air quality and sleep is very high.
  • the air purifier 500 can acquire sleep state information based on environmental sensing information, generate environmental composition information, and use this to perform operations appropriate for the sleep stage.
  • when the processor 530 of the air purifier 500 determines that the user is in the pre-bedtime state, it can generate the first environment creation information for controlling the air purifier from the time when the user is predicted to be preparing for sleep (e.g., the sleep induction time) until the time when the user falls asleep (i.e., until the second sleep state information is acquired).
  • the first environment composition information may be generated by reflecting the PM concentration, harmful gas concentration, CO2 concentration, SO2 concentration, O3 concentration, humidity, temperature, etc. measured by the measuring unit 550.
  • the first environment creation information may include information that controls the air purifier to remove fine dust and harmful gases in advance until a predetermined time (e.g., 20 minutes) before the user's sleep; information that controls the air purifier to generate noise (white noise) at a level that can induce sleep just before sleep, to adjust the blower intensity below a preset intensity, or to lower the intensity of the LED; and information for controlling the air purifier to perform dehumidification/humidification based on the temperature and humidity information of the sleeping space.
  • the processor 530 can generate second environment composition information, based on the second sleep state information, to control the air purifier so as to turn off its LED, operate with noise below a preset level, adjust the blowing intensity to below a preset intensity, keep the blowing temperature within a preset range, or maintain the humidity of the sleeping space at a predetermined level.
  • in other words, the second environment creation information may be control information for controlling the air purifier, based on the second sleep state information, to turn off the LED, operate with noise below a preset level, adjust the blowing intensity to below a preset intensity, keep the blowing temperature within a preset range, or maintain the humidity of the sleeping space at a predetermined level.
  • Users can be induced to sleep by airflow and white noise in a sleeping space from which fine dust and harmful gases have been removed just before sleep, and after falling asleep, they can enjoy a good night's sleep with optimally controlled temperature and humidity.
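  • For illustration only, one way such stage-dependent control could be expressed is sketched below; the thresholds and action names are assumptions and are not taken from the original disclosure:

```python
# Illustrative sketch mapping sleep state to air purifier actions, following
# the first/second environment composition information described above.
def purifier_actions(state: str, pm2_5: float, humidity: float) -> list[str]:
    actions: list[str] = []
    if state == "before_sleep":            # first environment composition info
        if pm2_5 > 15.0:                   # assumed fine-dust threshold
            actions.append("boost_purification")   # remove fine dust in advance
        actions.append("emit_white_noise")
        actions.append("dim_led")
    elif state == "sleeping":              # second environment composition info
        actions.append("led_off")
        actions.append("quiet_mode")       # blower below preset intensity
        if humidity < 40.0:                # assumed humidity setpoint
            actions.append("humidify")
    return actions
```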
  • the device 100a for generating imagery induction information or the device 200a for providing imagery induction information may be a terminal or a server, and may include any type of device.
  • the device 100a for generating imagery inducing information or the device 200a for providing imagery inducing information can create a sleep analysis model for acquiring sleep state information corresponding to environmental sensing information by performing learning on one or more network functions through a learning data set.
  • the device 100a for generating imagery inducing information or the device 200a for providing imagery inducing information may be a server providing a cloud computing service.
  • it may be a server that provides a cloud computing service that processes information not on the user's computer but on another computer connected to the Internet.
  • a cloud computing service may be a service that stores data on the Internet and allows users to use it anytime, anywhere through an Internet connection without installing necessary data or programs on their own computer, and data stored on the Internet can be easily shared and delivered with simple manipulation and clicks.
  • steps of obtaining environmental sensing information, performing preprocessing on the obtained environmental sensing information, converting acoustic information included in the preprocessed environmental sensing information into a spectrogram, generating sleep state information based on the converted spectrogram, and providing and feeding back imagery induction information based on the generated sleep state information may be performed.
  • in an electronic device, steps of obtaining environmental sensing information, performing preprocessing on the obtained environmental sensing information, converting acoustic information included in the preprocessed environmental sensing information into a spectrogram, and transmitting the converted spectrogram to the server may be performed; and when the server generates sleep state information through learning or inference based on the transmitted spectrogram, a step in which the electronic device receives the sleep state information may be performed.
  • when another electronic device acquires environmental sensing information, converts the acoustic information included in the obtained environmental sensing information into a spectrogram, and generates sleep state information based on the converted spectrogram, an electronic device equipped with environmental sensing, imagery induction information provision, and feedback functions may receive the sleep state information from the other electronic device.
  • the other electronic device refers to a device different from the electronic device equipped with environmental sensing and image inducing information generation and provision functions, and may correspond to one or more other electronic devices.
  • the steps of acquiring environmental sensing information, converting sound information included in the environmental sensing information into a spectrogram, and generating sleep state information may be performed independently.
  • when there is an electronic device equipped with imagery induction information provision and feedback functions, steps may also be performed in which another electronic device acquires environmental sensing information, converts the acoustic information included in the obtained environmental sensing information into a spectrogram, and transmits the converted spectrogram to the server; the server generates sleep state information based on the transmitted spectrogram; and the electronic device equipped with the imagery induction information provision function receives the sleep state information generated by the server.
  • various operations such as acquisition of environmental sensing information, preprocessing of environmental sensing information, conversion into a spectrogram, and generation of sleep state information do not necessarily occur within the same electronic device; the above is an example to explain that they can occur across multiple devices, and can occur in time series, simultaneously, or independently, so the present invention is not limited to the various embodiments described above.
  • Figure 2b is a block diagram showing the configuration of an electronic device 600 according to the present invention.
  • the electronic device 600 shown in FIG. 2B may correspond to the device 100a that generates imagery-inducing information or the device 200a that provides imagery-inducing information, or may include devices corresponding thereto.
  • the electronic device 600 shown in FIG. 2B may correspond to a user terminal, a device that generates and/or provides a sleep image, and/or a device that generates or provides sleep content based on the user's sleep information.
  • the electronic device 600 may include a memory 610, an output unit 620, a processor 630, and an acquisition unit 640, but is not limited thereto.
  • An electronic device may be provided including: a memory 610 in which imagery inducing information is recorded; an output unit 620 that outputs the recorded imagery inducing information; an acquisition unit 640 that acquires sleep state information from the user; means for transmitting the output imagery induction information and the obtained sleep state information to a server; means for receiving, when the server extracts the user's features based on the transmitted imagery induction information and the transmitted sleep state information, the extracted user's features; and means for generating feature-based imagery induction information based on the received user's features.
  • a memory 610 in which image inducing information is recorded; An output unit 620 that outputs the recorded image guidance information; An acquisition unit 640 that acquires sleep state information from the user; means for transmitting the output image induction information and the sleep state information to a server; And when the server extracts the user's features based on the transmitted imagery induction information and the transmitted sleep state information, and the server generates feature-based imagery induction information based on the extracted user's features, the generation An electronic device including means for receiving feature-based image induction information may be provided.
  • An electronic device may also be provided including: a memory 610 in which imagery inducing information is recorded; an output unit 620 that outputs the recorded imagery inducing information; an acquisition unit 640 that acquires sleep state information from the user; a processor 630 that extracts user features based on the output imagery induction information and the acquired sleep state information; means for transmitting the extracted user features to a server; and means for receiving, when the server generates feature-based imagery induction information based on the transmitted features of the user, the generated feature-based imagery induction information.
  • a server device implemented with an imagery induction information generation and provision model may be provided, wherein the model extracts the user's features based on the user's sleep state information acquired through the acquisition unit of the electronic device and the imagery induction information output through the output unit of the electronic device, and generates feature-based imagery induction information based on the extracted user's features.
  • the memory 610 can store, according to an embodiment of the present invention, imagery guidance information based on a lookup table, feature-based imagery guidance information, user-related information input from the user, and a computer program; the stored computer program can be read and driven by the processor 630.
  • the memory 610 may store any type of information generated or determined by the processor 630 and any type of information received from the network.
  • the memory 610 may store data related to the user's sleep.
  • the memory 610 may temporarily or permanently store input/output data.
  • the memory 610 may temporarily or permanently store feature-based imagery induction information generated or determined by the processor 630, but is not limited thereto.
  • imagery inducing information may be recorded in the memory 610.
  • the image guidance information recorded in the memory 610 may be image guidance information based on a lookup table or may be image guidance information based on feature-based image guidance information.
  • a step of preparing imagery guidance information may be performed with respect to the memory 610, and the step of preparing imagery guidance information may include preparing imagery guidance information based on a lookup table or preparing imagery guidance information based on feature-based imagery guidance information.
  • the output unit 620 may output the recorded image induction information.
  • the output unit 620 may output the recorded imagery inducing information as one or more of recorded imagery inducing sound information, recorded imagery inducing visual information, recorded imagery inducing text information, and recorded imagery inducing text sound information, or as a combination of two or more of these.
  • the output unit 620 may output feature-based imagery induction information generated by the processor 630 based on the user's sleep state information.
  • the feature-based imagery inducing information output from the output unit 620 is one or more of feature-based imagery inducing sound information, feature-based imagery inducing visual information, feature-based imagery inducing text information, and feature-based imagery inducing text sound information. It may be a combination of two or more of these, but is not limited thereto.
  • the feature-based imagery inducing information output from the output unit 620 includes feature-based time-series imagery-inducing sound information with an imagery-inducing scenario, feature-based time-series imagery-inducing visual information, and feature-based time-series imagery-inducing text. It may be one or more of acoustic information and feature-based time-series imagery inducing text information, or a combination of two or more of these, but is not limited thereto.
  • the processor 630 may read a computer program stored in the memory 610 and perform data processing for machine learning according to an embodiment of the present invention.
  • the processor 630 may perform calculations for learning a neural network.
  • the processor 630 is used for learning neural networks, such as processing input data for learning in deep learning (DL), extracting features from input data, calculating errors, and updating the weights of the neural network using backpropagation. Calculations can be performed.
  • At least one of the CPU, GPGPU, and TPU of the processor 630 may process learning of the network function.
  • CPU and GPGPU can work together to process learning of network functions and data classification using network functions.
  • the processors of a plurality of electronic devices 600 can be used together to process learning of network functions and data classification using network functions.
  • a computer program executed in the electronic device 600 may be a CPU, GPGPU, or TPU executable program.
  • the processor 630 may read a computer program stored in the memory 610 and provide a sleep analysis model according to an embodiment of the present invention.
  • the processor 630 may perform calculations to calculate configuration information of a series of imagery induction information and feature-based imagery induction information based on sleep state information.
  • when imagery induction information based on a lookup table or feature-based imagery induction information is output by the output unit 620, the user's features can be extracted based on the sleep state information obtained from the user in response to the output imagery induction information.
  • the processor 630 may perform calculations to learn a sleep analysis model. Accordingly, sleep information related to the user's sleep quality can be inferred based on the sleep analysis model.
  • environmental sensing information acquired in real time or periodically from the user is input as an input value to the sleep analysis model to output data related to the user's sleep.
  • Learning of such a sleep analysis model and inference based thereon may be performed by the device 100a that generates image-inducing information or the device 200a that provides image-inducing information.
  • both learning and inference can be designed to be performed by the device 100a that generates image-guided information or the device 200a that provides image-guided information.
  • learning may be performed in the device 100a that generates image-guided information or the device 200a that provides image-guided information, but inference may be performed in the user terminal 300.
  • the processor 630 can typically process the overall operation of the electronic device 600.
  • the processor 630 can provide or process appropriate information or functions for the user terminal by processing signals, data, information, etc. input or output through the components discussed above, or by running an application program stored in the memory 610.
  • the processor 630 may obtain information on the user's sleep state.
  • Acquiring sleep state information may be acquiring or loading sleep state information stored in the memory 610.
  • acquisition of sleep sound information may involve receiving or loading data from another storage medium, another electronic device, or a separate processing module within the same electronic device based on wired/wireless communication means.
  • the processor 630 may extract the user's features based on the image induction information output from the output unit 620 and the user's sleep state information obtained from the acquisition unit 640. .
  • the processor 630 may generate feature-based imagery induction information based on the extracted user features.
  • the user's features may be extracted based on the recorded information.
  • the acquisition unit 640 of the electronic device 600 may obtain sleep state information from the user.
  • the acquisition unit 640 may perform a role of receiving the obtained sleep state information from the other electronic device.
  • sleep information can be acquired from one or more sleep information sensor devices in order to achieve the purpose of the present invention.
  • the sleep information may include the user's sleep sound information acquired non-invasively during the user's activities or sleep. Additionally, sleep information may include the user's life information and the user's log data.
  • sleep information may include environmental sensing information and user's life information.
  • the user's life information may include information that affects the user's sleep.
  • information that affects the user's sleep may include the user's age, gender, disease status, occupation, bedtime, wake-up time, heart rate, electrocardiogram, and sleep time. For example, if the user's sleep time is less than the standard time, it may have the effect of requiring more sleep time in the next day's sleep. On the other hand, if the user's sleep time is more than the standard time, it may have the effect of requiring less sleep for the next day.
  • one or more sleep sensor devices may include a microphone module, a camera, and an illumination sensor provided in the user terminal 300.
  • information related to the user's activities in the space may be obtained through a microphone module provided in the user terminal 300.
  • since the microphone module must be provided in the relatively small user terminal 300, it may be configured as a micro-electro-mechanical system (MEMS) microphone.
  • environmental sensing information of the present invention may be obtained through the user terminal 300.
  • Environmental sensing information may refer to sensing information obtained from the space where the user is located.
  • Environmental sensing information may be sensing information obtained in relation to the user's activities or sleep through a non-contact method.
  • as shown in FIG. 1H, it may mean sensing information obtained from the sleep detection area 11a, but is not limited thereto.
  • the environmental sensing information may be sleep sound information obtained in the bedroom where the user sleeps.
  • the environmental sensing information acquired through the user terminal 300 may be information that serves as the basis for obtaining the user's sleep state information in the present invention.
  • sleep state information related to whether the user is before, during, or after sleep may be obtained through environmental sensing information obtained in relation to the user's activities.
  • environmental sensing information may include sounds generated as the user tosses and turns during sleep, sounds related to muscle movements, or sounds related to the user's breathing during sleep.
  • the sleep sound information may refer to sound information related to movement patterns and breathing patterns that occur during the user's sleep.
  • the environmental sensing information related to the user's activities in the space may be obtained through a microphone provided in the user terminal 300.
  • the environmental sensing information may include the user's breathing and movement information.
  • the user terminal 300 may include a radar sensor as a motion sensor.
  • the user terminal 300 may generate a discrete waveform (respiration information) corresponding to the user's breathing by processing the user's movement and distance measured through the radar sensor.
  • Quantitative indicators related to sleep can be obtained based on the discrete waveforms and movements.
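  • For illustration only, one such quantitative indicator is a breathing rate estimated from the respiration waveform; the sampling rate and peak-spacing parameters in this sketch are assumptions, not values from the original disclosure:

```python
# Illustrative sketch: estimating a breaths-per-minute indicator from the
# radar-derived respiration waveform described above.
import numpy as np
from scipy.signal import find_peaks

def breaths_per_minute(waveform: np.ndarray, fs: float = 10.0) -> float:
    """Count inhalation peaks in a respiration waveform sampled at `fs` Hz."""
    # Require peaks at least 2 s apart (i.e., at most ~30 breaths/min).
    peaks, _ = find_peaks(waveform, distance=int(2 * fs))
    duration_min = len(waveform) / fs / 60.0
    return len(peaks) / duration_min if duration_min > 0 else 0.0
```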
  • the environmental sensing information may include measured values obtained through a sensor that measures the temperature, humidity, and lighting level of the user's sleeping space.
  • the user terminal 300 may be equipped with a sensor that measures temperature, humidity, and lighting levels in the bedroom.
  • the device 100a for generating imagery induction information or the device 200a for providing imagery induction information provides sleep state information based on environmental sensing information acquired through a microphone module composed of MEMS. It can be obtained.
  • the device 100a for generating imagery-inducing information or the device 200a for providing imagery-inducing information can convert environmental sensing information obtained indistinctly, including a lot of noise, into data that can be analyzed, and can use the converted data to train an artificial neural network.
  • the learned neural network may obtain the user's sleep state information based on the spectrogram obtained in response to the sleep sound information.
  • the learned neural network may be an artificial intelligence sound analysis model, but is not limited to this.
  • even when the device 100a for generating imagery induction information, the device 200a for providing imagery induction information, or the user terminal 300 acquires sleep sound information having a low signal-to-noise ratio, it can provide sleep state information by processing the data into a form appropriate for analysis.
  • the device 100a for generating imagery inducing information is represented as a separate entity from the user terminal 300, but according to an embodiment of the present invention, as shown in FIG. 1c, the device 100a for generating imagery inducing information may be included in the user terminal 300 and perform the functions of measuring sleep state and providing imagery induction information in one integrated device.
  • likewise, the device 200a providing imagery inducing information is represented as a separate entity from the user terminal 300, but according to an embodiment of the present invention, as shown in FIG. 1c, the device 200a providing imagery inducing information may be included in the user terminal 300 and perform the functions of measuring sleep state and feeding back imagery inducing information in one integrated device.
  • This user terminal 300 may refer to any type of entity(s) in the system that has a mechanism for communication with the computing device 100.
  • these user terminals 300 include personal computers (PCs), notebooks, mobile terminals, smart phones, tablet PCs, artificial intelligence (AI) speakers, artificial intelligence TVs, and wearable devices, and may include all types of terminals that can access wired/wireless networks.
  • the user terminal 300 may include an arbitrary server implemented by at least one of an agent, an application programming interface (API), and a plug-in. Additionally, the user terminal 300 may include an application source and/or a client application.
  • the external server 20 may be a server that stores information about a plurality of learning data for learning a neural network.
  • the plurality of learning data may include, for example, health checkup information or sleep checkup information.
  • the external server 20 may be at least one of a hospital server and an information server, and may be a server that stores information about a plurality of polysomnography records, electronic health records, and electronic medical records.
  • a polysomnographic record may include information on the sleep examination subject's breathing and movements during sleep, and information on sleep diagnosis results (e.g., sleep stages, etc.) corresponding to the information.
  • Information stored in the external server 20 can be used as learning data, verification data, and test data to train the neural network in the present invention.
  • the computing device 100 of the present invention may receive health checkup information or sleep checkup information from the external server 20 and construct a learning data set based on the corresponding information.
  • the computing device 100 may generate a sleep analysis model to obtain sleep state information corresponding to environmental sensing information by performing learning on one or more network functions through a learning data set. A detailed description of the construction of the learning data set for learning the neural network of the present invention and the learning method using the learning data set will be described later.
  • sleep information may include environmental sensing information and user's life information.
  • the environmental sensing information may be acoustic information about the user's sleep.
  • One or more sleep information sensor devices may collect raw data about sounds generated during sleep in order to analyze sleep. The raw data about sounds occurring during sleep may be in the time domain.
  • sleep sound information may be related to breathing and movement patterns related to the user's sleep. For example, in the awake state, all nervous systems are activated, so breathing patterns may be irregular and body movements may be frequent. Additionally, breathing sounds may be very low because the neck muscles are not relaxed.
  • on the other hand, when the user enters sleep, the autonomic nervous system stabilizes, breathing becomes regular, body movements may decrease, and breathing sounds may become louder.
  • loud breathing sounds may occur immediately after apnea as a compensation mechanism. In other words, by collecting raw data about sleep, analysis of sleep can be performed.
  • Figure 6a is a diagram for explaining a method of obtaining a spectrogram corresponding to sleep sound information in the sleep analysis method according to the present invention. As shown in FIG. 6A, a spectrogram can be obtained by converting sleep sound information.
  • in order to analyze sleep sound information, it can be converted into a visual representation of changes in the frequency components of the sleep sound information along the time axis.
  • a method of converting raw data into a spectrogram based only on the amplitude excluding the phase can be used, which not only protects privacy but also improves processing speed by lowering the data capacity.
  • the processor may generate a sleep analysis model using a spectrogram generated based on sleep sound information. If the sleep sound information expressed as audio data were used as-is, the amount of information would be very large, so the amount of computation and the computation time would increase significantly; not only would the computation precision be lowered because unwanted signals are included, but there would also be a risk of privacy infringement if all of the user's audio signal were transmitted to the server.
  • the present invention removes noise from sleep sound information, converts it into a spectrogram (Mel spectrogram), and trains on the spectrogram to create a sleep analysis model, thereby reducing the amount of computation and computation time while also protecting personal privacy.
  • the processor 110 may generate a spectrogram (SP) in response to the sleep sound information (SS).
  • The raw data (sleep sound information), which is the basis for creating a spectrogram (SP), can be acquired through the user terminal from a start point entered by the user to an end point; acquired from the time of a terminal operation (e.g., alarm setting) to the time corresponding to that operation (e.g., the alarm time); acquired by automatically selecting the time points based on the user's sleep pattern; or acquired by automatically determining the time point of the user's sleep intention based on sound (the user's speech, breathing, sounds of peripheral devices such as a TV or washing machine, etc.) or changes in illumination.
  • a process of preprocessing the input raw data may be further included.
  • the preprocessing process may include a noise (e.g., white noise) reduction process applied to the raw data.
  • the noise reduction process can be accomplished using algorithms such as spectral gating and spectral subtraction to remove background noise.
  • a noise removal process can be performed using a deep learning-based noise reduction algorithm. In other words, through deep learning, a noise reduction algorithm specialized for the user's breathing and breathing sounds can be used.
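  • For illustration only, a minimal numpy sketch of spectral subtraction, one of the classical techniques named above, is given below; the assumption that the first second of audio contains only background noise, and the STFT parameters, are choices made for this sketch:

```python
# Illustrative sketch of spectral subtraction; not the patented implementation.
import numpy as np

def spectral_subtraction(audio: np.ndarray, sr: int, n_fft: int = 1024,
                         hop: int = 256) -> np.ndarray:
    # Short-time magnitude spectra via a framed FFT (audio assumed >= n_fft samples).
    frames = np.lib.stride_tricks.sliding_window_view(audio, n_fft)[::hop]
    spectra = np.fft.rfft(frames * np.hanning(n_fft), axis=1)
    mag, phase = np.abs(spectra), np.angle(spectra)
    # Estimate the noise floor from the first second (assumed user-free).
    noise_frames = max(1, sr // hop)
    noise_mag = mag[:noise_frames].mean(axis=0)
    # Subtract the noise estimate and floor at zero.
    clean_mag = np.maximum(mag - noise_mag, 0.0)
    # Return cleaned complex spectra (overlap-add resynthesis omitted for brevity).
    return clean_mag * np.exp(1j * phase)
```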
  • the present invention can generate a spectrogram based only on amplitude excluding phase from raw data, but is not limited to this. This not only protects privacy, but also improves processing speed by lowering data volume.
  • the processor 110 may generate a spectrogram (SP) corresponding to the sleep sound information (SS) by performing fast Fourier transform on the sleep sound information (SS).
  • a spectrogram (SP) is intended to visualize and understand sound or waves, and may be a combination of waveform and spectrum characteristics.
  • a spectrogram (SP) may represent the difference in amplitude according to changes in the time axis and frequency axis as a difference in printing density or display color.
  • the preprocessed acoustic raw data is cut into 30-second increments and converted into Mel spectrograms. Accordingly, a 30-second Mel spectrogram has dimensions of 20 frequency bins × 1201 time steps.
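  • For illustration only, the sketch below reproduces the stated 20 × 1201 shape; the sampling rate (16 kHz) and hop length (400 samples) are assumptions chosen so that 1 + 30·16000/400 = 1201 frames result, since the original disclosure does not state these parameters:

```python
# Illustrative sketch producing a 20 x 1201 Mel spectrogram from a 30-second clip.
import librosa
import numpy as np

def thirty_second_mel(audio: np.ndarray, sr: int = 16000) -> np.ndarray:
    mel = librosa.feature.melspectrogram(
        y=audio[: 30 * sr],  # cut to a 30-second increment (audio assumed >= 30 s)
        sr=sr,
        n_fft=1024,
        hop_length=400,
        n_mels=20,
    )
    return mel  # shape: (20, 1201)
```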
  • the amount of information can be preserved by using the split-cat method to change the rectangular Mel spectrogram into a square shape.
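  • As an illustration of one plausible "split-cat" reshaping (the original disclosure names the method but not its exact parameters; the chunk count and padding here are assumptions):

```python
# Illustrative sketch: split the 20 x 1201 Mel spectrogram along the time axis
# and stack the pieces along the frequency axis to approximate a square input.
import numpy as np

def split_cat(mel: np.ndarray, n_chunks: int = 8) -> np.ndarray:
    n_mels, n_frames = mel.shape                 # e.g., (20, 1201)
    pad = (-n_frames) % n_chunks                 # pad time axis to a multiple
    mel = np.pad(mel, ((0, 0), (0, pad)))
    chunks = np.split(mel, n_chunks, axis=1)     # 8 pieces of shape (20, 151)
    return np.concatenate(chunks, axis=0)        # (160, 151): near-square
```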
  • a method of simulating breathing sounds measured in various home environments can be used by adding various noises occurring in a home environment to clean breathing sounds. Because sounds have additive properties, they can be added to each other. However, adding original sound signals such as mp3 or pcm and converting them into a mel spectrogram consumes a lot of computing resources. Therefore, the present invention proposes a method of converting breathing sounds and noise into Mel spectrograms and adding them, respectively. Through this, it is possible to secure robustness in various home environments by simulating breathing sounds measured in various home environments and using them to learn deep learning models.
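  • For illustration, mixing in the Mel domain as described above can be sketched as follows; the random gain range is an assumption, and the addition is valid for power (linear-scale) spectrograms, not dB-scaled ones:

```python
# Illustrative sketch: clean breathing sounds and home-environment noise are
# each converted to (power) Mel spectrograms once, then mixed by addition.
import numpy as np

def mix_in_mel_domain(mel_breath: np.ndarray, mel_noise: np.ndarray,
                      rng: np.random.Generator) -> np.ndarray:
    gain = rng.uniform(0.1, 1.0)   # random noise level per training sample
    # Power spectrograms are (approximately) additive, so mixing is a sum,
    # avoiding repeated waveform-to-Mel conversions.
    return mel_breath + gain * mel_noise
```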
  • sleep sound information relates to sounds related to breathing and body movements acquired during the user's sleep, and may be a very quiet sound.
  • the processor 110 may convert the sleeping sound information into a spectrogram (SP) and perform sound analysis.
  • the spectrogram (SP) contains information showing how the frequency spectrum of the sound changes over time, making it possible to easily identify breathing or movement patterns related to relatively small sounds, thereby improving analysis efficiency.
  • each spectrogram may be configured to have a different concentration of the frequency spectrum.
  • the processor 110 may obtain sleep stage information by processing the spectrogram (SP) as an input to a sleep analysis model.
  • the sleep analysis model is a model for obtaining sleep stage information related to changes in the user's sleep stage, and can output sleep stage information by inputting sleep sound information acquired during the user's sleep.
  • the sleep analysis model may include a neural network model constructed through one or more network functions.
  • Figure 9 is a schematic diagram showing one or more network functions for performing the sleep analysis method according to the present invention.
  • a sleep analysis model is comprised of one or more network functions, and one or more network functions may be comprised of a set of interconnected computational units, which may generally be referred to as 'nodes'. These 'nodes' may also be referred to as 'neurons'.
  • One or more network functions are composed of at least one or more nodes. Nodes (or neurons) that make up one or more network functions may be interconnected by one or more 'links'.
  • one or more nodes connected through a link may form a relative input node and output node relationship.
  • the concepts of input node and output node are relative, and any node in an output node relationship with one node may be in an input node relationship with another node, and vice versa.
  • input node to output node relationships can be created around links.
  • One or more output nodes can be connected to one input node through a link, and vice versa.
  • the value of the output node may be determined based on data input to the input node.
  • the links connecting the input node and the output node may have a weight. Weights may be variable and may be varied by the user or by an algorithm in order for the neural network to perform a desired function. For example, when one or more input nodes are connected to one output node by respective links, the output node value can be determined based on the values input to the input nodes connected to the output node and the weights set on the links corresponding to each input node.
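  • Expressed as a formula (illustrative notation; the original disclosure gives no explicit equation), the output node value is a weighted sum of its inputs passed through an activation function:

```latex
y = f\left(\sum_{i=1}^{n} w_i \, x_i\right)
```

  • where x_i are the values input to the connected input nodes, w_i are the corresponding link weights, and f is an activation function.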
  • one or more nodes are interconnected through one or more links to form an input node and output node relationship within the neural network.
  • the characteristics of the neural network may be determined according to the number of nodes and links within the neural network, the correlation between the nodes and links, and the value of the weight assigned to each link. For example, if there are two neural networks with the same number of nodes and links and different weight values between the links, the two neural networks may be recognized as different from each other.
  • Some of the nodes constituting the neural network may form one layer based on the distances from the first input node.
  • a set of nodes at a distance of n from the initial input node may constitute the n-th layer.
  • the distance from the initial input node can be defined by the minimum number of links that must be passed to reach the node from the initial input node.
  • this definition of a layer is arbitrary for explanation purposes, and the order of a layer within a neural network may be defined in a different way than described above.
  • a layer of nodes may be defined by distance from the final output node.
  • the initial input node may refer to one or more nodes in the neural network through which data is directly input without going through links in relationships with other nodes. Alternatively, in the relationship between nodes based on a link within a neural network, it may refer to nodes that do not have other input nodes connected by a link. Similarly, the final output node may refer to one or more nodes that do not have an output node in their relationship with other nodes among the nodes in the neural network. Additionally, hidden nodes may refer to nodes constituting a neural network other than the first input node and the last output node.
  • the neural network according to an embodiment of the present invention may have more nodes in the input layer than the nodes in the hidden layer close to the output layer, and may be a neural network in which the number of nodes decreases as it progresses from the input layer to the hidden layer.
  • a neural network may contain one or more hidden layers.
  • the hidden node of the hidden layer can take the output of the previous layer and the output of surrounding hidden nodes as input.
  • the number of hidden nodes for each hidden layer may be the same or different.
  • the number of nodes in the input layer may be determined based on the number of data fields of the input data and may be the same as or different from the number of hidden nodes.
  • Input data input to the input layer can be processed by the hidden nodes of the hidden layers and output by the fully connected layer (FCL), which is the output layer.
  • a deep neural network may refer to a neural network that includes multiple hidden layers in addition to the input layer and output layer. Deep neural networks make it possible to identify latent structures in data, that is, the latent structure of a photo, text, video, voice, or music (e.g., what object is in the photo, what the content and emotion of the text are, what the content and emotion of the voice are, etc.).
  • Deep neural networks include convolutional neural networks (CNN), recurrent neural networks (RNN), auto encoders, generative adversarial networks (GAN), restricted Boltzmann machines (RBM), deep belief networks (DBN), Q networks, U networks, Siamese networks, Transformers, Vision Transformers (ViT), Mobile Vision Transformers (MobileViT), etc.
  • the network function may include an auto encoder.
  • An autoencoder may be a type of artificial neural network to output output data similar to input data.
  • the autoencoder may include at least one hidden layer, and an odd number of hidden layers may be placed between input and output layers.
  • the number of nodes in each layer may be reduced from the input layer to an intermediate layer called the bottleneck layer (encoding), and then expanded again from the bottleneck layer to the output layer (symmetric to the input layer) (decoding).
  • the nodes of the dimensionality reduction layer and dimensionality restoration layer may or may not be symmetric. Autoencoders can perform nonlinear dimensionality reduction.
  • the number of nodes in the input layer and the output layer may correspond to the number of sensors remaining after preprocessing of the input data.
  • the number of nodes in the hidden layers included in the encoder may decrease with distance from the input layer. If the number of nodes in the bottleneck layer (the layer with the fewest nodes, located between the encoder and decoder) is too small, not enough information may be conveyed, so a number of nodes above a certain threshold (e.g., more than half the number of nodes in the input layer) may be maintained.
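  • The following is a minimal PyTorch sketch of the autoencoder structure described above, assuming a hypothetical 64-dimensional input; node counts shrink toward the bottleneck and expand back symmetrically, and the bottleneck is kept above half the input width per the guidance above. All sizes are illustrative, not the patent's specification.

```python
import torch
import torch.nn as nn

class SpectrogramAutoencoder(nn.Module):
    def __init__(self, n_in=64, n_bottleneck=40):  # bottleneck > n_in / 2
        super().__init__()
        # Encoder: node counts shrink toward the bottleneck (dimensionality reduction).
        self.encoder = nn.Sequential(
            nn.Linear(n_in, 52), nn.ReLU(),
            nn.Linear(52, n_bottleneck), nn.ReLU(),
        )
        # Decoder: expands symmetrically back to the input size (dimensionality restoration).
        self.decoder = nn.Sequential(
            nn.Linear(n_bottleneck, 52), nn.ReLU(),
            nn.Linear(52, n_in),
        )

    def forward(self, x):
        z = self.encoder(x)     # features at the bottleneck layer
        return self.decoder(z)  # approximation of the input, not a perfect copy
```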
  • a neural network may be trained by at least one of supervised learning, unsupervised learning, and semi-supervised learning. The learning of a neural network is intended to minimize the error of its output.
  • Training is the process of repeatedly feeding training data into the neural network, calculating the error between the network's output and the target for that training data, and backpropagating the error from the output layer of the neural network toward the input layer so as to update the weight of each node in the direction that reduces the error.
  • In supervised learning, training data in which the correct answer is labeled for each item is used (i.e., labeled training data), whereas in unsupervised learning the correct answer may not be labeled in each item of training data.
  • the learning data may be data in which each training data is labeled with a category. Labeled training data is input to the neural network, and the error can be calculated by comparing the output (category) of the neural network with the label of the training data.
  • In the case of unsupervised learning, the error can be calculated by comparing the input training data with the neural network output. The calculated error is backpropagated in the reverse direction (i.e., from the output layer toward the input layer), and the connection weight of each node in each layer of the neural network can be updated accordingly. The amount of change in each updated connection weight may be determined according to the learning rate.
  • the neural network's calculation of input data and backpropagation of errors can constitute a learning cycle (epoch).
  • the learning rate may be applied differently depending on the number of repetitions of the learning cycle of the neural network. For example, in the early stages of neural network training, a high learning rate can be used to increase efficiency by allowing the neural network to quickly achieve a certain level of performance, and in the later stages of training, a low learning rate can be used to increase accuracy.
  • Since the training data can generally be a subset of the real data (i.e., the data to be processed using the learned neural network), there may be learning cycles in which the error on the training data decreases while the error on the real data increases.
  • Overfitting is a phenomenon in which errors on actual data increase due to excessive learning on the training data. For example, a neural network that learned cats from yellow cats failing to recognize a non-yellow cat as a cat may be a type of overfitting. Overfitting can cause the error of AI algorithms to increase. To prevent such overfitting, various optimization methods can be used, such as increasing the training data, regularization, and dropout, which omits some of the network's nodes during the learning process.
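  • The sketch below ties the points above together: a learning rate that starts high and decays across learning cycles, plus weight decay and dropout against overfitting. The model, batch data, class count, and all hyperparameters are illustrative stand-ins, not values from this disclosure.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Dropout(p=0.2),
                      nn.Linear(32, 4))             # 4 illustrative sleep stage classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(30):                             # one learning cycle (epoch) per pass
    for _ in range(8):
        x = torch.randn(16, 64)                     # stand-in training batch (features)
        y = torch.randint(0, 4, (16,))              # stand-in labels (supervised learning)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)                 # error of output vs. target
        loss.backward()                             # backpropagate toward the input layer
        optimizer.step()                            # weight update scaled by the learning rate
    scheduler.step()                                # high learning rate early, lower later
```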
  • the data structure may include a neural network.
  • the data structure including the neural network may be stored in a computer-readable medium.
  • Data structures including neural networks may also include data input to the neural network, weights of the neural network, hyperparameters of the neural network, data obtained from the neural network, activation functions associated with each node or layer of the neural network, and loss functions for learning the neural network.
  • a data structure containing a neural network may include any of the components disclosed above. In other words, the data structure including the neural network may be configured to include all of these components or any combination thereof.
  • a data structure containing a neural network may include any other information that determines the characteristics of the neural network. Additionally, the data structure may include all types of data used or generated in the computational process of a neural network and is not limited to the above.
  • Computer-readable media may include computer-readable recording media and/or computer-readable transmission media.
  • a neural network can generally consist of a set of interconnected computational units, which can be referred to as nodes. These nodes may also be referred to as neurons.
  • a neural network consists of at least one node.
  • Figure 10 is a diagram illustrating the structure of a sleep analysis model using deep learning to analyze a user's sleep, according to an embodiment of the present invention.
  • the sleep analysis model may include a feature extraction model that extracts one or more features for each predetermined epoch, and a feature classification model that generates sleep stage information by classifying each of the extracted features into one or more sleep stages.
  • the sleep analysis model (code A) using deep learning for analyzing the user's sleep disclosed in Figure 10 includes a feature extraction model (code B), an intermediate layer (code C), and a feature classification model (code D), through which sleep information inference (code E) can be performed.
  • the sleep analysis model (code A) using deep learning, which is composed of a feature extraction model (code B), an intermediate layer (code C), and a feature classification model (code D), learns both time-series features and features from multiple images; the sleep analysis model (code A) trained in this way can infer the sleep stages over the entire sleep time and can infer sleep events that occur in real time.
  • the feature extraction model (code B) can extract features of the sleep sound information, or of the spectrogram converted from it, that is input to the sleep analysis model (code A) using deep learning to analyze sleep.
  • the feature extraction model (code B) can be constructed through an independent deep learning model (preferably MobileViT v2, Transformer, etc.) learned through a training data set.
  • the feature extraction model (code B) can be learned through supervised learning or unsupervised learning methods.
  • a feature extraction model can be trained to output output data similar to input data through a learning data set.
  • the feature extraction model (code B) may be trained by a one-to-one proxy task. Additionally, in the process of learning to extract sleep state information for one spectrogram, it can be learned to extract features by combining a feature extraction model and another NN (Neural Network).
  • information input to a sleep analysis model (symbol A) using deep learning may be a spectrogram.
  • FIG. 6A is an exemplary diagram illustrating a process of acquiring sleep sound information from environmental sensing information related to an embodiment of the present invention.
  • the feature extraction model can extract features related to breathing sounds, breathing patterns, and movement patterns by analyzing the time-series frequency pattern of the spectrogram (SP).
  • the feature extraction model may be constructed from part of a neural network model (e.g., an autoencoder) that has been pre-trained on a training data set.
  • the feature extraction model may be constructed through an encoder in an autoencoder learned through a training data set.
  • Autoencoders can be learned through unsupervised learning methods.
  • An autoencoder can be trained to output output data similar to input data through a training data set.
  • the output data of the hidden layer may be an approximation of the input data (i.e., spectrogram) rather than a perfect copy value.
  • the autoencoder can be trained to adjust the weights so that the output data and input data are as equal as possible.
  • each of the plurality of spectrograms included in the learning data set may be tagged with sleep stage information.
  • Each of the plurality of spectrograms may be input to the encoder, and the output corresponding to each spectrogram may be stored by matching the tagged sleep stage information.
  • When each of the first learning data sets (i.e., multiple spectrograms) related to first sleep stage information (e.g., light sleep) is input to the encoder, the features related to the output of the encoder for the corresponding input may be stored by matching them to the first sleep stage information.
  • one or more features associated with the output of an encoder may be represented in a vector space.
  • Since the feature data output corresponding to each of the first learning data sets are output from spectrograms related to the first sleep stage, they may be located relatively close to one another in the vector space. That is, the encoder can be trained so that a plurality of spectrograms output similar features corresponding to each sleep stage.
  • the decoder can be trained to extract features that enable it to well recover the input data. Therefore, as the feature extraction model is implemented through an encoder among the learned autoencoders, features (i.e., multiple features) that enable the input data (i.e., spectrogram) to be well restored can be extracted.
  • When the encoder that constitutes the feature extraction model through the above-described learning process receives a spectrogram (for example, a spectrogram converted from sleep sound information) as input, it can extract features corresponding to that spectrogram.
  • the processor 630 may extract features by processing the spectrogram (SP) generated in response to the sleep sound information (SS) as input to the feature extraction model.
  • the processor 630 may divide the spectrogram (SP) into predetermined epochs.
  • the processor 630 may obtain a plurality of spectrograms by dividing the spectrogram (SP) corresponding to the sleep sound information (SS) into 30-second increments.
  • the processor 630 may obtain 840 spectrograms by dividing the spectrogram in 30-second increments.
  • the processor 630 may process each of the plurality of divided spectrograms as input to a feature extraction model to extract a plurality of features corresponding to each of the plurality of spectrograms.
  • if the number of spectrograms is 840, the number of features extracted by the feature extraction model may correspondingly also be 840.
  • the above-described specific numerical description regarding the spectrogram and number of features is only an example, and the present invention is not limited thereto.
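  • A minimal NumPy sketch of the epoch division described above is shown below; the frame count per 30-second epoch and the Mel band count are assumed values that depend on the STFT settings, which this disclosure does not fix.

```python
import numpy as np

def split_into_epochs(spectrogram, frames_per_epoch):
    """Divide a whole-night spectrogram into fixed 30-second epochs.

    spectrogram: array of shape (n_frames, n_mels).
    frames_per_epoch: number of time frames covering 30 seconds (hop-length dependent).
    """
    n_epochs = spectrogram.shape[0] // frames_per_epoch
    trimmed = spectrogram[:n_epochs * frames_per_epoch]
    return trimmed.reshape(n_epochs, frames_per_epoch, -1)

# Example: 7 hours of sleep in 30-second increments yields 840 epochs.
sp = np.random.rand(840 * 60, 64)   # stand-in: 60 frames per 30-second epoch
epochs = split_into_epochs(sp, frames_per_epoch=60)
assert epochs.shape[0] == 840
```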
  • the processor 630 may obtain sleep stage information by processing a plurality of features output through the feature extraction model as input to a feature classification model.
  • the feature classification model may be a neural network model pre-trained to predict sleep stages corresponding to features.
  • the feature classification model includes a fully connected layer and may be a model that classifies features into at least one of the sleep stages. For example, when the feature classification model receives as input the first feature corresponding to the first spectrogram, it may classify the first feature as light sleep.
  • Other processors disclosed in the present invention (e.g., processor 110, processor 210, control unit 380, etc.) can also perform the above-described operations.
  • the feature classification model (code D) processes the plurality of features obtained through the feature extraction model (code B) and the intermediate layer (code C) as input to proceed with sleep information inference (code E).
  • the feature classification model may be a neural network model modeled to infer sleep information corresponding to the feature.
  • the feature classification model (code D) may be configured to include a fully connected layer and may be a model that classifies a feature as at least one of sleep information.
  • For example, when the feature classification model (code D) receives as input a feature corresponding to a spectrogram, it may classify the feature as REM sleep.
  • Likewise, when the feature classification model (code D) receives as input a feature corresponding to a spectrogram, it may classify the feature as apnea during sleep.
  • the processor 110 may obtain sleep state information by processing a plurality of features output through the feature extraction model as input to a feature classification model.
  • the feature classification model may be a neural network model modeled to predict sleep stages in response to features.
  • the feature classification model includes a fully connected layer and may be a model that classifies features into at least one of the sleep stages. For example, when the feature classification model receives as input the first feature corresponding to the first spectrogram, the first feature may be classified as light sleep.
  • the feature classification model can perform multi-epoch classification to predict sleep stages of multiple epochs by using spectrograms related to multiple epochs as input.
  • Multi-epoch classification does not provide one sleep stage analysis result in response to the spectrogram of a single epoch (i.e., one spectrogram corresponding to 30 seconds); instead, it uses spectrograms corresponding to multiple epochs (i.e., a combination of spectrograms, each corresponding to 30 seconds) as input to estimate several sleep stages (e.g., changes in sleep stage over time) at once.
  • For example, the feature classification model may take 40 spectrograms (e.g., 40 spectrograms corresponding to 30 seconds each) as input and perform prediction for the 20 spectrograms located in the center. That is, all 40 spectrograms are examined, but the sleep stages are predicted through classification only for the centrally located spectrograms.
  • the detailed numerical description of the number of spectrograms described above is only an example, and the present invention is not limited thereto.
  • spectrograms corresponding to multiple epochs are used as input so that all information related to the past and future can be considered. By doing so, the accuracy of output can be improved.
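  • A hedged PyTorch sketch of this multi-epoch scheme follows: features of 40 epochs go in, stages of the central 20 epochs come out, so each prediction can use past and future context. The transformer mixing layer, feature width, and the 4 stage classes are illustrative assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

class MultiEpochClassifier(nn.Module):
    def __init__(self, n_feat=64, n_stages=4, context=40, center=20):
        super().__init__()
        self.context, self.center = context, center
        # Mix information across all 40 epochs before classifying the middle ones.
        self.mix = nn.TransformerEncoderLayer(d_model=n_feat, nhead=4, batch_first=True)
        self.head = nn.Linear(n_feat, n_stages)   # fully connected classification layer

    def forward(self, feats):                      # feats: (batch, 40, n_feat)
        mixed = self.mix(feats)
        start = (self.context - self.center) // 2  # keep only the 20 central epochs
        return self.head(mixed[:, start:start + self.center, :])

logits = MultiEpochClassifier()(torch.randn(2, 40, 64))   # -> shape (2, 20, 4)
```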
  • Figure 6b is a diagram for explaining sleep stage analysis using a spectrogram in the sleep analysis method according to the present invention.
  • the secondary analysis based on sleep sound information uses the sleep analysis model described above, as shown in Figure 6b, so that the corresponding sleep stage (Wake, REM, Light, Deep) can be immediately inferred.
  • the secondary analysis based on sleep sound information can extract the times at which sleep disorders (sleep apnea, hyperventilation) or snoring occurred through singularities in the Mel spectrogram corresponding to the sleep stage.
  • Figure 6c is a diagram for explaining sleep disorder determination using a spectrogram in the sleep analysis method according to the present invention.
  • the breathing pattern is analyzed in one Mel spectrogram, and if characteristics corresponding to a sleep apnea or hyperpnea event are detected, that point in time can be determined as the time when the sleep disorder occurred. At this time, a process of classifying snoring as snoring, rather than as sleep apnea or hyperpnea, through frequency analysis may further be included.
  • Figure 4 is a diagram showing an experimental process for verifying the performance of the sleep analysis method according to the present invention.
  • the user's sleep image and sleep sound are acquired in real time, and the acquired environmental sensing information or sleep sound information can be converted into information on the frequency domain, or into information containing the changes of the frequency components of the acquired information along the time axis.
  • the user's sleep sound information may be converted into a spectrogram. At this time, a preprocessing process of environmental sensing information or sleep sound information may be performed.
  • At least one of the data converted into information including the changes of the frequency components along the time axis, the information on the converted frequency domain, or a spectrogram can be input to the sleep analysis model to analyze the sleep stage. Conversion of information according to an embodiment of the present invention may be performed in real time.
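  • The following librosa sketch shows one way such a conversion could look; the sample rate, Mel band count, and log scaling are illustrative assumptions rather than parameters fixed by this disclosure.

```python
import numpy as np
import librosa

def sound_to_mel(y, sr=16000, n_mels=64):
    """Convert time-domain sleep sound into a Mel spectrogram, i.e. information
    containing the changes of frequency components along the time axis."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    # Log scaling compresses dynamic range, helping quiet breathing sounds stand out.
    return librosa.power_to_db(mel, ref=np.max)

# Example: 30 seconds of (stand-in) audio becomes one epoch's spectrogram.
epoch_sp = sound_to_mel(np.random.randn(16000 * 30))
```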
  • the operation may be performed as follows.
  • converted information or a spectrogram containing time series information can be used as input to a CNN-based deep learning model to output a vector with reduced dimensionality.
  • by using the dimensionality-reduced vector as input to a transformer-based deep learning model, a vector in which time-series information is condensed can be output.
  • the output vector of the transformer-based deep learning model can be input to a 1D CNN (1D Convolutional Neural Network) to which an average pooling technique is applied; through this averaging over the time-series information, a process of converting the time-series information into an N-dimensional vector in which time is condensed can also be performed.
  • the N-dimensional vector containing time series information corresponds to data that still contains time series information, although there is only a difference in resolution from the input data.
  • prediction of various sleep stages can be performed by performing multi-epoch classification on a combination of N-dimensional vectors containing output time series information.
  • continuous prediction of sleep state information can be performed by using the output vectors of transformer-based deep learning models as input to a plurality of fully connected layers (FC).
  • the operation can be performed as follows.
  • the processor according to an embodiment of the present invention can output a vector with reduced dimension by using information or a spectrogram containing time series information as input to a Mobile ViT-based deep learning model.
  • features can be extracted from each spectrogram as the output of a Mobile ViT-based deep learning model.
  • a vector containing time series information can be output by using a vector with a reduced dimension as an input to the intermediate layer.
  • the intermediate layer model may include at least one of the following steps: a linearization step that condenses the vector information, a layer normalization step that normalizes using the mean and variance, or a dropout step that disables some nodes.
  • overfitting can be prevented by performing a process of outputting a vector containing time series information by using a vector with a reduced dimension as an input to the intermediate layer.
  • sleep state information can be output by using the output vector of the intermediate layer as an input to a ViT-based deep learning model.
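  • A minimal sketch of such an intermediate layer (code C) is shown below, assuming illustrative vector widths; it chains the linearization, layer normalization, and dropout steps just described.

```python
import torch
import torch.nn as nn

# Intermediate layer sketch: condense the feature vector, normalize it using
# its mean and variance, then randomly disable some nodes during training
# (which helps prevent overfitting, as noted above). Sizes are assumptions.
intermediate = nn.Sequential(
    nn.Linear(256, 128),    # linearization: condense the vector information
    nn.LayerNorm(128),      # layer normalization over mean and variance
    nn.Dropout(p=0.1),      # dropout: disable some nodes
)

out = intermediate(torch.randn(8, 256))   # 8 epoch feature vectors in, condensed vectors out
```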
  • Sleep state information corresponding to at least one of the information including the changes along the time axis of the frequency components, the information in the frequency domain containing time-series information, the spectrogram, or the Mel spectrogram can be output.
  • Additionally, at least one of the information in the frequency domain containing time-series information, the spectrogram, or the Mel spectrogram may be composed as a series, and sleep state information corresponding to the information composed as a series can be output.
  • various deep learning models in addition to the above-mentioned AI models may be employed to perform learning or inference; the specific description related to the types of deep learning models described above is merely an example, and the present invention is not limited thereto.
  • the processor 630 may obtain information or a spectrogram including changes in the frequency components of the sleep sound information along the time axis based on the sleep sound information.
  • conversion of the sleep sound information into information or a spectrogram containing changes along the time axis of the frequency components may be intended to facilitate analysis of breathing or movement patterns related to relatively small sounds.
  • the processor 630 may generate sleep stage information by utilizing a sleep analysis model including a feature extraction model and a feature classification model, based on information or a spectrogram including the changes along the time axis of the frequency components of the acquired sleep sound information.
  • the sleep analysis model uses as input information or spectrograms that include the changes along the time axis of the frequency components of sleep sound information corresponding to multiple epochs, so that both past and future information can be considered when performing sleep stage prediction; therefore, more accurate sleep stage information can be output.
  • the processor 630 may output sleep stage information corresponding to sleep sound information using the sleep analysis model described above.
  • sleep stage information may be information related to sleep stages that change during the user's sleep.
  • sleep stage information may refer to information about changes in the user's sleep to light sleep, normal sleep, deep sleep, or REM sleep at each time point during the user's 8 hours of sleep last night.
  • the above-described sleep stage information is only an example, and the present invention is not limited thereto.
  • the above-described operation may also be performed by a processor (e.g., processor 110 or processor 210, etc.) of devices according to other embodiments of the present invention.
  • an inference model is created to extract the user's sleep state and sleep stage through deep learning of environmental sensing information.
  • environmental sensing information including sound information is converted into a spectrogram, and an inference model is created based on the spectrogram.
  • the inference model according to an embodiment of the present invention may be built in the computing device 100 or the environment creation device 400 of FIG. 1F, as described above.
  • the inference model according to an embodiment of the present invention may be built in the device 100a that generates image-guided information or the device 200a that provides image-guided information, but is not limited to this.
  • it can be built in the user terminal 300, the external terminal 200, and the electronic device 600.
  • environmental sensing information including user sound information acquired through the user terminal 300 is input to the corresponding inference model, and sleep state information and/or sleep stage information are output as result values.
  • learning and inference may be performed by the same entity, but learning and inference may also be performed by separate entities.
  • For example, both learning and inference may be performed by the device 100a that generates image-guided information or by the device 200a that provides image-guided information; alternatively, learning may be performed by the device 100a that generates image-guided information or by the information providing device 200a while inference is performed in the user terminal 300; or both learning and inference may be performed by the user terminal 300.
  • both learning and inference may be performed by the computing device 100 of FIG. 1F or the environment control device 400 of FIG. 1G, and learning is performed in the computing device 100 but inference is performed in the user terminal 300.
  • Learning may be performed in the computing device 100, but inference may be performed in the environment creation device 30 implemented with smart home appliances (TV, lighting, refrigerator, air purifier), etc.
  • the processor 110 may obtain sleep state information based on environmental sensing information. Specifically, the processor 110 may identify a singularity in which information of a preset pattern is sensed in the environmental sensing information.
  • the preset pattern information may be related to breathing and movement patterns related to sleep. For example, in the awake state, all nervous systems are activated, so breathing patterns may be irregular and body movements may be frequent. Additionally, breathing sounds may be very low because the neck muscles are not relaxed.
  • In contrast, when the user enters sleep, the autonomic nervous system stabilizes, breathing changes regularly, body movements may decrease, and breathing sounds may become louder.
  • the processor 110 may identify the point in time at which sound information of a preset pattern related to regular breathing, small body movement, or small breathing sounds is detected as a singular point in the environmental sensing information. Additionally, the processor 110 may acquire sleep sound information based on environmental sensing information obtained based on the identified singularity. The processor 110 may identify a singularity related to the user's sleep timing from environmental sensing information acquired in time series and obtain sleep sound information based on the singularity.
  • the processor 110 may identify a singularity (P) related to the point in time at which a preset pattern is identified from the environmental sensing information (E).
  • the processor 110 may acquire sleep sound information (SS) based on the identified singularity and acoustic information acquired after the singularity.
  • the waveforms and singularities related to sound in FIG. 5 are merely examples for understanding the present invention, and the present invention is not limited thereto.
  • the processor 110 can identify singularities related to the user's sleep from environmental sensing information, thereby extracting and obtaining only sleep sound information from a vast amount of acoustic information (i.e., environmental sensing information) based on the singularities. This provides convenience by allowing users to automate the process of recording their sleep time, and can also contribute to improving the accuracy of acquired sleep sound information.
  • the processor 110 may obtain sleep state information related to whether the user is before sleep or in sleep based on the singularity (P) identified from the environmental sensing information (E). Specifically, if the singular point (P) is not identified, the processor 110 may determine that the user is before sleep, and if the singular point (P) is identified, the processor 110 may determine that the user is sleeping after the singular point (P). In addition, after the singular point (P) is identified, the processor 110 may identify a time point (e.g., waking-up time) at which the preset pattern is no longer observed, and when such a time point is identified, may determine that the user has woken up after sleeping.
  • the processor 110 may obtain sleep state information related to whether the user is before sleep, in sleep, or awake, based on whether a singular point (P) is identified in the environmental sensing information (E) and whether the preset pattern is continuously detected after the singular point is identified.
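  • As a loose illustration of singularity detection, the heuristic sketch below flags the point where breathing becomes regular; the coefficient-of-variation test, window size, and threshold are all assumptions for illustration, not the method claimed in this disclosure.

```python
import numpy as np

def find_sleep_onset(breath_intervals, window=20, cv_threshold=0.1):
    """Return the index where breathing becomes regular (a hypothetical singularity P).

    breath_intervals: inter-breath intervals in seconds, in time order.
    A low coefficient of variation over a sliding window stands in here for
    the 'preset pattern' of regular sleep breathing.
    """
    for i in range(len(breath_intervals) - window):
        w = breath_intervals[i:i + window]
        cv = np.std(w) / np.mean(w)   # coefficient of variation of the window
        if cv < cv_threshold:
            return i                  # singularity: breathing has become regular
    return None                       # no singularity: user presumed still awake
```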
  • sleep state information may include information related to whether the user is sleeping.
  • the sleep state information may include at least one of first sleep state information indicating that the user is before sleep, second sleep state information indicating that the user is sleeping, and third sleep state information indicating that the user is after sleep.
  • If first sleep state information is inferred, the processor 110 may determine that the user is in a pre-sleep state (i.e., before going to bed); if second sleep state information is inferred, it may be determined that the user is in a sleeping state; and if third sleep state information is obtained, it may be determined that the user is in a post-sleep state (i.e., having woken up).
  • the sleep state information may include information (e.g., sleep event information) about at least one of sleep apnea, snoring, tossing and turning, coughing, sneezing, or bruxism, in addition to information related to the user's sleep stage.
  • To obtain some kinds of sleep state information, acoustic information acquired over a long time interval may be required, whereas other kinds may require acoustic information acquired over a relatively short time interval (e.g., one minute) before and after the corresponding sleep state occurs.
  • This sleep state information may be obtained based on environmental sensing information.
  • Environmental sensing information may include sensing information obtained in a non-contact manner in the space where the user is located.
  • the processor 110 may obtain sleep state information based on at least one of acoustic information, actigraphy, biometric information, and environmental sensing information obtained from the user terminal 300. Specifically, the processor 110 may identify a singularity in acoustic information.
  • the singularity of the acoustic information may be related to breathing and movement patterns related to sleep. For example, in the awake state, all nervous systems are activated, so breathing patterns may be irregular and body movements may be frequent. Additionally, breathing sounds may be very low because the neck muscles are not relaxed.
  • In contrast, when the user enters sleep, the autonomic nervous system stabilizes, breathing changes regularly, body movements may decrease, and breathing sounds may become louder.
  • the processor 110 may identify the point in time at which a pattern of acoustic information related to regular breathing, small body movements, or small breathing sounds is detected as a singular point in the acoustic information. Additionally, the processor 110 may obtain sleep sound information based on sound information obtained based on the identified singularity. The processor 110 may identify a singularity related to the user's sleep time from the sound information acquired in time series and obtain sleep sound information based on the singularity.
  • the processor 110 may obtain environmental sensing information.
  • environmental sensing information may be obtained through the user terminal 300 carried by the user.
  • environmental sensing information related to the space where the user is active may be obtained through the user terminal 300 carried by the user, and the processor 110 may receive the corresponding environmental sensing information from the user terminal 300.
  • the processor 110 may acquire sleep sound information based on environmental sensing information.
  • Environmental sensing information may be acoustic information acquired in a non-contact manner during the user's daily life.
  • environmental sensing information may include various sound information acquired according to the user's life, such as sound information related to cleaning, sound information related to cooking food, sound information related to watching TV, and sleep sound information acquired during sleep.
  • sleep sound information acquired during the user's sleep may include sounds generated as the user tosses and turns during sleep, sounds related to muscle movements, or sounds related to the user's breathing during sleep. That is, sleep sound information in the present invention may mean sound information related to movement patterns and breathing patterns related to the user's sleep.
  • the electronic device 600 may also generate or infer sleep state information. If first sleep state information is inferred regarding the user, the processor 630 may determine that the user is in a pre-sleep state (i.e., before going to bed); if second sleep state information is inferred, it may be determined that the user is in a sleeping state; and if third sleep state information is obtained, it may be determined that the user is in a post-sleep state (i.e., having woken up).
  • the processor 630 may obtain environmental sensing information.
  • environmental sensing information may be obtained through the user terminal 300 carried by the user.
  • environmental sensing information related to the space in which the user operates may be obtained through the user terminal 300 carried by the user, and the processor 630 may receive the corresponding environmental sensing information from the user terminal 300.
  • the processor 630 may acquire sleep sound information based on environmental sensing information.
  • the processor 110 may extract sleep stage information. Sleep stage information may be extracted based on the user's environmental sensing information. Sleep stages can be divided into NREM (non-REM) sleep and REM (rapid eye movement) sleep, and NREM sleep can be further divided into multiple stages (e.g., two stages of light and deep, or four stages from N1 to N4).
  • the sleep stage setting may follow the generally defined sleep stages, but may also be arbitrarily set to various sleep stages depending on the designer. Through sleep stage analysis, it is possible to predict not only sleep quality but also sleep diseases (e.g., sleep apnea) and their underlying causes (e.g., snoring).
  • the processor 110 may generate product recommendation information and verification information related to sleep based on sleep stage information.
  • the processor 110 may generate environment composition information based on sleep stage information. For example, if the sleep stage is the Light stage or N1 stage, environment composition information can be generated to control environmental devices (lighting, air purifier, etc.) to induce deep sleep.
  • FIG. 3a is a diagram comparing polysomnography (PSG) results (PSG results) and analysis results (AI results) using the AI algorithm according to the present invention.
  • the sleep stage information obtained according to the present invention not only closely matches polysomnography, but also contains more precise and meaningful information related to sleep stages (Wake, Light, Deep, REM).
  • the Hypnodensity graph shown at the bottom of FIG. 3A is a graph showing Sleep Stage Probability information indicating the probability of which sleep stage it belongs to among the four sleep stage classes.
  • the sleep stage probability information may mean a numerical representation of the proportion of a certain sleep stage in a certain epoch when the sleep stages are classified.
  • the Hypnogram, which is the graph shown above the Hypnodensity graph, can be obtained by determining the sleep stage with the highest probability from the Hypnodensity graph.
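  • This hypnodensity-to-hypnogram step amounts to a per-epoch argmax, as the small sketch below shows; the stage ordering in the list is an illustrative assumption.

```python
import numpy as np

STAGES = ["Wake", "Light", "Deep", "REM"]   # the four sleep stage classes

def hypnodensity_to_hypnogram(probs):
    """probs: array of shape (n_epochs, 4), per-epoch sleep stage probabilities.

    The hypnogram takes, for each epoch, the stage with the highest
    probability in the hypnodensity graph.
    """
    return [STAGES[i] for i in np.argmax(probs, axis=1)]
```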
  • the sleep analysis results obtained according to the present invention showed very consistent performance when compared with the labeling data obtained through polysomnography.
  • Figure 16 is a diagram for explaining another example of a hypnogram displaying a sleep stage within a user's sleep period according to an embodiment of the present invention.
  • the hypnogram 1000 is generally obtained through polysomnography (PSG) using electroencephalography (EEG), electrooculography (EOG), and electromyography (EMG).
  • the hypnogram displayed as shown in FIG. 16 can express sleep stages by dividing them into REM sleep and non-REM sleep. For example, it can be expressed in four stages: REM sleep, deep sleep, light sleep, and waking.
  • the processor 630 may obtain sleep intention information based on environmental sensing information. According to one embodiment, the processor 630 may identify the type of sound included in environmental sensing information.
  • the processor 630 may calculate sleep intention information based on the number of types of identified sounds.
  • the processor 630 may calculate lower sleep intention information as the number of types of sounds increases, and higher sleep intention information as the number of types of sounds decreases.
  • For example, when multiple types of sounds are included in the environmental sensing information, the processor 630 may calculate the sleep intention information as 2 points. Also, for example, when there is only one type of sound (e.g., washing machine) included in the environmental sensing information, the processor 630 may calculate the sleep intention information as 6 points.
  • the processor 630 can obtain sleep intention information related to how much the user intends to sleep according to the number of types of sounds included in the environmental sensing information. For example, as more types of sounds are identified, sleep intention information indicating that the user's sleep intention is lower (i.e., sleep intention information with a lower score) may be output.
  • Although the processor 630 has been described, the above-described operation may also be performed by another processor disclosed in the present invention (e.g., processor 110, processor 210, control unit 380, etc.).
  • the processor 110 may generate or record an intent score table by pre-matching different intent scores to each of a plurality of acoustic information.
  • For example, first sound information related to the washing machine may be pre-matched with an intention score of 2 points, second sound information related to the sound of the humidifier may be pre-matched with an intention score of 5 points, and third sound information related to voices may be pre-matched with an intention score of 1 point.
  • the processor 110 may create an intent score table by pre-matching relatively low intent scores to sound information not related to the user's sleep (e.g., sounds generated as the user is active, such as vacuum cleaner, dishwashing, and voice sounds), and relatively high intent scores to sound information related to the user's sleep (e.g., sounds unrelated to the user's activities, such as vehicle noise and rain sounds).
  • the specific numerical description of the intention score matched to each sound information described above is only an example, and the present invention is not limited thereto.
  • the processor 110 may obtain sleep intention information based on environmental sensing information and an intention score table. Specifically, the processor 110 may record the intention score matched to an identified sound in response to the point in time at which at least one of the plurality of sounds included in the intention score table is identified in the environmental sensing information. As a specific example, in the process of acquiring environmental sensing information in real time, when the sound of a vacuum cleaner is identified in response to a first time point, the processor 110 may record the intention score of 2 points matched to the vacuum cleaner sound at the first time point. In the process of acquiring environmental sensing information, whenever each of various sounds is identified, the processor 110 may match and record the intention score matched to the identified sound at that time.
  • the processor 110 may obtain sleep intention information based on the sum of intention scores obtained over a predetermined period of time (e.g., 10 minutes). As a specific example, the higher the intention score obtained over 10 minutes, the higher the sleep intention information obtained, and the lower the intention score obtained over 10 minutes, the lower the sleep intention information obtained.
  • the processor 110 may obtain sleep intention information related to how much the user intends to sleep according to the characteristics of the sound included in the environmental sensing information. For example, as sounds related to the user's activity are identified, sleep intention information indicating that the user's sleep intention is low (i.e., sleep intention information with a low score) may be output.
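  • A minimal sketch of this intent score table and windowed summation follows; the sound labels and point values mirror the examples above, but the dictionary form and helper function are illustrative assumptions.

```python
# Illustrative intent score table; values taken from the examples in the text.
INTENT_SCORES = {
    "washing_machine": 2,
    "humidifier": 5,
    "voice": 1,
    "vacuum_cleaner": 2,
}

def sleep_intention(detected_sounds):
    """Sum the intent scores of sounds identified over a window (e.g., 10 minutes).

    detected_sounds: list of sound labels identified in the environmental
    sensing information; unknown labels are ignored in this sketch.
    A higher total suggests higher sleep intention, per the scheme above.
    """
    return sum(INTENT_SCORES.get(s, 0) for s in detected_sounds)
```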
  • Sleep events include various events that may occur during sleep, such as snoring, sleep breathing (for example, including information related to sleep apnea), and bruxism.
  • sleep event information indicating that a predetermined sleep event has occurred or sleep event probability information indicating the probability of determining that a predetermined sleep event has occurred can be generated.
  • Hereinafter, sleep breathing information, which is an example of sleep event information, will be described.
  • Figure 3c is a graph verifying the performance of the sleep analysis method according to the present invention; it compares polysomnography (PSG) results in relation to sleep apnea and hypoventilation with the analysis results (AI results) obtained using the AI algorithm according to the present invention.
  • the probability graph shown at the bottom of FIG. 3C shows the probability of which of the two diseases (sleep apnea and hypoventilation) it belongs to in 30-second increments when predicting a sleep disease by receiving user sleep sound information.
  • the graph shown in the middle of the three graphs shown in FIG. 3C can be obtained by determining the disease with the highest probability from the probability graph shown below.
  • the sleep state information obtained according to the present invention showed performance very consistent with polysomnography.
  • When a sleep disorder (sleep hyperventilation, sleep hypopnea) is detected, stimulation (tactile, auditory, olfactory, etc.) may be provided so that the sleep disorder may be temporarily alleviated.
  • the probability graph according to an embodiment of the present invention may indicate the probability of which of the two diseases (sleep apnea, hypoventilation) the data falls into, in 30-second units, when predicting a sleep disease by receiving user sleep sound information; however, it is not limited to 30-second units.
  • One embodiment of a multimodal sleep state information analysis method (CONCEPT-A)
  • Figure 46 is a flowchart illustrating a method for analyzing sleep state information including the process of combining sleep sound information and sleep environment information into multimodal data according to an embodiment of the present invention.
  • a method for analyzing sleep state information using sleep sound information and sleep environment information in a multimodal manner may include a first information acquisition step (S600) of acquiring sound information in the time domain related to the user's sleep, a step (S602) of preprocessing the first information, a second information acquisition step (S610) of acquiring user sleep environment information related to the user's sleep, a step (S612) of performing preprocessing of the second information, a combining step (S620) of combining the information into multimodal data, and a step (S630) of inputting the multimodal combined data into a deep learning model and obtaining sleep state information as its output.
  • sound information in the time domain related to the user's sleep may be acquired from the user terminal 300.
  • Sound information in the time domain related to the user's sleep may include sound source information obtained from the sound source detection unit of the user terminal 300.
  • the sleep sound information in the time domain can be converted into information including the changes of its frequency components along the time axis, or into information in the frequency domain.
  • information in the frequency domain may be expressed as a spectrogram, which may be a Mel spectrogram to which the Mel scale is applied.
  • By converting to a spectrogram, user privacy can be protected and the amount of data processing can be reduced.
  • information converted from sleep sound information in the time domain is visualized, and in this case, sleep state information can be obtained through image analysis by using it as input to an image processing-based artificial intelligence model.
  • the step of performing data preprocessing of the first information may further include extracting features based on the acoustic information.
  • the user's sleep breathing pattern can be extracted based on the acquired acoustic information in the time domain.
  • the acquired acoustic information in the time domain can be converted into information including changes in frequency components along the time axis, and the user's breathing pattern can be extracted based on the converted information.
  • acoustic information on the time domain may be converted into information on the frequency domain, and the user's sleep breathing pattern may be extracted based on the acoustic information on the frequency domain.
  • the converted information is visualized and can be used as input to an image processing-based artificial intelligence model to output information such as the user's breathing pattern.
  • the step of performing data preprocessing of the first information may include a data augmentation process to obtain a sufficient amount of meaningful data to input sleep sound information into a deep learning model.
  • Data augmentation techniques may include pitch shifting (Pitch Shifting) augmentation, TUT (Tile UnTile) augmentation, and noise-added augmentation.
  • the above-described augmentation technique is merely an example, and the present invention is not limited thereto.
  • Additionally, a method of applying the Mel scale can shorten the time required for hardware to process the data.
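  • Two of the augmentation techniques named above are sketched below with librosa; the semitone range and noise amplitude are illustrative assumptions, and the TUT (Tile UnTile) augmentation is not sketched here.

```python
import numpy as np
import librosa

def augment(y, sr=16000):
    """Return two augmented copies of a sleep-sound waveform y."""
    # Pitch-shifting augmentation: shift by a random number of semitones.
    shifted = librosa.effects.pitch_shift(y=y, sr=sr, n_steps=np.random.uniform(-2, 2))
    # Noise-added augmentation: mix in low-amplitude Gaussian noise.
    noisy = y + 0.005 * np.random.randn(len(y))
    return shifted, noisy
```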
  • the second information acquisition step (S610) of acquiring user sleep environment information related to the user's sleep may acquire user sleep environment information through the user terminal 300, an external server, or a network.
  • the user's sleep environment information may refer to information related to sleep obtained in the space where the user is located.
  • Sleep environment information may be sensing information (e.g., environmental sensing information) obtained in a space where the user is located through a non-contact method.
  • Sleep environment information may be breathing movement and body movement information measured through radar.
  • Sleep environment information may be information related to the user's sleep obtained from a smart watch, smart home appliance, etc.
  • Sleep environment information may be a photoplethysmography (PPG) signal.
  • Sleep environment information can be heart rate variability (HRV) and heart rate obtained through photoplethysmography (PPG), and photoplethysmography signals can be measured by smart watches and smart rings. Sleep environment information may be an electroencephalography (EEG) signal. Sleep environment information may be an actigraphy signal measured during sleep.
  • the step of preprocessing the second information involves a data augmentation process to obtain a sufficient amount of meaningful data to input the data of the user's sleep environment information into a deep learning model. It can be included.
  • the preprocessing of the second information may include processing data of the user's sleep environment information to extract features.
  • For example, if the second information is a photoplethysmography signal (PPG), heart rate variability (HRV) or heart rate can be extracted from the photoplethysmography signal.
  • If the second information is image information, the augmentation may include TUT (Tile UnTile) augmentation and noise-added augmentation.
  • the above-described augmentation technique is merely an example of an augmentation technique for image information, and the present invention is not limited thereto.
  • the user's sleep environment information may be information in various storage formats. Various methods may be employed to augment the user's sleep environment information.
  • the step (S620) of combining the first information and the second information that have undergone the data preprocessing process into multimodal data combines the data so that the multimodal data can be input to the deep learning model.
  • For example, a method of combining multimodal data may be to combine the preprocessed first information and the preprocessed second information into data of the same format; the first information may be acoustic image information in the frequency domain, and the second information may be heart rate image information in the time domain obtained from a smart watch. At this time, since the domains of the first information and the second information are not the same, they can be converted to the same domain and combined.
  • Alternatively, when combining the preprocessed first information (e.g., acoustic image information in the frequency domain) and the preprocessed second information (e.g., heart rate image information in the time domain obtained from a smart watch) into data of the same format, each piece of data can be labeled as being related to the first information or to the second information.
  • the step of combining multimodal data may be performed by performing first information augmentation, performing second information augmentation, and then combining.
  • For example, the first information may be the user's acoustic information in the time domain, or a spectrogram converted from the time-domain sound information into frequency-domain sound information, and the second information may be a photoplethysmography signal (PPG); these can be combined into multimodal data.
  • the step of combining multimodal data may also be performed by performing first information augmentation, performing second information augmentation and feature extraction, and then combining them.
  • For example, the first information may be the user's sound information in the time domain, or a spectrogram converted from the time-domain sound information into frequency-domain sound information, and the second information may be heart rate variability (HRV) or heart rate extracted from the photoplethysmography signal (PPG); these can be combined into multimodal data.
  • the step (S620) of combining multimodal data may also be performed by performing first information augmentation and feature extraction, performing second information augmentation, and then combining them.
  • For example, the first information may be the user's breathing pattern extracted based on the user's acoustic information, and the second information may be heart rate variability (HRV) or heart rate obtained from the photoplethysmography signal (PPG); these can be combined into multimodal data.
  • Alternatively, first information augmentation and feature extraction may be performed, second information augmentation and feature extraction may be performed, and the results may then be combined.
  • For example, the first information may be the user's breathing pattern extracted based on the user's acoustic information, and the second information may be heart rate variability (HRV) or heart rate obtained from a photoplethysmography signal (PPG); these can be combined into multimodal data.
  • the step of inputting the multimodal combined data into a deep learning model may process the data into the matching form required as input to the deep learning model.
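  • One way the combining could be realized is sketched below: a spectrogram (first information) and a PPG-derived HRV series (second information) are brought to the same per-epoch time axis and concatenated into same-format data. The shapes and resampling assumption are illustrative, not the patent's prescription.

```python
import numpy as np

def combine_multimodal(spectrogram, hrv_series):
    """Stack a sleep-sound spectrogram and an HRV series into one multimodal
    input, assuming both are already resampled to the same epoch grid.

    spectrogram: (n_epochs, n_mels) features from the first information.
    hrv_series:  (n_epochs,) HRV values from the second information (PPG).
    """
    # Bring both modalities to the same domain (a shared per-epoch time axis) ...
    hrv_column = hrv_series.reshape(-1, 1)
    # ... then concatenate along the feature axis into same-format data.
    return np.concatenate([spectrogram, hrv_column], axis=1)

combined = combine_multimodal(np.random.rand(840, 64), np.random.rand(840))
assert combined.shape == (840, 65)
```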
  • the step of acquiring sleep state information as an output of a deep learning model is to infer sleep state information by using multimodal combined data as an input to a deep learning model for inferring sleep state information.
  • Sleep state information may be information about the user's sleep state.
  • the user's sleep state information may include sleep stage information expressing the user's sleep as stages. Sleep stages can be divided into NREM (non-REM) sleep and REM (rapid eye movement) sleep, and NREM sleep can be further divided into multiple stages (e.g., two stages of light and deep, or four stages from N1 to N4).
  • the sleep stage setting may be defined as a general sleep stage, but may also be arbitrarily set to various sleep stages depending on the designer.
  • the user's sleep state information may include sleep event information expressing sleep-related diseases that occur during the user's sleep or behavior during sleep.
  • sleep event information that occurs during the user's sleep may include sleep apnea and hypopnea information due to the user's sleep disease.
  • sleep event information that occurs during the user's sleep may include whether the user snores, the duration of snoring, whether the user talks in his sleep, the duration of the sleep talking, whether he tosses and turns, and the duration of the tossing and turning.
  • the user's sleep event information described above is only an example for expressing events that occur during the user's sleep, and is not limited thereto.
  • Figure 47 is a flowchart illustrating a method for analyzing sleep state information including the step of combining the inferred sleep sound information and sleep environment information into multimodal data according to an embodiment of the present invention.
  • a method for analyzing sleep state information using sleep sound information and sleep environment information in a multimodal manner may include a first information acquisition step (S700) of acquiring sound information in the time domain related to the user's sleep, a step (S702) of performing preprocessing of the first information, a step (S704) of inferring information about sleep by using the first information as input to the deep learning model, a second information acquisition step (S710) of acquiring user sleep environment information related to the user's sleep, a step (S712) of preprocessing the second information, a step (S714) of inferring information about sleep by using the second information as input to the deep learning model, a combining step (S720) of combining the inferred information into multimodal data, and a step (S730) of obtaining sleep state information by combining the multimodal data.
  • the user terminal 300 may acquire sound information in the time domain related to the user's sleep.
  • Sound information in the time domain related to the user's sleep may include sound source information obtained from the sound source detection unit of the user terminal 300.
  • the acquired acoustic information in the time domain can be converted into information including the changes of its frequency components along the time axis, or into information on the frequency domain.
  • information in the frequency domain may be expressed as a spectrogram, which may be a Mel spectrogram to which the Mel scale is applied.
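  • As an illustration of the conversion described above, the following is a minimal sketch assuming the librosa library; the sampling rate, window, hop, and Mel-band counts are illustrative assumptions rather than values prescribed by the present disclosure.

```python
# Hedged sketch: time-domain sleep audio -> Mel spectrogram (assumed librosa API).
import librosa
import numpy as np

def to_mel_spectrogram(wav_path: str, sr: int = 16000, n_mels: int = 64) -> np.ndarray:
    y, _ = librosa.load(wav_path, sr=sr)            # sound information in the time domain
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=1024, hop_length=512, n_mels=n_mels
    )                                               # frequency components along the time axis
    return librosa.power_to_db(mel, ref=np.max)     # log-scaled Mel spectrogram for model input
```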
  • the step of performing data preprocessing of the first information may include a data augmentation process to obtain a sufficient amount of meaningful data to input sleep sound information into a deep learning model.
  • Data augmentation techniques may include pitch shifting (Pitch Shifting) augmentation, TUT (Tile UnTile) augmentation, and noise-added augmentation.
  • the above-described augmentation technique is merely an example, and the present invention is not limited thereto.
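  • By way of example only, the following sketch shows two of the named augmentations (pitch shifting and noise addition), assuming librosa and numpy; the TUT (Tile UnTile) augmentation is specific to this disclosure and is not reproduced here.

```python
# Hedged sketch of pitch-shifting and noise-added augmentation for sleep audio.
import librosa
import numpy as np

def pitch_shift_augment(y: np.ndarray, sr: int, n_steps: float = 2.0) -> np.ndarray:
    # Shift the pitch by n_steps semitones while keeping duration unchanged.
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

def noise_augment(y: np.ndarray, snr_db: float = 10.0) -> np.ndarray:
    # Mix white noise into the signal at the requested signal-to-noise ratio.
    noise = np.random.randn(len(y))
    scale = np.sqrt(np.mean(y ** 2) / (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
    return y + scale * noise
```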
  • applying the Mel scale can shorten the time required for hardware to process the data.
  • the second information acquisition step (S710) of acquiring user sleep environment information related to the user's sleep may acquire user sleep environment information through the user terminal 300, an external server, or a network.
  • the user's sleep environment information may refer to information related to sleep obtained in the space where the user is located.
  • Sleep environment information may be sensing information obtained in a space where the user is located using a non-contact method.
  • Sleep environment information may be breathing movement and body movement information measured through radar.
  • Sleep environment information may be information related to the user's sleep obtained from a smart watch, smart home appliance, etc.
  • Sleep environment information may be heart rate variability (HRV) or heart rate obtained through photoplethysmography (PPG); photoplethysmography signals can be measured by smart watches and smart rings. Sleep environment information may be an electroencephalography (EEG) signal.
  • Sleep environment information may be an Actigraphy signal measured during sleep.
  • the step of preprocessing the second information may include a data augmentation process to obtain a sufficient amount of meaningful data before inputting the user's sleep environment information into a deep learning model.
  • augmentation of the image information may include TUT (Tile UnTile) augmentation and noise-added augmentation.
  • the above-described augmentation technique is merely an example of an augmentation technique for image information, and the present invention is not limited thereto.
  • the user's sleep environment information may be information in various storage formats. Various methods may be employed to augment the user's sleep environment information.
  • in the step of inferring information about sleep (S704), information about sleep can be inferred by using the preprocessed first information as an input to a deep learning model.
  • a previously trained deep learning model can use its inferred data as input for further self-training.
  • a deep learning sleep analysis model that infers information about sleep by using first information about sleep sounds as input may include a feature extraction model and a feature classification model.
  • the feature extraction model can be pre-trained by a one-to-one proxy task in which a single spectrogram is input and the model learns to predict the sleep state information corresponding to that spectrogram.
  • learning may be performed by adopting the structure of FC (Fully Connected Layer) or FCN (Fully Connected Neural Network).
  • learning may be performed by adopting the structure of the intermediate layer.
  • the feature classification model takes a plurality of consecutive spectrograms as input, predicts the sleep state information of each spectrogram, and analyzes the sequence of the plurality of consecutive spectrograms, so that it can learn to predict or classify the overall sleep state information.
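  • The following is a minimal PyTorch sketch of the two-part structure described above (a feature extraction model pre-trained on single spectrograms via a one-to-one proxy task, and a feature classification model that reads a sequence of consecutive spectrograms); all layer types and sizes are illustrative assumptions.

```python
# Hedged sketch: feature extraction + feature classification models (PyTorch).
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self, n_mels: int = 64, n_frames: int = 100, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                       # one spectrogram in
            nn.Linear(n_mels * n_frames, dim),  # FC / FCN style layers
            nn.ReLU(),
        )
        self.proxy_head = nn.Linear(dim, 4)     # one-to-one proxy task: predict the stage

    def forward(self, spec: torch.Tensor) -> torch.Tensor:  # spec: (batch, n_mels, n_frames)
        return self.net(spec)

class FeatureClassifier(nn.Module):
    def __init__(self, dim: int = 128, n_stages: int = 4):
        super().__init__()
        self.seq = nn.GRU(dim, dim, batch_first=True)  # analyze the spectrogram sequence
        self.head = nn.Linear(dim, n_stages)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # feats: (batch, time, dim)
        out, _ = self.seq(feats)
        return self.head(out)                   # a sleep stage prediction per epoch
```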
  • in the step (S714) of inferring information about sleep, information about sleep can be inferred by using the preprocessed second information as input to a previously trained inference model.
  • the previously trained inference model may be the deep learning sleep analysis model described above, but is not limited thereto; inference models of various types and methods may be used to achieve the purpose.
  • in the combining step (S720), the inferred information is combined to determine sleep state information.
  • a method of combining multimodal data may be to combine sleep information inferred through preprocessed first information and information inferred through preprocessed second information into data of the same format.
  • in the step of acquiring sleep state information by combining multimodal data (S730), the user's sleep state information can be determined by combining the data obtained through the multiple modalities.
  • Sleep state information may be information about the user's sleep state.
  • the step of acquiring sleep state information by combining multimodal data (S730) may combine the hypnodensity graph about the user's sleep inferred in the step (S704) of inferring information about sleep from the preprocessed first information with the hypnodensity graph inferred in the step (S714) of inferring information about sleep from the preprocessed second information. For example, by substituting the probability of each hypnodensity graph into a formula, the sleep stage with the highest reliability at each time point can be obtained as the user's sleep stage information.
  • for each hypnodensity graph, if the reliability at a given time exceeds a preset reliability threshold, that stage is adopted as the user's sleep stage information; if there is no sleep stage whose reliability exceeds the preset reliability threshold, sleep state information can be obtained by adopting a sleep stage through weighting.
  • the step (S730) may also combine the hypnogram about the user's sleep inferred in the step (S704) with the hypnodensity graph inferred in the step (S714). For example, if the reliability of the sleep stage displayed in the hypnogram and the hypnodensity graph exceeds a preset threshold, it can be adopted as the user's sleep stage; otherwise, a weighted calculation is made and the result is adopted as the user's sleep stage, so that highly reliable sleep state information can be obtained.
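  • As a concrete illustration of the combination rule above, the following numpy sketch adopts a stage when either modality's hypnodensity reliability clears a preset threshold and otherwise falls back to a weighted average; the threshold and weights are illustrative assumptions.

```python
# Hedged sketch: combining two hypnodensity graphs into sleep stage information.
import numpy as np

def combine_hypnodensity(hd_sound, hd_env, thresh=0.8, w_sound=0.6, w_env=0.4):
    # hd_sound, hd_env: (time, n_stages) stage probabilities per epoch
    stages = np.empty(hd_sound.shape[0], dtype=int)
    for t in range(hd_sound.shape[0]):
        # Pick the more confident modality at this time point.
        conf, stage = max((hd_sound[t].max(), int(hd_sound[t].argmax())),
                          (hd_env[t].max(), int(hd_env[t].argmax())))
        if conf > thresh:                        # reliability exceeds the threshold
            stages[t] = stage
        else:                                    # weighted combination fallback
            stages[t] = int((w_sound * hd_sound[t] + w_env * hd_env[t]).argmax())
    return stages
```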
  • the user's sleep state information may include sleep stage information expressing the user's sleep as stages. Stages of sleep can be divided into NREM (non-REM) sleep and REM (rapid eye movement) sleep, and NREM sleep can be further divided into multiple stages (e.g., two stages of light and deep sleep, or four stages from N1 to N4).
  • the sleep stage setting may be defined as a general sleep stage, but may also be arbitrarily set to various sleep stages depending on the designer.
  • the user's sleep state information may include sleep stage information indicating the user's sleep as a stage.
  • Methods for displaying sleep stages may include a hypnogram, which displays sleep stages on a graph, and a hypnodensity graph, which displays the probability of each sleep stage on a graph, but the display method is not limited thereto.
  • the user's sleep state information may include sleep event information expressing sleep-related diseases that occur during the user's sleep or behavior during sleep.
  • sleep event information that occurs during the user's sleep may include sleep apnea and hypopnea information due to the user's sleep disease.
  • sleep event information that occurs during the user's sleep may include whether the user snores, the duration of snoring, whether the user talks in their sleep, the duration of the sleep talking, whether the user tosses and turns, and the duration of the tossing and turning.
  • the user's sleep event information described is only an example for expressing events that occur during the user's sleep, and is not limited thereto.
  • CONCEPT-C One embodiment of a multimodal sleep state information analysis method
  • Figure 48 is a flowchart illustrating a method for analyzing sleep state information including the step of combining inferred sleep sound information and sleep environment information into multimodal data according to an embodiment of the present invention.
  • a method for analyzing sleep state information using sleep sound information and sleep environment information in a multimodal manner may include a first information acquisition step of acquiring sound information in the time domain related to the user's sleep (S800), a step of preprocessing the first information (S802), a step of inferring information about sleep by using the first information as input to a deep learning model (S804), a second information acquisition step of acquiring user sleep environment information related to the user's sleep (S810), a combining step of combining multimodal data (S820), and a step of acquiring sleep state information from the combined multimodal data (S830).
  • the first information acquisition step (S800) may acquire sound information in the time domain related to the user's sleep from the user terminal 300.
  • Sound information in the time domain related to the user's sleep may include sound source information obtained from the sound source detection unit of the user terminal 300.
  • sound information in the time domain may be converted into information in the frequency domain.
  • information in the frequency domain may be expressed as a spectrogram, which may be a Mel spectrogram to which the Mel scale is applied.
  • the step of performing data preprocessing of the first information may include a data augmentation process to obtain a sufficient amount of meaningful data to input sleep sound information into a deep learning model.
  • Data augmentation techniques may include pitch shifting (Pitch Shifting) augmentation, TUT (Tile UnTile) augmentation, and noise-added augmentation.
  • the above-described augmentation technique is merely an example, and the present invention is not limited thereto.
  • applying the Mel scale can shorten the time required for hardware to process the data.
  • the second information acquisition step (S810) of acquiring user sleep environment information related to the user's sleep may acquire user sleep environment information through the user terminal 300, an external server, or a network.
  • the user's sleep environment information may refer to information related to sleep obtained in the space where the user is located.
  • Sleep environment information may be sensing information obtained in a space where the user is located using a non-contact method.
  • Sleep environment information may be breathing movement and body movement information measured through radar.
  • Sleep environment information may be information related to the user's sleep obtained from a smart watch, smart home appliance, etc.
  • Sleep environment information may be heart rate variability (HRV) or heart rate obtained through photoplethysmography (PPG); photoplethysmography signals can be measured by smart watches and smart rings.
  • Sleep environment information may be an electroencephalography (EEG) signal.
  • Sleep environment information may be an Actigraphy signal measured during sleep.
  • Sleep environment information may be labeling data representing user information. Specifically, the labeling data may include the user's age, disease status, physical condition, race, height, weight, and body mass index, and this is only an example of labeling data representing the user's information and is not limited thereto.
  • the above-described sleep environment information is only an example of information that may affect the user's sleep, and is not limited thereto.
  • in the step (S804) of inferring information about sleep, information about sleep can be inferred by using the preprocessed first information as an input to the deep learning model.
  • a deep learning sleep analysis model that infers information about sleep by using first information about sleep sounds as input may include a feature extraction model and a feature classification model.
  • the feature extraction model can be pre-trained by a one-to-one proxy task in which a single spectrogram is input and the model learns to predict the sleep state information corresponding to that spectrogram.
  • learning may be performed by adopting the structure of FC (Fully Connected Layer) or FCN (Fully Connected Neural Network).
  • learning may be performed by adopting the structure of the intermediate layer.
  • the feature classification model takes a plurality of consecutive spectrograms as input, predicts the sleep state information of each spectrogram, and analyzes the sequence of the plurality of consecutive spectrograms, so that it can learn to predict or classify time-series sleep state information.
  • the step of combining the first information and the second information that have undergone data preprocessing into multimodal data combines the data so that the multimodal data can be input to the deep learning model.
  • a method of combining multimodal data may be to combine sleep information inferred through preprocessed first information and information inferred through preprocessed second information into data of the same format.
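  • A minimal sketch of such data-level combination is shown below, assuming both modalities have been preprocessed onto the same epoch grid; concatenation along the feature axis is an illustrative choice of "same format", not a prescribed one.

```python
# Hedged sketch: fusing preprocessed sound and environment data for model input.
import numpy as np

def fuse_multimodal(mel_spec: np.ndarray, env_feats: np.ndarray) -> np.ndarray:
    # mel_spec: (time, n_mels); env_feats: (time, n_env), e.g. HRV or movement
    assert mel_spec.shape[0] == env_feats.shape[0], "align the two modalities per epoch first"
    return np.concatenate([mel_spec, env_feats], axis=1)  # one same-format array
```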
  • the step of acquiring sleep state information by combining multi-modal data can determine the user's sleep state information by combining data obtained through multi-modality.
  • Sleep state information may be information about the user's sleep state.
  • the user's sleep state information may include sleep stage information expressing the user's sleep as stages. Stages of sleep can be divided into NREM (non-REM) sleep and REM (rapid eye movement) sleep, and NREM sleep can be further divided into multiple stages (e.g., two stages of light and deep sleep, or four stages from N1 to N4).
  • the sleep stage setting may be defined as a general sleep stage, but may also be arbitrarily set to various sleep stages depending on the designer.
  • the user's sleep state information may include sleep event information expressing sleep-related diseases that occur during the user's sleep or behavior during sleep.
  • sleep event information that occurs during the user's sleep may include sleep apnea and hypopnea information due to the user's sleep disease.
  • sleep event information that occurs during the user's sleep may include whether the user snores, the duration of snoring, whether the user talks in their sleep, the duration of the sleep talking, whether the user tosses and turns, and the duration of the tossing and turning.
  • the user's sleep event information described is only an example for expressing events that occur during the user's sleep, and is not limited thereto.
  • analysis of sleep state information based on acoustic information may include a detection step for sleep events (e.g., apnea, hypopnea, snoring, sleep talking, etc.).
  • sleep events that occur during sleep have various characteristic patterns. For example, there is no sound during an apnea event, but when the apnea event ends, a loud sound may be generated as air passes again; sleep events can therefore be detected by learning such characteristics in time series.
  • the deep neural network structure for analyzing the above-described sleep stages can be modified and used. Specifically, sleep stage analysis requires time-series learning of sleep sounds, but a sleep event lasts on average between 10 and 60 seconds, so one or two 30-second epochs are sufficient for accurate detection. Therefore, the deep neural network structure for detecting sleep events according to an embodiment of the present invention can reduce the amount of input and output relative to the deep neural network structure for analyzing sleep stages. For example, if the deep neural network structure for analyzing sleep stages processes 40 Mel spectrograms and outputs sleep stages for 20 epochs, the deep neural network structure for detecting sleep events may process 14 Mel spectrograms and output sleep event labels for 10 epochs.
  • sleep event labels may include, but are not limited to, no event, apnea, hypopnea, snoring, tossing and turning, etc.
  • a deep neural network structure for detecting sleep events that occur during sleep may include a feature extraction model and a feature classification model.
  • the feature extraction model extracts the features of sleep events found in each mel spectrogram
  • the feature classification model examines multiple epochs, finds epochs containing sleep events, and analyzes neighboring features so that the types of sleep events can be predicted and classified in time series.
  • a method for detecting sleep events that occur during sleep may assign class weights to solve the class imbalance problem of each sleep event. Specifically, among sleep events that occur during sleep, “no event” may have a dominant effect on the overall sleep length, resulting in a decrease in sleep event learning efficiency. Therefore, by assigning a higher weight than “no event” to other sleep events, learning efficiency and accuracy can be improved.
  • for example, when the sleep event class is classified into three categories ("no event", "apnea", and "hypopnea"), in order to reduce the impact of "no event" on learning, a weight of 1.0 can be assigned to "no event", a weight of 1.3 to "apnea", and a weight of 2.1 to "hypopnea".
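  • The example weights above translate directly into a class-weighted loss; the following is a hedged PyTorch sketch.

```python
# Hedged sketch: class-weighted loss for imbalanced sleep event labels.
import torch
import torch.nn as nn

weights = torch.tensor([1.0, 1.3, 2.1])        # "no event", "apnea", "hypopnea"
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)                     # (batch, n_event_classes) model outputs
labels = torch.randint(0, 3, (8,))             # ground-truth event labels
loss = criterion(logits, labels)               # rarer events contribute more to the loss
```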
  • Figure 45 is a diagram for explaining consistency training according to an embodiment of the present invention.
  • the step of detecting a sleep event that occurs during sleep may use Consistency Training, as shown in Figure 45, in order to detect sleep events in home environments and noisy environments.
  • Consistency Training is a type of semi-supervised learning model.
  • Consistency Training according to an embodiment of the present invention may be a method of performing learning with both data to which noise is intentionally added and data to which noise is not intentionally added.
  • Consistency Training may be a method of performing learning by generating data of a virtual sleep environment using noise of the target environment.
  • Noise intentionally added according to an embodiment of the present invention may be noise of the target environment, where the noise of the target environment may be noise obtained in an environment other than polysomnography, for example.
  • various noises can be added by adjusting the SNR and the type of noise to resemble the actual user's environment. Through this, the types of noise obtained in various laboratories and the noise that occurs in actual home environments can be collected and learned.
  • data to which noise is intentionally added will be referred to as corrupted data.
  • Corrupted data may preferably refer to data to which noise of the target environment has been intentionally added.
  • data to which no noise is intentionally added will be referred to as clean data.
  • although no noise is intentionally added to the clean data, it may actually contain noise.
  • Clean data used for Consistency Training may be data acquired in a specific environment (preferably, a polysomnography environment), and corrupted data may be data obtained in a different environment or a target environment (preferably, an environment other than polysomnography).
  • Corrupted data may be data in which noise acquired in another environment or a target environment (preferably, an environment other than polysomnography) is intentionally added to clean data.
  • in Consistency Training, when clean data and corrupted data are input to the same deep learning model, a loss function or consistency loss is defined so that the two outputs are the same, and learning can be performed to achieve consistent predictions.
  • detection of sleep events may include home noise consistency training.
  • Consistency learning in the home environment can make the model perform robustly even against noise at home.
  • Consistency learning in a home environment can be made robust to noise by performing consistency learning so that the model outputs similar predictions regardless of whether there is noise or not.
  • sleep event detection that occurs during sleep can proceed with consistency learning in the home environment.
  • Consistency learning in the home environment may involve a consistency loss function.
  • consistency loss can be defined as the mean square error (MSE) between the prediction of a clean sleep breathing sound and the prediction of a corrupted version of that sound.
  • consistency learning in a home environment can randomly sample data from the training noise to generate corrupted sounds, and can add noise to clean sleep breathing sounds at a random SNR between -20 and 5.
  • consistency learning in a home environment can be done so that the length of the input sequence is 14 epochs and the total length of sampled noise is 7 minutes or more.
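  • Putting the pieces above together, the following is a minimal sketch of one consistency training step, assuming PyTorch and a `model` defined elsewhere; the supervised loss on labeled clean data would be added separately.

```python
# Hedged sketch: consistency loss between clean and corrupted sleep sounds.
import torch
import torch.nn.functional as F

def consistency_step(model, clean_wave: torch.Tensor, noise_wave: torch.Tensor) -> torch.Tensor:
    snr_db = torch.empty(1).uniform_(-20.0, 5.0)            # random SNR between -20 and 5
    scale = torch.sqrt(clean_wave.pow(2).mean()
                       / (noise_wave.pow(2).mean() * 10 ** (snr_db / 10)))
    corrupted = clean_wave + scale * noise_wave             # intentionally corrupted version

    pred_clean = model(clean_wave)
    pred_corrupted = model(corrupted)
    return F.mse_loss(pred_corrupted, pred_clean)           # mean square error consistency loss
```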
  • detecting a sleep event according to the present invention detects information within a shorter period of time than sleep stage analysis according to the present invention, so the accuracy of sleep event detection can be increased.
  • Figure 49 is a diagram illustrating a linear regression analysis function used to analyze AHI, which is a sleep apnea occurrence index, through sleep events that occur during sleep, according to an embodiment of the present invention.
  • the AHI index, which means the number of respiratory events that occur per unit time (e.g., one hour), can be analyzed separately from sleep stage analysis and independently of the epoch length used for sleep stage analysis. Specifically, two or three short sleep events may be included during one epoch, and one long sleep event may span multiple epochs.
  • a regression analysis function can be used to estimate the number of actual events that occur from the number of epochs in which sleep events occur.
  • a RANSAC (Random Sample Consensus) regression analysis model can be used.
  • the RANSAC regression model is one of the methods for estimating the parameters of an approximate (fitting) model; it randomly selects sample data and then selects the model that agrees with the largest number of samples.
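  • A hedged sketch of this estimation with scikit-learn's RANSACRegressor follows; the sample numbers are synthetic placeholders, not data from the present disclosure.

```python
# Hedged sketch: estimating actual event counts from event-containing epochs via RANSAC.
import numpy as np
from sklearn.linear_model import RANSACRegressor

epochs_with_events = np.array([[2], [5], [9], [14], [20]])  # observed per night
actual_event_counts = np.array([3, 7, 13, 21, 30])          # reference counts

ransac = RANSACRegressor()                   # fits on random subsets, keeps the max-consensus model
ransac.fit(epochs_with_events, actual_event_counts)
estimated = ransac.predict([[11]])           # estimated events for 11 event-containing epochs
# AHI then follows as estimated events divided by total sleep time in hours.
```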
  • a method for analyzing a sleep state according to an embodiment of the present invention may include analysis through a deep learning model.
  • the deep learning model according to an embodiment of the present invention is capable of multi-task learning and/or multi-task analysis.
  • multi-task learning and multi-task analysis can simultaneously learn tasks according to the above-described embodiments of the present invention (e.g., multimodal learning, real-time sleep event analysis, sleep stage analysis, etc.).
  • a deep learning model for analyzing sleep states is capable of multi-task learning and multi-task analysis.
  • a deep learning model may adopt a structure with multiple heads.
  • Each of the plurality of heads may be responsible for a specific task (e.g., multimodal learning, real-time sleep event analysis, sleep stage analysis, etc.).
  • a deep learning model may have a structure with a total of three heads: a first head, a second head, and a third head. The first head may perform inference and/or classification of sleep stage information, the second head may perform detection and/or classification of sleep apnea and hypopnea among sleep events, and the third head may perform detection and classification of snoring among sleep events.
  • the detailed description of the specific work or task of the head described above is only an example for explaining the present invention, and is not limited thereto.
  • the deep learning model according to the present invention can perform multi-task learning and analysis through a structure with multiple heads, and can optimize multiple tasks or specific tasks by increasing data efficiency.
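  • The following is a minimal PyTorch sketch of such a multi-head structure, with a shared backbone and three task-specific heads; layer sizes and class counts are illustrative assumptions.

```python
# Hedged sketch: shared backbone with three task heads for multi-task sleep analysis.
import torch
import torch.nn as nn

class MultiHeadSleepModel(nn.Module):
    def __init__(self, in_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.stage_head = nn.Linear(hidden, 4)   # head 1: sleep stages (Wake/Light/Deep/REM)
        self.apnea_head = nn.Linear(hidden, 3)   # head 2: no event / apnea / hypopnea
        self.snore_head = nn.Linear(hidden, 2)   # head 3: snoring / not snoring

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)                     # features shared by all tasks
        return self.stage_head(h), self.apnea_head(h), self.snore_head(h)
```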
  • Existing sleep analysis models predict sleep stages using ECG (electrocardiogram) or HRV (heart rate variability) as input, whereas the present invention converts sleep sound information into the frequency domain, a spectrogram, or a Mel spectrogram and uses this as input to proceed with sleep stage analysis and inference. Therefore, unlike existing sleep analysis models, because sleep sound information converted into frequency-domain information, a spectrogram, or a Mel spectrogram is used as input, the sleep stage can be obtained in real time through analysis of the specific characteristics of the sleep pattern.
  • Figures 3A and 3B are graphs verifying the performance of the sleep analysis method according to the present invention, comparing polysomnography results (PSG results) with the analysis results using the AI algorithm according to the present invention (AI results).
  • the sleep analysis results obtained according to the present invention are not only consistent with polysomnography, but also contain more precise and meaningful information related to sleep stages (Wake, Light, Deep, REM).
  • the hypnogram shown at the bottom of Figure 4 indicates, in 30-second increments, the probability of belonging to each of the four classes (Wake, Light, Deep, REM) when predicting the sleep stage from the user's sleep sound information.
  • the four classes refer to the awake state, light sleep state, deep sleep state, and REM sleep state, respectively.
  • Figure 3C is a graph verifying the performance of the sleep analysis method according to the present invention, comparing polysomnography (PSG) results related to sleep apnea and hypoventilation with the analysis results (AI results) using the AI algorithm according to the present invention.
  • the hypnogram shown at the bottom of FIG. 3C indicates the probability of which of the two diseases (sleep apnea and hypoventilation) it belongs to in 30-second increments when predicting a sleep disease by receiving user sleep sound information.
  • the sleep state information obtained according to the present invention not only closely matches polysomnography, but also contains more precise analysis information related to apnea and hypoventilation.
  • the present invention can analyze the user's sleep in real time and identify the point where sleep disorders (sleep apnea, sleep hyperventilation, sleep hypopnea) occur. If stimulation (tactile, auditory, olfactory, etc.) is provided to the user at the moment the sleep disorder occurs, the sleep disorder may be temporarily alleviated. In other words, the present invention can stop the user's sleep disorder and reduce the frequency of sleep disorder based on accurate event detection related to the sleep disorder. In addition, according to the present invention, there is an effect that very accurate sleep analysis is possible by performing sleep analysis in a multimodal manner.
  • one or more data arrays regarding the user's sleep may be generated based on the user's sleep information.
  • One or more data arrays may be scalar data, vector data, matrix data, or tensor data.
  • Scalar data refers to a single number and can be dimensionless data.
  • Vector data can be expressed as a one-dimensional array of numbers and can represent various dimensions.
  • Matrix data is a two-dimensional array of numbers and can be composed of rows and columns.
  • Tensor data refers to an array of three or more dimensions and can be organized to include depth, rows, and columns.
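  • By way of illustration, the following numpy sketch builds each of the four array kinds from hypothetical sleep information; the shapes are examples only.

```python
# Hedged sketch: scalar, vector, matrix, and tensor arrays from sleep information.
import numpy as np

sleep_score = np.float64(82.5)                 # scalar: a single dimensionless number
stage_per_epoch = np.array([0, 1, 2, 2, 3])    # vector: one stage label per epoch
hypnodensity = np.random.rand(5, 4)            # matrix: rows = epochs, columns = stage probabilities
week_of_nights = np.random.rand(7, 5, 4)       # tensor: depth = nights, then epochs x stages
```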
  • one or more data arrays regarding the user's sleep can be generated for input to generative artificial intelligence that generates content.
  • One or more data arrays regarding the user's sleep are generated based on the acquired user's sleep information.
  • One or more data arrays regarding the user's sleep can be created using a lookup table based on the acquired sleep information, by using the acquired sleep information as input to a previously trained deep learning model, or by using the user's sleep information as input to a large-scale language model.
  • the user's data array regarding sleep may include preference information related to the user's sleep, the user's sleep indicator information, and the user's sleep score.
  • the user's sleep-related features of the present invention may be generated based on preference information associated with the user's sleep.
  • preference information related to sleep may include at least one of information on factors that affect the user's sleep quality or emotions after sleep.
  • information on factors that affect the user's emotions after sleep may include the temperature, humidity, sound, light, head and body position, scent, air quality, health functional foods taken, cosmetics used, and hormone levels associated with the sleeping environment.
  • the temperature of the sleeping environment may be a numerical expression of the rate of change in the degree to which the user feels comfortable after sleeping depending on the temperature of the sleeping environment.
  • humidity may be a numerical expression of the rate of change in the degree to which the user feels comfortable after sleeping depending on the humidity of the sleeping environment.
  • the sound may be a numerical expression of the rate of change in the level of comfort the user feels after sleeping, depending on the level of noise in the sleeping environment and the background sounds in the sleeping environment.
  • light may be a numerical expression of the rate of change in the degree to which the user feels comfortable after sleeping, depending on the light amount, light temperature, and light pattern of the sleeping environment.
  • light may be a numerical expression of the rate of change in the degree to which the user feels refreshed after sleeping, depending on the amount of light, light temperature, and light pattern of the sleeping environment.
  • the position of the head and body may be a numerical expression of the rate of change in the comfort felt by the user while sleeping when the angle between the center of gravity of the head and the body is 10, 15, 20, or 30 degrees.
  • the scent may be a numerical expression of the rate of change in the degree to which the user feels comfortable after sleeping, depending on the scent of the sleeping environment.
  • air quality may be a numerical expression of the rate of change in the degree to which the user feels comfortable after sleeping due to air humidity, degree of pollution, concentration of fine dust, etc. in the sleeping environment.
  • a health functional food may be a numerical expression of the rate of change in the level of comfort the user feels after sleeping when the user takes the health functional food before going to sleep.
  • cosmetics may be a numerical expression of the rate of change in the level of comfort the user feels after sleeping when the user uses the cosmetics before sleeping and then goes to bed.


Abstract

The present invention relates to a method, a device, a computer program, and a computer-readable recording medium for generating and providing sleep content based on a user's sleep information and, more particularly, to generating and providing a sleep image or sleep video by which a user's previous night of sleep can be converted into an image or video and then provided visually to the user, to generating and providing guided imagery information, or to generating and providing, by means of generative artificial intelligence, sleep content based on the user's sleep information.
PCT/KR2023/014988 2022-10-13 2023-09-27 Procédé, dispositif, programme informatique et support d'enregistrement lisible par ordinateur permettant de générer et de fournir un contenu de sommeil en fonction d'informations de sommeil de l'utilisateur WO2024080647A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020247000275A KR20240052740A (ko) 2022-10-13 2023-09-27 심상 유도 정보 제공 및 수면 상태 정보 획득 방법, 장치, 컴퓨터 프로그램 및 컴퓨터 판독 가능한 기록매체
KR1020237041711A KR20240052723A (ko) 2022-10-13 2023-09-27 사용자 수면 정보 기반의 수면 콘텐츠 생성 및 제공방법, 장치, 컴퓨터 프로그램 및 컴퓨터 판독 가능한 기록매체

Applications Claiming Priority (16)

Application Number Priority Date Filing Date Title
KR20220131829 2022-10-13
KR10-2022-0131829 2022-10-13
KR10-2022-0134472 2022-10-18
KR20220134472 2022-10-18
KR10-2022-0149003 2022-11-09
KR20220149003 2022-11-09
KR10-2023-0026104 2023-02-27
KR20230026104 2023-02-27
KR10-2023-0044340 2023-04-04
KR20230044340 2023-04-04
KR10-2023-0068718 2023-05-26
KR20230068718 2023-05-26
KR10-2023-0071935 2023-06-02
KR20230071935 2023-06-02
KR10-2023-0107174 2023-08-16
KR20230107174 2023-08-16

Publications (1)

Publication Number Publication Date
WO2024080647A1 true WO2024080647A1 (fr) 2024-04-18

Family

ID=90669548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/014988 WO2024080647A1 (fr) 2022-10-13 2023-09-27 Procédé, dispositif, programme informatique et support d'enregistrement lisible par ordinateur permettant de générer et de fournir un contenu de sommeil en fonction d'informations de sommeil de l'utilisateur

Country Status (2)

Country Link
KR (1) KR20240052723A (fr)
WO (1) WO2024080647A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6550257B2 (ja) * 2015-04-13 2019-07-24 日本電信電話株式会社 睡眠状態表示装置、方法及びプログラム
KR20200061294A (ko) * 2018-11-23 2020-06-02 주식회사 스칼라웍스 머신 러닝을 이용하여 은닉 이미지를 추론하는 방법 및 장치
KR20220078457A (ko) * 2020-12-03 2022-06-10 한국전자기술연구원 사용자 맞춤 모듈형 수면관리 시스템 및 방법
KR20220082146A (ko) * 2020-12-09 2022-06-17 주식회사 닥터송 인공지능 및 자연어 처리 기반의 의료 콘텐츠 저작 및 관리 시스템
KR102429256B1 (ko) * 2021-12-31 2022-08-04 주식회사 에이슬립 음향 정보를 통해 사용자의 수면 상태를 분석하기 위한 방법, 컴퓨팅 장치 및 컴퓨터 프로그램

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100456730B1 (ko) 2001-10-18 2004-11-10 이동민 취침유도기 및 수면유도방법
KR20220015835A (ko) 2020-07-31 2022-02-08 삼성전자주식회사 수면 질을 평가하기 위한 전자 장치 및 그 전자 장치에서의 동작 방법

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6550257B2 (ja) * 2015-04-13 2019-07-24 日本電信電話株式会社 睡眠状態表示装置、方法及びプログラム
KR20200061294A (ko) * 2018-11-23 2020-06-02 주식회사 스칼라웍스 머신 러닝을 이용하여 은닉 이미지를 추론하는 방법 및 장치
KR20220078457A (ko) * 2020-12-03 2022-06-10 한국전자기술연구원 사용자 맞춤 모듈형 수면관리 시스템 및 방법
KR20220082146A (ko) * 2020-12-09 2022-06-17 주식회사 닥터송 인공지능 및 자연어 처리 기반의 의료 콘텐츠 저작 및 관리 시스템
KR102429256B1 (ko) * 2021-12-31 2022-08-04 주식회사 에이슬립 음향 정보를 통해 사용자의 수면 상태를 분석하기 위한 방법, 컴퓨팅 장치 및 컴퓨터 프로그램

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
HAI HONG TRAN: "Prediction of Sleep Stages Via Deep Learning Using Smartphone Audio Recordings in Home Environments: Model Development and Validation", JOURNAL OF MEDICAL INTERNET RESEARCH, JMIR PUBLICATIONS, CA, vol. 25, 1 June 2023 (2023-06-01), CA , pages e46216, XP093159265, ISSN: 1438-8871, DOI: 10.2196/46216 *
HONG JOONKI, HAI TRAN, JINHWAN JEONG, HYERYUNG JANG, IN-YOUNG YOON, JUNG KYUNG HONG, JEONG-WHUN KIM: "0348 SLEEP STAGING USING END-TO-END DEEP LEARNING MODEL BASED ON NOCTURNAL SOUND FOR SMARTPHONES", SLEEP, vol. 45, no. Suppl 1, 25 May 2022 (2022-05-25), pages A156, XP093131680, DOI: 10.1101/2021.10.13.21264974 *
HONG JOONKI, TRAN HAI HONG, JUNG JINHWAN, JANG HYERYUNG, LEE DONGHEON, YOON IN-YOUNG, HONG JUNG KYUNG, KIM JEONG-WHUN: "End-to-End Sleep Staging Using Nocturnal Sounds from Microphone Chips for Mobile Devices", NATURE AND SCIENCE OF SLEEP, DOVE MEDICAL PRESS, vol. 14, 1 June 2022 (2022-06-01), pages 1187 - 1201, XP093131683, ISSN: 1179-1608, DOI: 10.2147/NSS.S361270 *
HONG JUNG KYUNG, LEE TAEYOUNG, DELOS REYES ROBEN DEOCAMPO, HONG JOONKI, TRAN HAI HONG, LEE DONGHEON, JUNG JINHWAN, YOON IN-YOUNG: "Confidence-Based Framework Using Deep Learning for Automated Sleep Stage Scoring", NATURE AND SCIENCE OF SLEEP, DOVE MEDICAL PRESS, vol. Volume 13, 1 January 2021 (2021-01-01), pages 2239 - 2250, XP093131678, ISSN: 1179-1608, DOI: 10.2147/NSS.S333566 *
JONGMOK KIM: "Sound-Based Sleep Staging By Exploiting Real-World Unlabeled Data. ", T ICLR 2023. WORKSHOP ON TIME SERIES REPRESENTATION LEARNING FOR HEALTH (TSRL4H), 2 March 2023 (2023-03-02), pages - 7, XP093159263 *
LE VU LINH, KIM DAEWOO, CHO EUNSUNG, JANG HYERYUNG, REYES ROBEN DELOS, KIM HYUNGGUG, LEE DONGHEON, YOON IN-YOUNG, HONG JOONKI, KIM: "Real-Time Detection of Sleep Apnea Based on Breathing Sounds and Prediction Reinforcement Using Home Noises: Algorithm Development and Validation", JOURNAL OF MEDICAL INTERNET RESEARCH, JMIR PUBLICATIONS, CA, vol. 25, 22 February 2023 (2023-02-22), CA , pages e44818, XP093131684, ISSN: 1438-8871, DOI: 10.2196/44818 *

Also Published As

Publication number Publication date
KR20240052723A (ko) 2024-04-23

Similar Documents

Publication Publication Date Title
WO2019177400A1 (fr) Équipement de réalité virtuelle pliable
Ostwald The semiotics of human sound
US7224282B2 (en) Control apparatus and method for controlling an environment based on bio-information and environment information
CN107106063A (zh) 智能音频头戴式耳机系统
CN107427716A (zh) 人类绩效优化与训练的方法及系统
WO2023128713A1 (fr) Procédé, appareil informatique, et programme informatique permettant d'analyser l'état de sommeil d'un utilisateur par l'intermédiaire d'informations sonores
US11141556B2 (en) Apparatus and associated methods for adjusting a group of users' sleep
CN113195031B (zh) 用于递送音频输出的系统和方法
JPWO2018074224A1 (ja) 雰囲気醸成システム、雰囲気醸成方法、雰囲気醸成プログラム、及び雰囲気推定システム
WO2024080647A1 (fr) Procédé, dispositif, programme informatique et support d'enregistrement lisible par ordinateur permettant de générer et de fournir un contenu de sommeil en fonction d'informations de sommeil de l'utilisateur
WO2023146271A1 (fr) Procédé d'analyse de sommeil sans contact basé sur l'intelligence artificielle (ia) et procédé de création d'environnement de sommeil en temps réel
KR101907090B1 (ko) 조명이 구비된 스피커
KR20240052740A (ko) 심상 유도 정보 제공 및 수면 상태 정보 획득 방법, 장치, 컴퓨터 프로그램 및 컴퓨터 판독 가능한 기록매체
WO2024080646A1 (fr) Procédé, appareil et système de création d'environnement par analyse de sommeil sans contact à base d'ia
WO2024096419A1 (fr) Procédé pour fournir une interface utilisateur graphique représentant des informations ou une évaluation du sommeil d'un utilisateur
US20240001068A1 (en) Mood adjusting method and system based on real-time biosensor signals from a subject
WO2024019567A1 (fr) Procédé, appareil et programme informatique pour générer un modèle d'analyse de sommeil prédisant un état de sommeil sur la base d'informations sonores
Flanagan et al. Future fashion–at the interface
CN112999037A (zh) 一种语音识别控制按摩床
WO2024058488A1 (fr) Système pour fournir un service de gestion de santé de sommeil en temps réel à l'aide d'une synchronisation d'ondes cérébrales à base d'ia et d'une commande de système nerveux autonome
JP2020103491A (ja) 環境制御システム及び環境制御方法
JP7233031B2 (ja) 環境制御システム及び環境制御方法
Nielsen et al. Beyond Vision. Moving and Feeling in Colour Illuminated Space
US20230256192A1 (en) Systems and methods for inducing sleep of a subject
WO2022244298A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23877575

Country of ref document: EP

Kind code of ref document: A1