WO2024010113A1 - System for producing video for mental and physical stability, and method therefor - Google Patents

System for producing video for mental and physical stability, and method therefor

Info

Publication number
WO2024010113A1
WO2024010113A1 (application PCT/KR2022/009814)
Authority
WO
WIPO (PCT)
Prior art keywords
information
sound
image
image information
attribute information
Prior art date
Application number
PCT/KR2022/009814
Other languages
French (fr)
Korean (ko)
Inventor
이상훈
Original Assignee
이상훈
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 이상훈 filed Critical 이상훈
Priority to PCT/KR2022/009814 priority Critical patent/WO2024010113A1/en
Publication of WO2024010113A1 publication Critical patent/WO2024010113A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring

Definitions

  • the present invention relates to an image production system and method for mental and physical stability, and in particular to a system and method that can relieve eye fatigue by filtering the images of a sound-bearing video into natural colors and relieve ear fatigue by filtering the sound into natural sounds.
  • the problem to be solved by the present invention is to provide an image production system and method for mental and physical stability.
  • the image production method for mental and physical stability is performed by a management server and may comprise the steps of: extracting image information included in basic image information and generating image attribute information by filtering the image information; extracting sound information included in the basic image information and generating sound attribute information by filtering the sound information; and generating final image information by matching the image attribute information and the sound attribute information, wherein the step of generating the final image information may include generating the final image information, in which the basic image information is reconstructed, by matching and merging the image attribute information, composed of natural colors obtained by filtering the image information, and the sound attribute information, composed of natural sounds obtained by filtering the sound information.
  • the step of generating the image attribute information includes analyzing the image information and iSoft information; if the image information and the iSoft information differ according to the analysis result, the image attribute information is generated by filtering the image information so that it matches the iSoft information, and if the image information and the iSoft information are the same according to the analysis result, the image attribute information is generated by filtering the image information so that the image information is maintained.
  • in the step of generating the sound attribute information, the sound attribute information may be generated by filtering the sound information using white noise and pink noise.
  • the step of generating the sound attribute information may include extracting keywords included in the sound information and filtering using the extracted keywords to generate the sound attribute information consisting of natural sounds.
  • an image production system for mental and physical stability includes a management server that extracts image information included in basic image information, filters the image information to generate image attribute information, extracts sound information included in the basic image information, filters the sound information to generate sound attribute information, and matches the image attribute information and the sound attribute information to generate final image information and transmit it to a user terminal; the management server can generate the final image information, in which the basic image information is reconstructed, by matching and merging the image attribute information, consisting of natural colors obtained by filtering the image information, and the sound attribute information, consisting of natural sounds obtained by filtering the sound information.
  • the program according to one embodiment of the present invention is combined with a computer, which is hardware, and is stored in a computer-readable recording medium so that the method of producing an image for mental and physical stability can be performed.
  • an image containing sound is filtered into natural colors to relieve eye fatigue, and noise in the sound is filtered to natural sounds to relieve ear fatigue, thereby creating an image for the user's mental and physical stability.
  • according to the present invention, when watching a video produced to relieve ear fatigue and eye fatigue during activities that require concentration, such as Pilates, yoga, and home training, concentration is improved and the muscles relax more comfortably, further increasing the effect of the exercise.
  • Figure 1 is a diagram for explaining an image production system for mental and physical stability according to an embodiment of the present invention.
  • Figure 2 is a diagram for explaining an image production method for mental and physical stability according to an embodiment of the present invention.
  • Figure 3 is a diagram for explaining the step of filtering image information shown in Figure 2.
  • Figure 4 is a diagram for explaining the step of filtering sound information shown in Figure 2.
  • Figure 1 is a diagram for explaining an image production system for mental and physical stability according to an embodiment of the present invention.
  • the video production system for mental and physical stability may include a user terminal 10 and a management server 20.
  • the wireless communication network may support various communication methods, such as WLAN (Wireless LAN), DLNA (Digital Living Network Alliance), WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), GSM (Global System for Mobile communication), CDMA (Code Division Multiple Access), CDMA2000, EV-DO (Enhanced Voice-Data Optimized or Enhanced Voice-Data Only), WCDMA (Wideband CDMA), HSDPA (High Speed Downlink Packet Access), HSUPA (High Speed Uplink Packet Access), IEEE 802.16, LTE (Long Term Evolution), LTE-A (Long Term Evolution-Advanced), WMBS (Wireless Mobile Broadband Service), BLE (Bluetooth Low Energy), Zigbee, RF (Radio Frequency), and LoRa (Long Range), but is not limited to these; various widely known wireless and mobile communication methods may also be applied.
  • the user terminal 10 can receive final image information through communication with the management server 20 and upload basic image information corresponding to the final image information to be provided.
  • the final image information is information reconstructed by filtering the images and sound based on the basic image information; for example, the images may be filtered using iSoft information and the sound reconstructed using white noise and pink noise, but it is not limited to this.
  • the basic image information may be a video including various types of images or moving images including sound, but is not limited to this and may not include sound.
  • the user terminal 10 can select and receive final video information corresponding to the basic video information to be viewed from the management server 20.
  • the user terminal 10 may receive customized final video information from the management server 20.
  • the user terminal 10 can select basic video information to be viewed, transmit it to the management server 20, and then receive final video information in which the basic video information is reconstructed from the management server 20.
  • the user terminal 10 may use a service in which the final video information is provided, then generate feedback information about this and transmit it to the management server 20.
  • the feedback information may include information such as satisfaction, accuracy, and reliability regarding the service in which the final video information is provided.
  • the feedback information can be generated and transmitted after using the service in which the final video information is provided, but alternatively, it can also be transmitted before or during use.
  • the user terminal 10 may receive, from the management server 20, event information including advertising information or information about discounts or events for the service in which the final video information is provided, but is not limited to this.
  • the user terminal 10 may output the current operating state and final image information visually and audibly.
  • the user terminal 10 may first complete membership registration using user information in order to receive services from the management server 20, but is not limited to this.
  • user information may include, but is not limited to, personal information such as name, contact information, and occupation, as well as additional information such as tastes and preferred music and videos.
  • the user terminal 10 may consist of at least one, but is not limited to this.
  • the user terminal 10 may include various portable electronic communication devices that support communication with the management server 20.
  • the user terminal 10 may be a separate smart device and may include various portable terminals such as a smartphone, a PDA (Personal Digital Assistant), a tablet, a wearable device, a watch-type terminal (smartwatch), a glass-type terminal (smart glasses, an HMD (Head Mounted Display), etc.), and various IoT (Internet of Things) terminals; alternatively, it may include non-portable electronic communication devices such as desktop computers and workstation computers.
  • the user terminal 10 can operate using an application program (or application) in the present disclosure, and such an application program can be downloaded from an external server or the management server 20 through wireless communication.
  • the management server 20 may include a communication unit 22, a storage unit 24, a monitoring unit 26, and a server control unit 28.
  • the communication unit 22 may receive basic image information from the user terminal 10, or transmit final image information in which the basic image information is reconstructed to the user terminal 10.
  • the storage unit 24 can store data transmitted and received from the user terminal 10 through a wireless communication network.
  • the storage unit 24 can store data supporting various functions of the management server 20.
  • the storage unit 24 may store a number of application programs (application programs or applications) running on the management server 20, data for operation of the management server 20, and commands. At least some of these applications may be downloaded from an external server through wireless communication.
  • the server control unit 28 can reconstruct the basic image information and generate the final image information. That is, the server control unit 28 can filter the images of the basic image information into natural colors and filter the sound corresponding to those images into natural sounds, reconstructing the basic image information to generate the final image information, as sketched below.
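  • the overall flow can be summarized in code. The following is a minimal, hypothetical sketch of the reconstruction pipeline; the function names, the color-blending rule, and the noise mix are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Hypothetical stand-in for one "natural color" target; illustrative only.
NATURAL_RGB = np.array([163.0, 204.0, 163.0])

def filter_to_natural_colors(frames, strength=0.5):
    """Blend every frame toward a natural-color target (image attribute information)."""
    return (1 - strength) * frames + strength * NATURAL_RGB

def filter_to_natural_sounds(audio, noise_level=0.05, seed=0):
    """Mix low-level white noise into the audio (sound attribute information)."""
    rng = np.random.default_rng(seed)
    return audio + noise_level * rng.standard_normal(audio.shape)

def produce_final_video(frames, audio):
    """Match and merge the filtered streams to reconstruct the basic video."""
    return filter_to_natural_colors(frames), filter_to_natural_sounds(audio)

# Toy usage: 10 RGB frames of 4x4 pixels and one second of 16 kHz mono audio.
frames = np.random.default_rng(1).uniform(0, 255, (10, 4, 4, 3))
audio = np.zeros(16000)
final_frames, final_audio = produce_final_video(frames, audio)
```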
  • the server control unit 28 may calculate image information for an image included in the basic image information and then compare, analyze, and filter the calculated image information and iSoft information to generate image attribute information.
  • the server control unit 28 can extract images for each frame from basic image information and calculate image information for the extracted images.
  • for example, the server control unit 28 may divide the basic image information into units of at least 5 seconds, set a bounding box for each frame, extract predicted coordinate information for the image located within the set bounding box, and then extract the image by labeling the predicted coordinate information.
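  • a minimal sketch of this frame-sampling and bounding-box labeling step follows, assuming OpenCV for video decoding; the "predicted coordinates" come from a placeholder rule, since the patent does not name a specific detector.

```python
import cv2  # OpenCV, assumed here for video decoding

def sample_labeled_boxes(video_path, unit_seconds=5):
    """Sample one frame per unit of at least 5 seconds and label a bounding box."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = int(fps * unit_seconds)  # frames per 5-second unit

    labeled = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            h, w = frame.shape[:2]
            # Placeholder "prediction": a centered box covering half the frame.
            box = (w // 4, h // 4, w // 2, h // 2)  # (x, y, width, height)
            labeled.append({"frame_index": index, "bbox": box})
        index += 1
    cap.release()
    return labeled
```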
  • that is, the server control unit 28 compares and analyzes the calculated image information and the iSoft information; if the image information and the iSoft information differ, it filters the image information to match the iSoft information and generates the image attribute information.
  • iSoft information is information including standard values for color, brightness, and saturation, and may include RGB values that relieve eye fatigue.
  • iSoft information is preset information that can help recovery from mental and physical fatigue, including eye fatigue, and may have a green (2.5G) color with a luminance of 8 and a saturation of 1.5 to 2.
  • for example, the server control unit 28 can filter the image information into natural colors using the iSoft values RGB(163, 204, 163), RGB(181, 214, 146), and RGB(214, 230, 245), generated using a high luminance value of 8 and a low saturation value of 2.
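  • the color filtering might look like the sketch below, which pulls each pixel toward the nearest of the three listed iSoft colors; the nearest-target blending rule and the strength parameter are assumptions, since the patent only names the target values.

```python
import numpy as np

# The three iSoft target colors named above (high luminance, low saturation).
ISOFT_RGB = np.array([[163, 204, 163],
                      [181, 214, 146],
                      [214, 230, 245]], dtype=np.float64)

def filter_image_to_isoft(pixels, strength=0.6):
    """Pull each RGB pixel toward its nearest iSoft target color.

    `pixels` is an (H, W, 3) array; the blending rule is illustrative.
    """
    flat = pixels.reshape(-1, 3).astype(np.float64)
    # Squared distance from every pixel to every target color.
    d2 = ((flat[:, None, :] - ISOFT_RGB[None, :, :]) ** 2).sum(axis=2)
    targets = ISOFT_RGB[d2.argmin(axis=1)]
    blended = (1 - strength) * flat + strength * targets
    return blended.reshape(pixels.shape).astype(np.uint8)

# Toy usage on a 2x2 test image.
image = np.array([[[250, 20, 20], [20, 250, 20]],
                  [[20, 20, 250], [200, 200, 200]]], dtype=np.uint8)
print(filter_image_to_isoft(image))
```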
  • the server control unit 28 can also compare and analyze the calculated image information and the iSoft information and, if the image information and the iSoft information are the same, generate image attribute information in which the image information is maintained.
  • the server control unit 28 may automatically calculate image information included in basic image information by repeatedly learning using deep learning techniques or machine learning techniques based on big data.
  • the server control unit 28 may classify images according to preset conditions and calculate image information.
  • image information may be calculated by classifying an image based on at least one of nature, water, trees, outdoors or trees, nature, outdoors, forest or trees, grass, outdoors, or plants.
  • the server control unit 28 may calculate sound information for the basic image information and then filter the calculated sound information to generate sound attribute information consisting of natural sounds.
  • the server control unit 28 may generate sound attribute information consisting of natural sounds by filtering the calculated sound information using white noise and pink noise.
  • white noise is noise containing all frequency components, and its energy is distributed in all frequency ranges.
  • white noise can be an important sound source when synthesizers are used to produce natural sounds (sounds of waves, wind, waterfalls, insects, birds, etc.).
  • pink noise is reproduced at an even level across the 20 Hz to 20 kHz frequency band; it can be a noise signal that corrects white noise by attenuating it by 3 dB per octave, so that all frequency bands are heard at the same level by the ear.
  • pink noise refers to a noise signal created artificially to measure the level of a signal, and may be a noise level that is reproduced evenly across the reproduction frequency band rather than a single tone.
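  • a short sketch of generating the two noise types described above follows; the FFT-based 1/f shaping, which attenuates power by about 3 dB per octave, is a standard construction rather than something specified in the patent.

```python
import numpy as np

def white_noise(n, seed=0):
    """White noise: equal energy across all frequencies."""
    return np.random.default_rng(seed).standard_normal(n)

def pink_noise(n, seed=0):
    """Pink noise: white noise shaped to a 1/f power spectrum.

    Scaling each FFT bin's amplitude by 1/sqrt(f) attenuates power by
    about 3 dB per octave, as described above.
    """
    spectrum = np.fft.rfft(np.random.default_rng(seed).standard_normal(n))
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                 # avoid dividing by zero at DC
    shaped = spectrum / np.sqrt(freqs)
    out = np.fft.irfft(shaped, n)
    return out / np.max(np.abs(out))    # normalize to [-1, 1]

one_second = 44100                      # samples at 44.1 kHz
w, p = white_noise(one_second), pink_noise(one_second)
```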
  • the server control unit 28 may automatically calculate sound information included in basic image information by repeatedly learning using deep learning techniques or machine learning techniques based on big data.
  • the server control unit 28 may filter the sound information to remove noise included in the basic image information before generating sound attribute information consisting of natural sounds. At this time, noise removal may be performed simultaneously when basic image information is preprocessed.
  • the server control unit 28 may filter image information and then filter sound information, but may not be limited to this.
  • the server control unit 28 may filter sound information and then filter image information, or may filter sound information and image information at the same time.
  • the server control unit 28 may classify sounds according to preset conditions and calculate sound information.
  • sound information can be calculated by classifying sounds based on at least one of the sounds of water, insects, birds or waterfalls, wind, birds or wind, insects, and waterfalls.
  • the server control unit 28 may preprocess the basic video information before or simultaneously with filtering the image information and sound information. At this time, noise may be removed simultaneously with preprocessing.
  • the server control unit 28 may calculate sound information included in the basic image information, extract keywords included in the calculated sound information, and filter using the extracted keywords to generate sound attribute information consisting of natural sounds.
  • for example, the server control unit 28 may extract the words included in the sound information and tokenize them using a space-removal filter and a special-character-removal filter so that the extracted words are divided into minimal units of meaningful words; it may then perform a refinement process that highlights the meaning of the remaining words by removing noise data, such as words with a low frequency of occurrence or words repeated excessively in the tokenized data, and generate the sound information by normalizing the refined data, as in the sketch below.
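  • a minimal sketch of this tokenization, refinement, and normalization chain in plain Python; the frequency thresholds are illustrative assumptions.

```python
import re
from collections import Counter

def extract_keywords(transcript, min_count=2, max_ratio=0.5):
    """Tokenize, refine, and normalize words from sound-derived text.

    min_count / max_ratio are illustrative thresholds: words seen too
    rarely or too often are treated as noise data and removed.
    """
    # Tokenization: strip special characters and split on whitespace.
    tokens = re.sub(r"[^\w\s]", " ", transcript.lower()).split()

    # Refinement: drop words with low frequency or excessive repetition.
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    kept = [t for t in tokens
            if counts[t] >= min_count and counts[t] / total <= max_ratio]

    # Normalization: deduplicate while preserving first-seen order.
    return list(dict.fromkeys(kept))

print(extract_keywords("birds, birds singing... wind; wind in the trees"))
# -> ['birds', 'wind']
```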
  • the server control unit 28 can match the image attribute information, consisting of natural colors filtered based on the basic image information, with the sound attribute information, consisting of natural sounds filtered based on the basic image information, and merge them to generate final image information in which the basic image information is reconstructed.
  • the server control unit 28 may also reconstruct the basic image information by matching and merging the basic sound information with the image attribute information consisting of natural colors, or by matching and merging the basic image information with the sound attribute information consisting of natural sounds.
  • the server control unit 28 may match the image attribute information with sound attribute information consisting of natural sounds generated by filtering only sound information added separately at the user's request, and merge them to reconstruct the basic image information and generate the final image information.
  • the server control unit 28 may match the image attribute information with sound attribute information consisting of natural sounds generated by filtering sound information in which the user's voice is recorded, and merge them to reconstruct the basic image information and generate the final image information.
  • the server control unit 28 may reconstruct the basic video information using only image attribute information to generate final video information.
  • the server control unit 28 may reconfigure the basic image information to suit the user, generate customized final image information, and provide it to the user.
  • the server control unit 28 may generate a feedback control signal that can improve the service providing the final video information in response to the feedback signal received from the user terminal 10.
  • the server control unit 28 may generate a feedback control signal containing event information to further increase the use of the service providing the final image information of the user terminal 10. At this time, event information may be included in the final video information.
  • such a management server 20 may be implemented by hardware circuits (e.g., CMOS-based logic circuits), firmware, software, or a combination thereof.
  • it can be implemented using transistors, logic gates, and electronic circuits in the form of various electrical structures.
  • the management server 20 with this structure matches the image attribute information consisting of natural colors with the sound attribute information consisting of natural sounds, both filtered based on the basic image information, and then merges them to generate final video information in which the basic video information is reconstructed.
  • the user watching the final video information is relieved of ear fatigue and eye fatigue, and can further increase concentration while maintaining mental and physical stability.
  • FIG. 2 is a diagram for explaining a method of producing an image for mental and physical stability according to an embodiment of the present invention.
  • FIG. 3 is a diagram for explaining the step of filtering the image information shown in FIG. 2.
  • FIG. 4 is a diagram for explaining the step of filtering the sound information shown in FIG. 2.
  • the management server 20 can preprocess basic image information (S100).
  • the management server 20 may remove noise simultaneously with preprocessing.
  • the management server 20 can detect damage, unclear areas, brightness, and sharpness issues in the image information included in the basic image information and automatically remove or correct them, or identify deviations in the sound, sound range, and sound balance of the sound information and automatically remove or correct them.
  • the basic image information may be information uploaded from the user terminal 10, but is not limited thereto.
  • the management server 20 can calculate image information included in the basic image information (S110).
  • the management server 20 can extract images for each frame from basic image information and calculate image information for the extracted images.
  • for example, the management server 20 may divide the basic image information into units of at least 5 seconds, set a bounding box for each frame, extract predicted coordinate information for the image located within the set bounding box, extract the image by labeling the predicted coordinate information, and calculate image information for the extracted image.
  • the management server 20 may automatically calculate image information included in basic image information by repeatedly learning using deep learning techniques or machine learning techniques based on big data.
  • the management server 20 may classify images according to preset conditions and calculate image information.
  • the management server 20 may filter the image information into natural colors to generate image attribute information (S130, S140).
  • the management server 20 may filter the image information so that the image information is identical to the iSoft information.
  • the management server 20 can calculate sound information (S160).
  • the management server 20 can automatically calculate sound information included in basic image information by repeatedly learning using deep learning techniques or machine learning techniques based on big data.
  • the management server 20 may classify sounds according to preset conditions and calculate sound information.
  • the management server 20 can generate sound attribute information using white noise and pink noise based on the sound information (S170).
  • for example, the management server 20 can generate sound attribute information consisting of natural sounds by using white noise and pink noise (see FIG. 4(b)) based on noise-removed sound information (see FIG. 4(a)).
  • the management server 20 may classify sounds according to preset conditions and calculate sound information.
  • the management server 20 may calculate sound information included in the basic image information, extract keywords included in the calculated sound information, and filter using the extracted keywords to generate sound attribute information consisting of natural sounds.
  • the management server 20 can match the filtered image attribute information consisting of natural colors with the sound attribute information consisting of natural sounds, and merge them to reconstruct the basic image information and generate the final image information (S180, S190).
  • the management server 20 filters the image information of the basic image information into natural colors to relieve eye fatigue and filters the sound information into natural sounds to relieve ear fatigue, thereby creating a video that can improve the user's mental and physical stability and concentration.
  • the management server 20 may also reconstruct the basic image information by matching and merging the basic sound information with image attribute information consisting of natural colors, or by matching and merging the basic image information with sound attribute information consisting of natural sounds.
  • the management server 20 can generate image attribute information in which the image information is maintained (S200).
  • the management server 20 can also reconstruct the basic video information using only the image attribute information consisting of natural colors obtained by filtering the image information, and generate the final image information.
  • the management server 20 may match the image attribute information with sound attribute information consisting of natural sounds generated by filtering only sound information added separately at the user's request, and merge them to reconstruct the basic image information and generate the final image information.
  • the management server 20 may match the image attribute information with sound attribute information consisting of natural sounds generated by filtering sound information in which the user's voice is recorded, and merge them to reconstruct the basic image information and generate the final image information.
  • the management server 20 may reorganize the basic image information to suit the user, generate customized final image information, and provide it to the user.
  • an EEG evaluation test of the improvement in concentration from the final image information produced for mental and physical stability according to an embodiment of the present invention was conducted as follows.
  • the concentration index refers to an index indicating a state in which performance remains constant, stress does not increase, and the brain remains clear; the higher the index, the higher the concentration.
  • the test conditions were that the subject, measured with EEG equipment based on the 10-20 system, had no physical or mental disabilities, did not exercise vigorously for one day before the test, and did not consume drugs or caffeine for three hours before the test; testing may be performed under these conditions, but is not limited to them.
  • the basic image information may be an image containing various types of images or videos including sound.
  • the final image information may be information reconstructed by filtering the images and sounds based on the basic image information.
  • when the subject watched the basic image information (comparative example), the concentration index measured by EEG was 87, and when the subject watched the final image information (example), the concentration index was 117; the concentration index value was thus measured to be 34.48% higher than in the comparative example.
  • when the subject watched the basic image information (comparative example), the concentration index measured by EEG was 121, and when the subject watched the final image information (example), the concentration index was 160; the concentration index value was thus measured to be 32.23% higher than in the comparative example.
  • when the subject watched the basic image information (comparative example), the concentration index measured by EEG was 97, and when the subject watched the final image information (example), the concentration index was 121; the concentration index value was thus measured to be 24.74% higher than in the comparative example.
  • when the subject watched the basic image information (comparative example), the concentration index measured by EEG was 101, and when the subject watched the final image information (example), the concentration index was 133; the concentration index value was thus measured to be 31.68% higher than in the comparative example.
  • when the subject watched the basic image information (comparative example), the concentration index measured by EEG was 108, and when the subject watched the final image information (example), the concentration index was 147; the concentration index value was thus measured to be 36.11% higher than in the comparative example.
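  • the reported improvements can be checked directly from the index pairs; a quick computation of the relative increase, assuming the usual formula (example − comparative) / comparative:

```python
pairs = [(87, 117), (121, 160), (97, 121), (101, 133), (108, 147)]
for comparative, example in pairs:
    gain = (example - comparative) / comparative * 100
    print(f"{comparative} -> {example}: +{gain:.2f}%")
# 87 -> 117: +34.48%   121 -> 160: +32.23%   97 -> 121: +24.74%
# 101 -> 133: +31.68%  108 -> 147: +36.11%
```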
  • as a result, concentration was further improved when watching the embodiment of the present invention compared to the comparative example. That is, when watching the final video information provided by reconstructing the basic video information according to the present invention, eye fatigue is reduced by the filtered video, ear fatigue is reduced by the filtered sound, and mental and physical stability and concentration are improved.
  • the steps of the method or algorithm described in connection with embodiments of the present invention may be implemented directly in hardware, implemented as a software module executed by hardware, or a combination thereof.
  • a software module may reside in RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other type of computer-readable recording medium well known in the art to which the invention pertains.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides a system for producing video for mental and physical stability, and a method therefor. The method, performed by a management server, for producing a video for mental and physical stability may comprise the steps of: extracting image information included in basic image information and generating image attribute information by filtering the image information; extracting sound information included in the basic image information and generating sound attribute information by filtering the sound information; and generating final image information by matching the image attribute information and the sound attribute information, wherein the step of generating the final image information may comprise a step of generating the final image information in which the basic image information is reconstructed by matching and merging the image attribute information, composed of natural colors and obtained by filtering the image information, and the sound attribute information, composed of natural sounds and obtained by filtering the sound information.

Description

Video production system and method for mental and physical stability
The present invention relates to a video production system and method for mental and physical stability, and in particular to a video production system and method that can relieve eye fatigue by filtering the images of a sound-bearing video into natural colors and relieve ear fatigue by filtering the sound into natural sounds.
With recent developments in mobile communication and hardware/software technology, various types of electronic devices, such as mobile communication terminals, smartphones, tablet PCs (Personal Computers), PDAs (Personal Digital Assistants), electronic organizers, laptops, wearable devices, IoT (Internet of Things) devices, and audible devices, are in wide use.
Recently, with the rapid spread of electronic devices, there has been a transition from the existing voice communication service, centered on simple voice calls, to data communication services centered on data communication, and various types of services have been proposed.
For example, a user can view web pages on an electronic device using the Internet, or install an application to receive a desired service (e.g., a video service such as video sharing or video calling) anywhere through the electronic device (see Korean Patent Publication No. 10-2017-0037545).
The matters described as background art above are only intended to improve understanding of the background of the present invention and should not be taken as an acknowledgment that they correspond to prior art already known to those skilled in the art.
The problem to be solved by the present invention is to provide a video production system and method for mental and physical stability.
The problems to be solved by the present invention are not limited to the problems mentioned above, and other problems not mentioned will be clearly understood by those skilled in the art from the description below.
A video production method for mental and physical stability according to an embodiment of the present invention for solving the above problem is a video production method performed by a management server, and may comprise the steps of: extracting image information included in basic image information and generating image attribute information by filtering the image information; extracting sound information included in the basic image information and generating sound attribute information by filtering the sound information; and generating final image information by matching the image attribute information and the sound attribute information, wherein the step of generating the final image information may include generating the final image information, in which the basic image information is reconstructed, by matching and merging the image attribute information, composed of natural colors obtained by filtering the image information, and the sound attribute information, composed of natural sounds obtained by filtering the sound information.
In one embodiment of the present invention, the step of generating the image attribute information may comprise analyzing the image information and iSoft information; if the image information and the iSoft information differ according to the analysis result, generating the image attribute information by filtering the image information so that it matches the iSoft information; and if the image information and the iSoft information are the same according to the analysis result, generating the image attribute information by filtering the image information so that the image information is maintained.
In one embodiment of the present invention, in the step of generating the sound attribute information, the sound attribute information may be generated by filtering the sound information using white noise and pink noise.
In one embodiment of the present invention, the step of generating the sound attribute information may comprise extracting keywords included in the sound information and filtering using the extracted keywords to generate the sound attribute information composed of natural sounds.
In addition, a video production system for mental and physical stability according to an embodiment of the present invention may comprise a management server that extracts image information included in basic image information, filters the image information to generate image attribute information, extracts sound information included in the basic image information, filters the sound information to generate sound attribute information, and matches the image attribute information and the sound attribute information to generate final image information and transmit it to a user terminal, wherein the management server may generate the final image information, in which the basic image information is reconstructed, by matching and merging the image attribute information, composed of natural colors obtained by filtering the image information, and the sound attribute information, composed of natural sounds obtained by filtering the sound information.
A program according to an embodiment of the present invention is combined with a computer, which is hardware, and stored in a computer-readable recording medium so that the video production method for mental and physical stability can be performed.
Other specific details of the invention are included in the detailed description and drawings.
According to the present invention, a video for the user's mental and physical stability can be produced by filtering a sound-bearing video into natural colors to relieve eye fatigue and by removing noise from the sound and filtering it into natural sounds to relieve ear fatigue.
According to the present invention, when watching a video produced to relieve ear fatigue and eye fatigue during activities requiring concentration, such as Pilates, yoga, and home training, concentration is improved and the muscles relax more comfortably, further increasing the effect of the exercise.
The effects of the present invention are not limited to the effects mentioned above, and other effects not mentioned will be clearly understood by those skilled in the art from the description below.
Figure 1 is a diagram for explaining a video production system for mental and physical stability according to an embodiment of the present invention.
Figure 2 is a diagram for explaining a video production method for mental and physical stability according to an embodiment of the present invention.
Figure 3 is a diagram for explaining the step of filtering image information shown in Figure 2.
Figure 4 is a diagram for explaining the step of filtering sound information shown in Figure 2.
The advantages and features of the present invention, and the methods of achieving them, will become clear with reference to the embodiments described in detail below together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various different forms; these embodiments are merely provided so that the disclosure of the present invention is complete and to fully inform those skilled in the art to which the present invention pertains of the scope of the invention, and the present invention is defined only by the scope of the claims.
The terminology used herein is for describing embodiments and is not intended to limit the invention. As used herein, singular forms also include plural forms unless the context clearly indicates otherwise. As used in the specification, "comprises" and/or "comprising" do not exclude the presence or addition of one or more elements other than those mentioned. Like reference numerals refer to like elements throughout the specification, and "and/or" includes each and every combination of one or more of the mentioned elements. Although "first", "second", and the like are used to describe various elements, these elements are of course not limited by these terms; the terms are used only to distinguish one element from another. Therefore, a first element mentioned below may also be a second element within the technical spirit of the present invention.
Unless otherwise defined, all terms (including technical and scientific terms) used in this specification may be used with meanings commonly understood by those skilled in the art to which the present invention pertains. In addition, terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless clearly and specifically defined.
Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings.
Figure 1 is a diagram for explaining a video production system for mental and physical stability according to an embodiment of the present invention.
As shown in Figure 1, the video production system for mental and physical stability according to an embodiment of the present invention may include a user terminal 10 and a management server 20.
Here, the user terminal 10 and the management server 20 can transmit and receive data, synchronized in real time, using a wireless communication network. The wireless communication network may support various communication methods, such as WLAN (Wireless LAN), DLNA (Digital Living Network Alliance), WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), GSM (Global System for Mobile communication), CDMA (Code Division Multiple Access), CDMA2000, EV-DO (Enhanced Voice-Data Optimized or Enhanced Voice-Data Only), WCDMA (Wideband CDMA), HSDPA (High Speed Downlink Packet Access), HSUPA (High Speed Uplink Packet Access), IEEE 802.16, LTE (Long Term Evolution), LTE-A (Long Term Evolution-Advanced), WMBS (Wireless Mobile Broadband Service), BLE (Bluetooth Low Energy), Zigbee, RF (Radio Frequency), and LoRa (Long Range), but is not limited to these, and various widely known wireless and mobile communication methods may also be applied.
The user terminal 10 can receive final image information through communication with the management server 20 and upload basic image information corresponding to the final image information it wishes to receive.
Here, the final image information is information reconstructed by filtering the images and sound based on the basic image information; for example, it may be information in which the images are filtered using iSoft information and the sound is reconstructed using white noise and pink noise, but it is not limited to this.
In addition, the basic image information may be a video including various types of images or moving pictures with sound, but it is not limited to this and may not include sound.
Specifically, the user terminal 10 can select, and receive from the management server 20, the final image information corresponding to the basic image information it wishes to view.
Depending on the embodiment, the user terminal 10 may receive customized final image information from the management server 20.
In addition, the user terminal 10 can select basic image information to view, transmit it to the management server 20, and then receive from the management server 20 the final image information in which the basic image information has been reconstructed.
Depending on the embodiment, the user terminal 10 may use a service in which the final image information is provided, then generate feedback information about it and transmit it to the management server 20. The feedback information may include information such as satisfaction, accuracy, and reliability regarding the service in which the final image information is provided. The feedback information can be generated and transmitted after using the service, but alternatively it may be transmitted before or during use.
Depending on the embodiment, the user terminal 10 may receive from the management server 20 event information including advertising information, or information about discounts or events for the service in which the final image information is provided, but it is not limited to this.
Depending on the embodiment, the user terminal 10 may output the current operating state and the final image information visually and audibly.
Depending on the embodiment, the user terminal 10 may first complete membership registration using user information in order to receive services from the management server 20, but it is not limited to this. Here, the user information may include personal information such as name, contact information, and occupation, as well as additional information such as tastes and preferred music and videos, but is not limited to this. There may be at least one user terminal 10, but the number is not limited to this.
The user terminal 10 may include various portable electronic communication devices that support communication with the management server 20. For example, the user terminal 10 may be a separate smart device and may include various portable terminals such as a smartphone, a PDA (Personal Digital Assistant), a tablet, a wearable device, a watch-type terminal (smartwatch), a glass-type terminal (smart glasses, an HMD (Head Mounted Display), etc.), and various IoT (Internet of Things) terminals; alternatively, it may include non-portable electronic communication devices such as desktop computers and workstation computers.
In addition, the user terminal 10 may operate using an application program (or application) in the present disclosure, and such an application program may be downloaded from an external server or the management server 20 through wireless communication.
In other words, when the user of the terminal 10 watches the final image information reconstructed and provided by the management server 20, eye fatigue is reduced by the filtered images and ear fatigue is reduced by the filtered sound, so that mental and physical stability and concentration can be improved.
The management server 20 may include a communication unit 22, a storage unit 24, a monitoring unit 26, and a server control unit 28.

The communication unit 22 may receive basic image information from the user terminal 10 and may transmit the final image information, in which the basic image information has been reconstructed, to the user terminal 10.

The storage unit 24 may store data exchanged with the user terminal 10 over a wireless communication network.

The storage unit 24 may store data supporting various functions of the management server 20. The storage unit 24 may store a number of application programs (or applications) running on the management server 20, as well as data and commands for the operation of the management server 20. At least some of these application programs may be downloaded from an external server via wireless communication.

The monitoring unit 26 may monitor, on a screen, the operating state of the user terminal 10 under user manipulation, the operating state of the management server 20, and the data exchanged between the user terminal 10 and the management server 20. By checking the usage status of the user terminal 10 in real time, the service becomes more convenient for the user and inspires greater trust.

The server control unit 28 may reconstruct the basic image information to generate the final image information. That is, the server control unit 28 may filter the images of the basic image information toward natural colors and filter the sound corresponding to those images toward natural sounds, thereby reconstructing the basic image information into the final image information.
Specifically, the server control unit 28 may calculate image information for the images included in the basic image information, and then compare, analyze, and filter the calculated image information against the iSoft information to generate image attribute information.

Here, the server control unit 28 may extract an image for each frame from the basic image information and calculate image information for the extracted image.

For example, the server control unit 28 may segment the basic image information into units of at least five seconds, set a bounding box for each frame, extract predicted coordinate information for the image located within the set bounding box, and label the predicted coordinate information to extract the image.
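A minimal sketch of this frame-sampling step, assuming OpenCV is available; detect_bounding_box() below is a hypothetical stand-in for the unspecified prediction model:

    import cv2  # pip install opencv-python

    def detect_bounding_box(frame):
        # Placeholder detector: returns the whole frame as the box.
        # The real system would use a trained object detector here.
        h, w = frame.shape[:2]
        return 0, 0, w, h

    def extract_labeled_frames(video_path, segment_seconds=5):
        # Sample one frame per five-second unit and record its
        # predicted bounding box and the cropped image region.
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        step = int(fps * segment_seconds)
        labeled, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                x, y, w, h = detect_bounding_box(frame)
                labeled.append({"frame": index,
                                "bbox": (x, y, w, h),
                                "crop": frame[y:y + h, x:x + w]})
            index += 1
        cap.release()
        return labeled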
That is, the server control unit 28 may compare and analyze the calculated image information against the iSoft information; if the two differ, it may filter the image information to match the iSoft information and thereby generate the image attribute information.

Here, the iSoft information is information containing standard values for hue, lightness, and saturation, and may include RGB values that relieve eye fatigue. That is, the iSoft information is preset information intended to promote recovery from mental and physical fatigue, including eye fatigue, and may correspond to a green (2.5G) hue with a lightness of 8 and a saturation of 1.5 to 2.

For example, the server control unit 28 may filter the image information toward natural colors using iSoft information generated from the high lightness value of 8 and the low saturation value of 2, namely RGB(163, 204, 163), RGB(181, 214, 146), and RGB(214, 230, 245).
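One plausible reading of this color filtering is to pull each pixel toward the nearest of the three iSoft reference colors; the blend strength below is an illustrative assumption, since the patent does not state how strongly the image is shifted toward the palette:

    import numpy as np

    ISOFT_PALETTE = np.array([[163, 204, 163],
                              [181, 214, 146],
                              [214, 230, 245]], dtype=np.float32)

    def filter_to_isoft(image_rgb, strength=0.5):
        # image_rgb: HxWx3 uint8 array. strength 0 keeps the original;
        # strength 1 replaces each pixel with its nearest palette color.
        pixels = image_rgb.reshape(-1, 3).astype(np.float32)
        # Squared distance from every pixel to each palette color.
        dists = ((pixels[:, None, :] - ISOFT_PALETTE[None, :, :]) ** 2).sum(axis=2)
        nearest = ISOFT_PALETTE[dists.argmin(axis=1)]
        blended = (1.0 - strength) * pixels + strength * nearest
        return blended.clip(0, 255).astype(np.uint8).reshape(image_rgb.shape)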
Meanwhile, if the comparison and analysis show that the calculated image information and the iSoft information are the same, the server control unit 28 may generate image attribute information in which the image information is kept unchanged.

According to an embodiment, the server control unit 28 may automatically calculate the image information included in the basic image information through iterative learning on big data, using deep learning or machine learning techniques.

According to an embodiment, the server control unit 28 may classify images according to preset conditions to calculate the image information.

For example, the image information may be calculated by classifying an image according to at least one of the category groups nature, water, trees, and outdoors; trees, nature, outdoors, and forest; or trees, grass, outdoors, and plants.
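As a sketch of such preset-condition classification, labels from an upstream scene classifier (the patent does not name one) could be mapped onto these category groups; the grouping and the overlap scoring below are assumptions:

    # Grouping of classifier labels into the preset categories above.
    CATEGORY_GROUPS = {
        "nature_water": {"nature", "water", "tree", "outdoor"},
        "forest":       {"tree", "nature", "outdoor", "forest"},
        "greenery":     {"tree", "grass", "outdoor", "plant"},
    }

    def classify_image_info(predicted_labels):
        # Return the preset group whose keywords overlap most with
        # the labels predicted for a frame, or None if none match.
        labels = {label.lower() for label in predicted_labels}
        scores = {group: len(labels & keywords)
                  for group, keywords in CATEGORY_GROUPS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else None

    # classify_image_info(["tree", "grass", "outdoor"]) returns "greenery"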
In addition, when the basic image information includes sound, the server control unit 28 may calculate sound information for the basic image information and then filter the calculated sound information to generate sound attribute information consisting of natural sounds.

For example, the server control unit 28 may filter the calculated sound information using white noise and pink noise to generate sound attribute information consisting of natural sounds.

Here, white noise is noise containing all frequency components, with its energy distributed across the entire frequency range; besides its use in various measurements, it is an important source when a synthesizer produces natural sounds (waves, wind, waterfalls, insects, birdsong, and the like). Pink noise is a noise level reproduced evenly across the 20 Hz to 20 kHz band: a noise signal obtained by attenuating white noise by 3 dB per octave so that, to the human ear, all frequency bands are heard at the same level. In other words, it is an artificially created noise signal used to measure signal levels, and is not a single tone but a noise level reproduced evenly across the playback frequency band.
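A minimal NumPy sketch of the two noise sources: white noise is flat-spectrum Gaussian samples, while pink noise is obtained here by shaping a white spectrum by 1/sqrt(f), which yields the 3 dB-per-octave power roll-off described above. FFT shaping is one common construction, not necessarily the one the system uses:

    import numpy as np

    def white_noise(n_samples, rng=None):
        # Flat-spectrum Gaussian noise.
        rng = rng or np.random.default_rng()
        return rng.standard_normal(n_samples)

    def pink_noise(n_samples, rng=None):
        # Shape a white spectrum by 1/sqrt(f) so that power falls
        # by 3 dB per octave, as in the description above.
        rng = rng or np.random.default_rng()
        spectrum = np.fft.rfft(rng.standard_normal(n_samples))
        freqs = np.fft.rfftfreq(n_samples)
        freqs[0] = freqs[1]          # avoid division by zero at DC
        spectrum /= np.sqrt(freqs)
        signal = np.fft.irfft(spectrum, n=n_samples)
        return signal / np.abs(signal).max()  # normalize to [-1, 1]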
According to an embodiment, the server control unit 28 may automatically calculate the sound information included in the basic image information through iterative learning on big data, using deep learning or machine learning techniques.

According to an embodiment, the server control unit 28 may remove noise included in the basic image information before filtering the sound information to generate the natural-sound attribute information. This noise removal may also be performed at the same time as the preprocessing of the basic image information.
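If this denoising step were implemented in Python, a spectral-gating library such as noisereduce is one option; a sketch assuming a mono float32 signal (the patent does not name a specific denoising algorithm):

    import numpy as np
    import noisereduce as nr  # pip install noisereduce

    def denoise(audio, sample_rate):
        # Spectral-gating noise reduction; the noise profile is
        # estimated from the signal itself.
        return nr.reduce_noise(y=audio.astype(np.float32), sr=sample_rate)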
According to an embodiment, the server control unit 28 may filter the image information first and then the sound information, although the order is not limited to this.

For example, the server control unit 28 may filter the sound information and then the image information, or may filter the sound information and the image information at the same time.

According to an embodiment, the server control unit 28 may classify sounds according to preset conditions to calculate the sound information.

For example, the sound information may be calculated by classifying a sound according to at least one of the category groups water, insects, and birdsong; waterfall, wind, and birdsong; or wind, insects, and waterfall.

According to an embodiment, the server control unit 28 may preprocess the basic image information before or at the same time as filtering the image information and the sound information. Noise may also be removed together with this preprocessing.

For example, the server control unit may detect and automatically remove or correct broken or unclear portions, brightness issues, and sharpness issues in the image information included in the basic image information, and detect and automatically remove or correct pitch deviations, frequency-range issues, and sound-balance issues in the sound information.

According to an embodiment, the server control unit 28 may calculate the sound information included in the basic image information, extract keywords contained in the calculated sound information, and filter using the extracted keywords to generate sound attribute information consisting of natural sounds.

For example, the server control unit 28 may extract the words contained in the sound information; tokenize them into minimal meaningful units using a whitespace-removal filter and a special-character-removal filter; refine the tokenized data by removing noise data, that is, words that appear very rarely or words that are repeated excessively, so that the meaning of the remaining words stands out; and normalize the refined data to generate the sound information.
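A compact sketch of that tokenize-refine-normalize pipeline; the frequency thresholds are illustrative assumptions, since the patent does not specify cutoffs for "rare" or "excessively repeated" words:

    import re
    from collections import Counter

    def keyword_pipeline(text, min_count=2, max_ratio=0.2):
        # Tokenization: strip special characters, split on whitespace.
        tokens = re.sub(r"[^\w\s]", " ", text).split()
        if not tokens:
            return []
        # Refinement: drop words that are too rare or that dominate.
        counts = Counter(token.lower() for token in tokens)
        total = sum(counts.values())
        kept = {word for word, count in counts.items()
                if count >= min_count and count / total <= max_ratio}
        # Normalization: lowercase, de-duplicated, original order.
        seen, keywords = set(), []
        for token in tokens:
            word = token.lower()
            if word in kept and word not in seen:
                seen.add(word)
                keywords.append(word)
        return keywords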
The server control unit 28 may then match the image attribute information, consisting of natural colors filtered on the basis of the basic image information, with the sound attribute information, consisting of natural sounds likewise filtered on the basis of the basic image information, and merge them to generate the final image information in which the basic image information has been reconstructed.
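The merging step amounts to muxing the filtered video with the filtered audio track; a sketch that delegates to the ffmpeg command-line tool (assumed to be installed), with ordinary codec defaults rather than values taken from the patent:

    import subprocess

    def merge_final_video(video_path, audio_path, out_path):
        # Mux a filtered video stream with a filtered audio track.
        subprocess.run(
            ["ffmpeg", "-y",
             "-i", video_path,   # filtered image stream
             "-i", audio_path,   # filtered sound stream
             "-c:v", "copy",     # keep the already-filtered video as-is
             "-c:a", "aac",      # encode the audio track
             "-shortest",        # stop at the shorter of the two inputs
             out_path],
            check=True,
        )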
According to an embodiment, the server control unit 28 may reconstruct the basic image information by matching the original sound information with the natural-color image attribute information and merging them, or by matching the original image information with the natural-sound attribute information and merging them.

According to an embodiment, when the basic image information contains no sound, the server control unit 28 may match the image attribute information with sound attribute information consisting of natural sounds generated by filtering only sound information added separately at the user's request, and merge them to reconstruct the basic image information into the final image information.

According to an embodiment, when the basic image information contains no sound, the server control unit 28 may match the image attribute information with sound attribute information consisting of natural sounds generated by filtering sound information in which the user's voice has been recorded, and merge them to reconstruct the basic image information into the final image information.

According to an embodiment, when the basic image information contains no sound, the server control unit 28 may reconstruct the basic image information using only the image attribute information to generate the final image information.

According to an embodiment, when basic image information is uploaded from the user terminal 10, the server control unit 28 may reconstruct it in a user-customized manner on the basis of the uploaded information, generate customized final image information, and provide it to the user.

According to an embodiment, the server control unit 28 may generate, in response to a feedback signal received from the user terminal 10, a feedback control signal for improving the service through which the final image information is provided.

According to an embodiment, the server control unit 28 may generate a feedback control signal containing event information to further increase use, on the user terminal 10, of the service through which the final image information is provided. The event information may also be included in the final image information.

Such a management server 20 may be implemented by hardware circuits (for example, CMOS-based logic circuits), firmware, software, or a combination thereof. For example, it may be implemented using transistors, logic gates, and electronic circuits in the form of various electrical structures.

With this structure, the management server 20 matches the natural-color image attribute information and the natural-sound attribute information, both filtered on the basis of the basic image information, merges them to generate the final image information in which the basic image information has been reconstructed, and provides it to the user terminal 10; a user watching the final image information experiences relief from ear and eye fatigue and can maintain mental and physical stability while further improving concentration.

The image production system for mental and physical stability according to an embodiment of the present invention having this structure operates as follows. FIG. 2 is a diagram for explaining an image production method for mental and physical stability according to an embodiment of the present invention; FIG. 3 is a diagram for explaining the step of filtering the image information shown in FIG. 2; and FIG. 4 is a diagram for explaining the step of filtering the sound information shown in FIG. 2.
First, as shown in FIG. 2, the management server 20 may preprocess the basic image information (S100).

Here, the management server 20 may also remove noise at the same time as preprocessing.

For example, the management server 20 may detect and automatically remove or correct broken or unclear portions, brightness issues, and sharpness issues in the image information included in the basic image information, and detect and automatically remove or correct pitch deviations, frequency-range issues, and sound-balance issues in the sound information.

Here, the basic image information may be, but is not limited to, information uploaded from the user terminal 10.

Next, the management server 20 may calculate the image information included in the basic image information (S110).

Specifically, the management server 20 may extract an image for each frame from the basic image information and calculate image information for the extracted image.

For example, the management server 20 may segment the basic image information into units of at least five seconds, set a bounding box for each frame, extract predicted coordinate information for the image located within the set bounding box, label the predicted coordinate information to extract the image, and calculate image information for the extracted image.

According to an embodiment, the management server 20 may automatically calculate the image information included in the basic image information through iterative learning on big data, using deep learning or machine learning techniques.

According to an embodiment, the management server 20 may classify images according to preset conditions to calculate the image information.
Next, it is determined whether the image information matches the iSoft information (S120); if they do not match, the management server 20 may filter the image information toward natural colors to generate the image attribute information (S130, S140).

Specifically, the management server 20 may filter the image information so that it matches the iSoft information.

For example, referring to FIG. 3, when the image information and the iSoft information do not match (see FIG. 3(a)), the image information may be filtered toward natural colors using the iSoft information generated from the high lightness value of 8 and the low saturation value of 2, namely RGB(163, 204, 163), RGB(181, 214, 146), and RGB(214, 230, 245) (see FIG. 3(b)), to generate the image attribute information (see FIG. 3(c)).
Next, it is determined whether the basic image information contains sound (S150); if sound is included, the management server 20 may calculate the sound information (S160).

For example, the management server 20 may automatically calculate the sound information included in the basic image information through iterative learning on big data, using deep learning or machine learning techniques.

According to an embodiment, the management server 20 may classify sounds according to preset conditions to calculate the sound information.

Next, the management server 20 may generate the sound attribute information using white noise and pink noise on the basis of the sound information (S170).

For example, referring to FIG. 4, the management server 20 may filter the noise-removed sound information (see FIG. 4(a)) using white noise and pink noise (see FIG. 4(b)) to generate sound attribute information consisting of natural sounds.

According to an embodiment, the management server 20 may classify sounds according to preset conditions to calculate the sound information.

According to an embodiment, the management server 20 may calculate the sound information included in the basic image information, extract keywords contained in the calculated sound information, and filter using the extracted keywords to generate sound attribute information consisting of natural sounds.

Finally, the management server 20 may match the filtered natural-color image attribute information with the natural-sound attribute information on the basis of the basic image information, and merge them to reconstruct the basic image information into the final image information (S180, S190).

Accordingly, the management server 20 filters the image information of the basic image information toward natural colors to relieve eye fatigue and filters the sound information toward natural sounds to relieve ear fatigue, thereby producing a video that can improve the user's mental and physical stability and concentration.
According to an embodiment, the management server 20 may reconstruct the basic image information by matching the original sound information with the natural-color image attribute information and merging them, or by matching the original image information with the natural-sound attribute information and merging them.

Meanwhile, it is determined whether the image information matches the iSoft information (S120); if they match, the management server 20 may generate image attribute information in which the image information is kept unchanged (S200).

Further, it is determined whether the basic image information contains sound (S150); if no sound is included, the management server 20 may reconstruct the basic image information using only the filtered natural-color image attribute information to generate the final image information.

According to an embodiment, when the basic image information contains no sound, the management server 20 may match the image attribute information with sound attribute information consisting of natural sounds generated by filtering only sound information added separately at the user's request, and merge them to reconstruct the basic image information into the final image information.

According to an embodiment, when the basic image information contains no sound, the management server 20 may match the image attribute information with sound attribute information consisting of natural sounds generated by filtering sound information in which the user's voice has been recorded, and merge them to reconstruct the basic image information into the final image information.

According to an embodiment, when basic image information is uploaded from the user terminal 10, the management server 20 may reconstruct it in a user-customized manner on the basis of the uploaded information, generate customized final image information, and provide it to the user.

As described above, an EEG evaluation test of the improvement in concentration provided by the final image information produced for mental and physical stability according to an embodiment of the present invention was conducted as follows.

To verify the improvement in concentration, the EEG evaluation test was conducted with five adult men and women in their twenties and thirties as subjects (A to E). Brain waves were measured while the subjects exercised, or after they had exercised, while watching the basic image information (comparative example) and the final image information (example), and a concentration index was evaluated for each on the basis of psychological stability. Here, the concentration index (beta/high-beta × 100) denotes the ability to keep performance constant while the brain stays clear and stress does not increase; the higher the index, the higher the concentration.
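A worked sketch of the index as defined, together with the percentage improvement reported in Table 1 below; the band-power inputs are illustrative:

    def concentration_index(beta_power, high_beta_power):
        # Concentration index as defined above: beta / high-beta x 100.
        return beta_power / high_beta_power * 100

    def improvement_percent(comparative, example):
        # Relative gain of the example over the comparative example.
        return (example - comparative) / comparative * 100

    # Table 1 check for subject A: 87 -> 117 is a 34.48% improvement.
    print(round(improvement_percent(87, 117), 2))  # 34.48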
The test conditions were as follows, although they are not limited to these: brain waves were measured using EEG equipment based on the 10-20 system, with subjects who had no physical or mental impairments, had not exercised strenuously within one day before the test, and had not consumed drugs or caffeine within three hours before the test.

The basic image information was video containing various kinds of images or moving pictures with sound, and the final image information was information reconstructed by filtering the images and sound on the basis of the basic image information.

Referring to Table 1 below, watching the example of the present invention improved concentration more than the comparative example.

Specifically, when subject A (a woman in her twenties) watched the basic image information (comparative example), the concentration index was 87; when she watched the final image information (example), it was 117, an EEG-measured concentration index 34.48% higher than the comparative example.

When subject B (a woman in her twenties) watched the basic image information (comparative example), the concentration index was 121; when she watched the final image information (example), it was 160, 32.23% higher than the comparative example.

When subject C (a woman in her thirties) watched the basic image information (comparative example), the concentration index was 97; when she watched the final image information (example), it was 121, 24.74% higher than the comparative example.

When subject D (a woman in her thirties) watched the basic image information (comparative example), the concentration index was 101; when she watched the final image information (example), it was 133, 31.68% higher than the comparative example.

When subject E (a man in his twenties) watched the basic image information (comparative example), the concentration index was 108; when he watched the final image information (example), it was 147, 36.11% higher than the comparative example.
Table 1. Concentration index (beta/high-beta × 100)

Subject | Comparative example (basic image information) | Example (final image information) | Improvement (%)
A       | 87    | 117   | 34.48%
B       | 121   | 160   | 32.23%
C       | 97    | 121   | 24.74%
D       | 101   | 133   | 31.68%
E       | 108   | 147   | 36.11%
Average | 102.8 | 135.6 | 31.9%
In other words, looking at the concentration index across all stimuli for subjects A to E, the average concentration index was 102.8 when watching the basic image information (comparative example) and 135.6 when watching the final image information (example); the EEG-measured concentration index was thus 31.9% higher than the comparative example.

Accordingly, watching the example of the present invention improves concentration more than the comparative example. That is, in the present invention, when the final image information provided by reconstructing the basic image information is watched, eye fatigue is reduced by the filtered images and ear fatigue is reduced by the filtered sound, with the result that mental and physical stability and concentration improve.

The steps of the method or algorithm described in connection with the embodiments of the present invention may be implemented directly in hardware, as a software module executed by hardware, or by a combination of the two. The software module may reside in random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art to which the invention pertains.

Although embodiments of the present invention have been described above with reference to the accompanying drawings, those of ordinary skill in the art will understand that the present invention may be practiced in other specific forms without changing its technical idea or essential features. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive.

Claims (6)

  1. A video production method for mental and physical stability performed by a management server, the method comprising:
    extracting image information included in basic image information and filtering the image information to generate image attribute information;
    extracting sound information included in the basic image information and filtering the sound information to generate sound attribute information; and
    generating final image information by matching the image attribute information and the sound attribute information,
    wherein the generating of the final image information comprises:
    generating the final image information, in which the basic image information is reconstructed, by matching and merging the image attribute information, consisting of natural colors obtained by filtering the image information, with the sound attribute information, consisting of natural sounds obtained by filtering the sound information.
  2. The method of claim 1,
    wherein the generating of the image attribute information comprises:
    analyzing the image information and iSoft information; when the analysis shows that the image information and the iSoft information differ, filtering the image information to match the iSoft information to generate the image attribute information; and when the analysis shows that the image information and the iSoft information are the same, filtering the image information so that it is kept unchanged to generate the image attribute information.
  3. The method of claim 1,
    wherein the generating of the sound attribute information comprises:
    generating the sound attribute information by filtering the sound information using white noise and pink noise on the basis of the sound information.
  4. The method of claim 1,
    wherein the generating of the sound attribute information comprises:
    extracting keywords included in the sound information and filtering using the extracted keywords to generate the sound attribute information consisting of natural sounds.
  5. A video production system for mental and physical stability, comprising a management server configured to extract image information included in basic image information, filter the image information to generate image attribute information, extract sound information included in the basic image information, filter the sound information to generate sound attribute information, match the image attribute information and the sound attribute information to generate final image information, and transmit the final image information to a user terminal,
    wherein the management server
    generates the final image information, in which the basic image information is reconstructed, by matching and merging the image attribute information, consisting of natural colors obtained by filtering the image information, with the sound attribute information, consisting of natural sounds obtained by filtering the sound information.
  6. A computer program, combined with a computer as hardware and stored in a computer-readable recording medium, to perform the method of claim 1.
PCT/KR2022/009814 2022-07-07 2022-07-07 System for producing video for mental and physical stability, and method therefor WO2024010113A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2022/009814 WO2024010113A1 (en) 2022-07-07 2022-07-07 System for producing video for mental and physical stability, and method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2022/009814 WO2024010113A1 (en) 2022-07-07 2022-07-07 System for producing video for mental and physical stability, and method therefor

Publications (1)

Publication Number Publication Date
WO2024010113A1 true WO2024010113A1 (en) 2024-01-11

Family

ID=89453557

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/009814 WO2024010113A1 (en) 2022-07-07 2022-07-07 System for producing video for mental and physical stability, and method therefor

Country Status (1)

Country Link
WO (1) WO2024010113A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090098009A (en) * 2008-03-13 2009-09-17 이현태 Tinnitus therapy device for tinnitus retraining therapy
KR20120065090A (en) * 2010-12-10 2012-06-20 한국전자통신연구원 Apparatus of psychotherapy based on mobile terminal
US20130120773A1 (en) * 2005-09-21 2013-05-16 Samsung Electronics Co., Ltd. Terminal Device Having Correction Function For Natural Color And Method Thereof
KR101775999B1 (en) * 2016-04-07 2017-09-07 포스인주식회사 Mental Healing Device
KR20200127336A (en) * 2019-05-02 2020-11-11 김옥기 Method and system for color theraphy consulting based on hue & tone
KR102433767B1 (en) * 2021-05-24 2022-08-18 이상훈 Video profuction system for mental and physical stability and method thereof

Similar Documents

Publication Publication Date Title
CN108236464A (en) Feature extracting method and its Detection and Extraction system based on EEG signals
WO2018205373A1 (en) Method and apparatus for estimating injury claims settlement and loss adjustment expense, server and medium
CN104182048B (en) Telephone system and its method based on brain-computer interface
WO2022030685A1 (en) Skin disease detection system and skin disease management method using portable terminal
WO2010041836A2 (en) Method of detecting skin-colored area using variable skin color model
WO2024010113A1 (en) System for producing video for mental and physical stability, and method therefor
WO2017057926A1 (en) Display device and method for controlling same
WO2021157956A1 (en) Broadcast management server and broadcast management method using same
KR102433767B1 (en) Video profuction system for mental and physical stability and method thereof
CN110570420A (en) no-reference contrast distortion image quality evaluation method
CN109934097A (en) A kind of expression and mental health management system based on artificial intelligence
WO2016104990A1 (en) Content providing apparatus, display apparatus and control method therefor
CN111739181A (en) Attendance checking method and device, electronic equipment and storage medium
CN106920429A (en) A kind of information processing method and device
WO2023200280A1 (en) Method for estimating heart rate on basis of corrected image, and device therefor
WO2020054892A1 (en) Health risk assessment device and method
WO2022045516A1 (en) Audio and video synchronization method and device
WO2021060748A1 (en) Connectivity learning device and connectivity learning method
CN110392271A (en) The method and apparatus that live video examines
WO2024101466A1 (en) Attribute-based missing person tracking apparatus and method
CN108495186B (en) Video marking method, video marking device, electronic equipment and computer readable storage medium
WO2023033444A1 (en) Service providing device and method for providing issue-based news information
WO2022010176A1 (en) Client-customized cardiopulmonary resuscitation system
WO2021182782A1 (en) Audio data identification apparatus
CN114403900A (en) Electroencephalogram data automatic recording and analyzing system and method in electroencephalogram machine

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22950339

Country of ref document: EP

Kind code of ref document: A1