WO2022050461A1 - Method and apparatus by which wireless device cooperates with another device to perform wireless sensing on basis of wireless sensing - Google Patents


Info

Publication number
WO2022050461A1
Authority
WO
WIPO (PCT)
Prior art keywords: information, decision, level, learning, wireless
Application number
PCT/KR2020/012003
Other languages
French (fr)
Korean (ko)
Inventor
임태성
이홍원
조한규
윤정환
유호민
Original Assignee
LG Electronics Inc. (엘지전자 주식회사)
Application filed by LG Electronics Inc. (엘지전자 주식회사)
Priority to KR1020237005371A (published as KR20230043134A)
Priority to PCT/KR2020/012003
Publication of WO2022050461A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B 17/00: Monitoring; Testing
    • H04B 17/30: Monitoring; Testing of propagation channels
    • H04B 17/309: Measuring or estimating channel quality parameters
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 84/00: Network topologies
    • H04W 84/02: Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
    • H04W 84/10: Small scale networks; Flat hierarchical networks
    • H04W 84/12: WLAN [Wireless Local Area Networks]

Definitions

  • the present specification relates to a method of identifying a user or a gesture based on wireless sensing, and more particularly, to a method and apparatus in which a wireless device performs wireless sensing in cooperation with another device.
  • By exploiting radio signal propagation phenomena (e.g., reflection, diffraction, and scattering), researchers can extract ready-to-use signal measurements or employ frequency-modulated signals for frequency shifting. Due to its low cost and non-intrusive detection properties, wireless-based human activity detection has attracted considerable attention and has become a prominent research area in the past decade.
  • This specification examines the existing wireless sensing system in terms of basic principle, technology and system architecture. Specifically, it describes how wireless signals can be utilized to facilitate a variety of applications including intrusion detection, room occupancy monitoring, daily activity recognition, gesture recognition, vital sign monitoring, user identification and indoor location. Future research directions and limitations using radio signals for human activity detection are also discussed.
  • the present specification proposes a method and apparatus for a wireless device to perform wireless sensing in cooperation with another device based on wireless sensing.
  • An example of the present specification proposes a method for a wireless device to perform wireless sensing in cooperation with another device.
  • This embodiment proposes a method of identifying a user (or gesture) by performing learning and prediction through mutual cooperation when there are a plurality of devices based on wireless sensing.
  • IoT (Internet of Things)
  • a first device performs capabilities negotiation with a second device (Capabilities Negotiation).
  • the first device receives first decision information from the second device based on the result of the capability negotiation.
  • the first device transmits second decision information that is a result of processing the first decision information to the second device.
  • the first decision information is preliminary information required for identification based on wireless sensing (a soft decision).
  • the second decision information is a result of the identification based on the wireless sensing (a hard decision).
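The cooperation steps above (capability negotiation, exchange of soft-decision information, return of a hard-decision result) can be sketched as follows. This is a minimal illustration, not the claimed method: the `Device` class, the capability scores, and the score-normalization rule are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """A wireless-sensing device with a compute-capability level (illustrative)."""
    name: str
    capability: int  # higher = more processing power

def negotiate(dev_a, dev_b):
    """Capability negotiation: the more capable device processes soft decisions."""
    return dev_a if dev_a.capability >= dev_b.capability else dev_b

def soft_decision(measurements):
    """Preliminary (soft) decision: per-class scores, not a final identification."""
    total = sum(measurements)
    return [m / total for m in measurements]

def hard_decision(scores, labels):
    """Final (hard) decision: pick the identification result from soft scores."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best]

# The second device sends soft-decision information to the first device,
# which processes it into a hard decision (the identification result).
first = Device("first", capability=10)
second = Device("second", capability=3)
processor = negotiate(first, second)     # the first device wins the negotiation
soft = soft_decision([1.0, 3.0, 1.0])    # preliminary per-user scores
result = hard_decision(soft, ["user_A", "user_B", "user_C"])
print(processor.name, result)            # first user_B
```

The division of labor mirrors the claim structure: the soft decision stays cheap enough for a constrained device, while the negotiated processor turns it into the final result.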
  • FIG. 1 shows an example of a transmitting apparatus and/or a receiving apparatus of the present specification.
  • FIG. 2 shows the structure of a wireless LAN (WLAN) system.
  • FIG. 3 is a view for explaining a general link setup process.
  • FIG. 4 shows a flowchart of a WiFi sensing procedure.
  • FIG. 5 shows a flow diagram of a general procedure for sensing human activity via a wireless signal.
  • FIG. 6 shows a CSI spectrogram according to a human gait.
  • FIG. 7 shows a deep learning architecture for user authentication.
  • FIG. 8 shows a problem that occurs when a wireless sensing-based device independently performs a procedure for measuring, processing, and predicting a wireless signal.
  • FIG. 9 shows a block diagram of a wireless sensing device.
  • FIG. 10 is a block diagram of a functional unit of a wireless sensing device.
  • FIG. 11 is a block diagram of a wireless sensing device including an interface.
  • FIG. 12 is a diagram illustrating a type of cooperative device.
  • FIG. 13 shows an example of a procedure in which a wireless sensing device cooperates to perform learning and prediction.
  • FIG. 15 is a diagram illustrating Example 1, in which predictions are made in the AI cloud and the results are shared.
  • FIG. 16 is a diagram illustrating Example 2, in which a representative device makes predictions and shares the results.
  • FIG. 17 is a diagram illustrating Example 3, in which predictions are made in each device and the results are shared.
  • FIG. 18 is a flowchart illustrating a procedure in which a wireless device performs wireless sensing in cooperation with another device according to the present embodiment.
  • FIG. 19 is a flowchart illustrating a procedure in which a wireless device performs wireless sensing in cooperation with another device according to the present embodiment.
  • FIG. 20 shows a modified example of a transmitting apparatus and/or a receiving apparatus of the present specification.
  • “A or B” may mean “only A”, “only B”, or “both A and B”.
  • In other words, “A or B” may be interpreted as “A and/or B”.
  • “A, B or C” herein means “only A”, “only B”, “only C”, or “any combination of A, B and C”.
  • A slash (/) or a comma used herein may mean “and/or”.
  • For example, “A/B” may mean “A and/or B”.
  • Accordingly, “A/B” may mean “only A”, “only B”, or “both A and B”.
  • For example, “A, B, C” may mean “A, B, or C”.
  • “At least one of A and B” may mean “only A”, “only B”, or “both A and B”.
  • The expression “at least one of A or B” or “at least one of A and/or B” may be interpreted the same as “at least one of A and B”.
  • “At least one of A, B and C” means “only A”, “only B”, “only C”, or “any combination of A, B and C”. Also, “at least one of A, B or C” or “at least one of A, B and/or C” may mean “at least one of A, B and C”.
  • In the present specification, the notation “control information (EHT-Signal)” may mean that “EHT-Signal” is proposed as an example of “control information”.
  • In other words, the “control information” of the present specification is not limited to “EHT-Signal”, and “EHT-Signal” may be proposed as an example of “control information”.
  • In addition, even when displayed as “control information (i.e., EHT-Signal)”, “EHT-Signal” may be proposed as an example of “control information”.
  • the following examples of the present specification may be applied to various wireless communication systems.
  • the following example of the present specification may be applied to a wireless local area network (WLAN) system.
  • the present specification may be applied to the IEEE 802.11a/g/n/ac standard or the IEEE 802.11ax standard.
  • this specification may be applied to the newly proposed EHT standard or IEEE 802.11be standard.
  • an example of the present specification may be applied to the EHT standard or a new wireless LAN standard that is an enhancement of IEEE 802.11be.
  • an example of the present specification may be applied to a mobile communication system.
  • For example, it may be applied to a mobile communication system based on LTE (Long Term Evolution) of the 3GPP (3rd Generation Partnership Project) standard and its evolution.
  • In addition, an example of the present specification may be applied to a communication system of the 5G NR standard based on the 3GPP standard.
  • FIG. 1 shows an example of a transmitting apparatus and/or a receiving apparatus of the present specification.
  • the example of FIG. 1 may perform various technical features described below.
  • FIG. 1 relates to at least one STA (station).
  • the STAs 110 and 120 of the present specification may also be called by various names such as a mobile terminal, a wireless device, a wireless transmit/receive unit (WTRU), a user equipment (UE), a mobile station (MS), a mobile subscriber unit, or simply a user.
  • the STAs 110 and 120 in the present specification may be referred to by various names such as a network, a base station, a Node-B, an access point (AP), a repeater, a router, and a relay.
  • the STAs 110 and 120 may be referred to by various names such as a receiving device, a transmitting device, a receiving STA, a transmitting STA, a receiving device, and a transmitting device.
  • the STAs 110 and 120 may perform an access point (AP) role or a non-AP role. That is, the STAs 110 and 120 of the present specification may perform AP and/or non-AP functions.
  • the AP may also be indicated as an AP STA.
  • the STAs 110 and 120 of the present specification may support various communication standards (e.g., the LTE, LTE-A, and 5G NR standards) other than the IEEE 802.11 standard.
  • the STA of the present specification may be implemented in various devices such as a mobile phone, a vehicle, and a personal computer.
  • the STA of the present specification may support communication for various communication services such as voice call, video call, data communication, and autonomous driving (Self-Driving, Autonomous-Driving).
  • the STAs 110 and 120 may include a medium access control (MAC) conforming to the IEEE 802.11 standard and a physical layer interface for a wireless medium.
  • the STAs 110 and 120 will be described based on the sub-view (a) of FIG. 1 as follows.
  • the first STA 110 may include a processor 111 , a memory 112 , and a transceiver 113 .
  • the illustrated processor, memory, and transceiver may each be implemented as separate chips, or at least two or more blocks/functions may be implemented through one chip.
  • the transceiver 113 of the first STA performs a signal transmission/reception operation. Specifically, IEEE 802.11 packets (e.g., IEEE 802.11a/b/g/n/ac/ax/be, etc.) may be transmitted/received.
  • the first STA 110 may perform an intended operation of the AP.
  • the processor 111 of the AP may receive a signal through the transceiver 113 , process the received signal, generate a transmission signal, and perform control for signal transmission.
  • the memory 112 of the AP may store a signal (ie, a received signal) received through the transceiver 113 , and may store a signal to be transmitted through the transceiver (ie, a transmission signal).
  • the second STA 120 may perform an intended operation of a non-AP STA.
  • the transceiver 123 of the non-AP performs a signal transmission/reception operation. Specifically, IEEE 802.11 packets (e.g., IEEE 802.11a/b/g/n/ac/ax/be, etc.) may be transmitted/received.
  • the processor 121 of the non-AP STA may receive a signal through the transceiver 123 , process the received signal, generate a transmission signal, and perform control for signal transmission.
  • the memory 122 of the non-AP STA may store a signal (ie, a received signal) received through the transceiver 123 and may store a signal to be transmitted through the transceiver (ie, a transmission signal).
  • an operation of a device indicated as an AP in the following specification may be performed by the first STA 110 or the second STA 120 .
  • For example, the operation of the device marked as an AP may be controlled by the processor 111 of the first STA 110, and related signals may be transmitted or received through the transceiver 113 controlled by the processor 111.
  • control information related to an operation of the AP or a transmission/reception signal of the AP may be stored in the memory 112 of the first STA 110 .
  • Alternatively, the operation of the device indicated as an AP may be controlled by the processor 121 of the second STA 120, and related signals may be transmitted or received through the transceiver 123 controlled by the processor 121.
  • In addition, control information related to an operation of the AP or a transmission/reception signal of the AP may be stored in the memory 122 of the second STA 120.
  • an operation of a device indicated as a non-AP in the following specification may be performed by the first STA 110 or the second STA 120 .
  • For example, the operation of the device marked as a non-AP may be controlled by the processor 121 of the second STA 120, and related signals may be transmitted or received through the transceiver 123 controlled by the processor 121.
  • In addition, control information related to the operation of the non-AP or a transmission/reception signal of the non-AP may be stored in the memory 122 of the second STA 120.
  • Alternatively, the operation of the device marked as a non-AP may be controlled by the processor 111 of the first STA 110, and related signals may be transmitted or received through the transceiver 113 controlled by the processor 111.
  • In addition, control information related to the operation of the non-AP or a transmission/reception signal of the non-AP may be stored in the memory 112 of the first STA 110.
  • In the following specification, a device referred to as a transmitting/receiving STA, a first STA, a second STA, STA1, STA2, an AP, a first AP, a second AP, AP1, AP2, a (transmitting/receiving) terminal, a (transmitting/receiving) device, a (transmitting/receiving) apparatus, a network, or the like may refer to the STAs 110 and 120 of FIG. 1.
  • an operation in which various STAs transmit and receive signals may be performed by the transceivers 113 and 123 of FIG. 1 .
  • an example of an operation of generating a transmission/reception signal or performing data processing or computation in advance for a transmission/reception signal may include: 1) an operation of determining/acquiring/configuring/computing/decoding/encoding bit information of the subfields (SIG, STF, LTF, Data) included in a PPDU; 2) an operation of determining/configuring/acquiring a time resource or a frequency resource (e.g., a subcarrier resource) used for the subfields (SIG, STF, LTF, Data) included in the PPDU; and 3) an operation of determining/configuring/acquiring a specific sequence (e.g., a pilot sequence, an STF/LTF sequence, a sequence applied to the SIG subfield) used for the subfields (SIG, STF, LTF, Data) included in the PPDU.
  • In addition, an operation related to determination/acquisition/configuration/computation/decoding/encoding of an ACK signal may be included.
  • In addition, various information (e.g., information related to fields/subfields/control fields/parameters/power) used by various STAs for determination/acquisition/configuration/computation/decoding/encoding of transmission/reception signals may be stored in the memories 112 and 122 of FIG. 1.
  • the device/STA of the sub-view (a) of FIG. 1 described above may be modified as shown in the sub-view (b) of FIG. 1 .
  • the STAs 110 and 120 of the present specification will be described based on the sub-drawing (b) of FIG. 1 .
  • the transceivers 113 and 123 illustrated in (b) of FIG. 1 may perform the same function as the transceivers illustrated in (a) of FIG. 1 .
  • the processing chips 114 and 124 illustrated in (b) of FIG. 1 may include processors 111 and 121 and memories 112 and 122 .
  • the processors 111 and 121 and the memories 112 and 122 illustrated in (b) of FIG. 1 may perform the same functions as the processors 111 and 121 and the memories 112 and 122 illustrated in (a) of FIG. 1.
  • a technical feature in which a transmitting STA transmits a control signal may be understood as a technical feature in which a control signal generated by the processors 111 and 121 shown in the sub-views (a)/(b) of FIG. 1 is transmitted through the transceivers 113 and 123 shown in the sub-views (a)/(b) of FIG. 1.
  • Alternatively, the technical feature in which the transmitting STA transmits the control signal may be understood as a technical feature in which a control signal to be transferred to the transceivers 113 and 123 is generated by the processing chips 114 and 124 shown in the sub-view (b) of FIG. 1.
  • For example, the technical feature in which a receiving STA receives a control signal may be understood as a technical feature in which the control signal is received by the transceivers 113 and 123 shown in the sub-view (a) of FIG. 1.
  • Alternatively, the technical feature in which the receiving STA receives the control signal may be understood as a technical feature in which the control signal received by the transceivers 113 and 123 shown in the sub-view (a) of FIG. 1 is obtained by the processors 111 and 121 shown in the sub-view (a) of FIG. 1.
  • Alternatively, the technical feature in which the receiving STA receives the control signal may be understood as a technical feature in which the control signal received by the transceivers 113 and 123 shown in the sub-view (b) of FIG. 1 is obtained by the processing chips 114 and 124 shown in the sub-view (b) of FIG. 1.
  • software codes 115 and 125 may be included in the memories 112 and 122 .
  • the software codes 115 and 125 may include instructions for controlling the operations of the processors 111 and 121 .
  • The software codes 115 and 125 may be written in various programming languages.
  • the processors 111 and 121 or the processing chips 114 and 124 shown in FIG. 1 may include an application-specific integrated circuit (ASIC), other chipsets, logic circuits, and/or data processing devices.
  • the processor may be an application processor (AP).
  • the processors 111 and 121 or the processing chips 114 and 124 illustrated in FIG. 1 may include a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), and a modem (modulator and demodulator).
  • For example, the processor may be a SNAPDRAGON™ series processor manufactured by Qualcomm®, an EXYNOS™ series processor manufactured by Samsung®, an A series processor manufactured by Apple®, a HELIO™ series processor manufactured by MediaTek®, an ATOM™ series processor manufactured by INTEL®, or a processor enhanced from these processors.
  • uplink may mean a link for communication from a non-AP STA to an AP STA, and an uplink PPDU/packet/signal may be transmitted through the uplink.
  • downlink may mean a link for communication from an AP STA to a non-AP STA, and a downlink PPDU/packet/signal may be transmitted through the downlink.
  • FIG. 2 shows the structure of an infrastructure basic service set (BSS) of the Institute of Electrical and Electronic Engineers (IEEE) 802.11.
  • a wireless LAN system may include one or more infrastructure BSSs 200 and 205 (hereinafter, BSSs).
  • BSSs 200 and 205 are a set of APs and STAs such as an access point (AP) 225 and a station 200-1 (STA1) that can communicate with each other through successful synchronization, and are not a concept indicating a specific area.
  • the BSS 205 may include one or more STAs 205-1 and 205-2 that may be associated with one AP 230.
  • the BSS may include at least one STA, the APs 225 and 230 providing a distribution service, and a distribution system (DS) 210 connecting a plurality of APs.
  • the distribution system 210 may implement an extended service set (ESS) 240 by connecting several BSSs 200 and 205.
  • ESS 240 may be used as a term indicating one network in which one or several APs are connected through the distributed system 210 .
  • APs included in one ESS 240 may have the same service set identification (SSID).
  • the portal 220 may serve as a bridge connecting a wireless LAN network (IEEE 802.11) and another network (eg, 802.X).
  • a network between the APs 225 and 230 and a network between the APs 225 and 230 and the STAs 200 - 1 , 205 - 1 and 205 - 2 may be implemented.
  • a network that establishes a network and performs communication even between STAs without the APs 225 and 230 is defined as an ad-hoc network or an independent basic service set (IBSS).
  • The lower part of FIG. 2 is a conceptual diagram illustrating the IBSS.
  • the IBSS is a BSS operating in an ad-hoc mode. Since the IBSS does not include an AP, there is no centralized management entity that performs a centralized management function. That is, in the IBSS, the STAs 250-1, 250-2, 250-3, 255-4, and 255-5 are managed in a distributed manner. In the IBSS, all STAs 250-1, 250-2, 250-3, 255-4, and 255-5 may be mobile STAs, and since access to a distribution system is not allowed, a self-contained network is formed.
  • FIG. 3 is a view for explaining a general link setup process.
  • the STA may perform a network discovery operation.
  • the network discovery operation may include a scanning operation of the STA. That is, in order for the STA to access the network, it is necessary to find a network in which it can participate.
  • An STA must identify a compatible network before participating in a wireless network.
  • the process of identifying a network existing in a specific area is called scanning. Scanning methods include active scanning and passive scanning.
  • an STA performing scanning transmits a probe request frame to discover which APs exist nearby while moving channels, and waits for a response.
  • a responder transmits a probe response frame to the STA that has transmitted the probe request frame in response to the probe request frame.
  • the responder may be an STA that last transmitted a beacon frame in the BSS of the channel being scanned.
  • the AP since the AP transmits a beacon frame, the AP becomes the responder.
  • the STAs in the IBSS rotate and transmit the beacon frame, so the responder is not constant.
  • For example, an STA that has transmitted a probe request frame on channel 1 and received a probe response frame on channel 1 stores the BSS-related information included in the received probe response frame, moves to the next channel (e.g., channel 2), and performs scanning on that channel in the same manner (i.e., probe request/response transmission/reception on channel 2).
  • the scanning operation may be performed in a passive scanning manner.
  • An STA performing scanning based on passive scanning may wait for a beacon frame while moving channels.
  • the beacon frame is one of the management frames in IEEE 802.11, and is periodically transmitted to inform the existence of a wireless network, and to allow a scanning STA to search for a wireless network and participate in the wireless network.
  • the AP plays a role of periodically transmitting a beacon frame, and in the IBSS, the STAs in the IBSS rotate and transmit the beacon frame.
  • the STA performing scanning receives the beacon frame, it stores information on the BSS included in the beacon frame and records beacon frame information in each channel while moving to another channel.
  • the STA may store BSS-related information included in the received beacon frame, move to the next channel, and perform scanning on the next channel in the same manner.
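As a rough sketch of the active-scanning loop described above (move to a channel, transmit a probe request, store any BSS information from the probe response, move on), assuming a toy `probe` callback in place of real IEEE 802.11 frame exchange:

```python
def active_scan(channels, probe):
    """Sweep channels; `probe(ch)` models the probe request/response exchange
    and returns BSS-related information, or None if no responder answered."""
    discovered = {}
    for ch in channels:
        response = probe(ch)          # probe request, then wait for a response
        if response is not None:
            discovered[ch] = response  # store BSS-related information
    return discovered

# Toy environment: responders (APs) answering on channels 1 and 6 (illustrative).
aps = {1: {"ssid": "home-ap"}, 6: {"ssid": "office-ap"}}
found = active_scan(range(1, 12), lambda ch: aps.get(ch))
print(found)  # {1: {'ssid': 'home-ap'}, 6: {'ssid': 'office-ap'}}
```

Passive scanning differs only in the per-channel step: instead of transmitting a probe request, the STA waits on each channel for a periodic beacon frame.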
  • the STA discovering the network may perform an authentication process through step S320.
  • This authentication process may be referred to as a first authentication process in order to clearly distinguish it from the security setup operation of step S340 to be described later.
  • the authentication process of S320 may include a process in which the STA transmits an authentication request frame to the AP, and in response thereto, the AP transmits an authentication response frame to the STA.
  • An authentication frame used for an authentication request/response corresponds to a management frame.
  • the authentication frame may include information on an authentication algorithm number, an authentication transaction sequence number, a status code, a challenge text, a Robust Security Network (RSN), a Finite Cyclic Group, and the like.
  • the STA may transmit an authentication request frame to the AP.
  • the AP may determine whether to allow authentication for the corresponding STA based on information included in the received authentication request frame.
  • the AP may provide the result of the authentication process to the STA through the authentication response frame.
  • the successfully authenticated STA may perform an association process based on step S330.
  • the association process includes a process in which the STA transmits an association request frame to the AP, and in response, the AP transmits an association response frame to the STA.
  • the association request frame may include information related to various capabilities, a beacon listen interval, a service set identifier (SSID), supported rates, supported channels, an RSN, a mobility domain, a TIM (Traffic Indication Map) broadcast request, and the like.
  • the association response frame may include information related to various capabilities, a status code, an association ID (AID), supported rates, an Enhanced Distributed Channel Access (EDCA) parameter set, a Received Channel Power Indicator (RCPI), a Received Signal to Noise Indicator (RSNI), a mobility domain, a timeout interval (association comeback time), an overlapping BSS scan parameter, a TIM broadcast response, a QoS map, and the like.
  • In step S340, the STA may perform a security setup process.
  • the security setup process of step S340 may include, for example, a process of private key setup through 4-way handshaking using an Extensible Authentication Protocol over LAN (EAPOL) frame.
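The four-step link setup of FIG. 3 (scanning, authentication, association, security setup) can be summarized as a simple sequential state machine. The step names and the `ap_responses` success flags below are illustrative simplifications, not actual IEEE 802.11 frame handling:

```python
# Link setup proceeds through four steps in order; a failure at any step
# stops the process before the following steps (illustrative model).
STEPS = ["scanning", "authentication", "association", "security_setup"]

def link_setup(ap_responses):
    """Walk the four steps; `ap_responses` maps a step name to a success flag
    standing in for the corresponding request/response frame exchange."""
    completed = []
    for step in STEPS:
        if not ap_responses.get(step, False):
            return completed           # setup stops at the first failure
        completed.append(step)
    return completed

ok = link_setup({s: True for s in STEPS})
print(ok)  # ['scanning', 'authentication', 'association', 'security_setup']
```

For example, a failed authentication response leaves the STA with only the scanning step completed, so no association request is ever sent.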
  • WiFi networks have grown very rapidly because they offer high throughput and are easy to deploy.
  • this specification comprehensively reviews the signal processing technologies, algorithms, applications, and performance results of WiFi sensing using CSI (Channel State Information).
  • Different WiFi sensing algorithms and signal processing technologies have their own advantages and limitations and are suitable for different WiFi sensing applications.
  • This specification classifies CSI-based WiFi sensing applications into three categories: sensing, recognition, and estimation according to whether the output is binary/multi-class classification or numeric. With the development and deployment of new WiFi technologies, there will be more WiFi sensing opportunities where objects can move from humans to the environment, animals and objects.
  • This specification highlights three challenges in WiFi sensing: robustness and generalization, privacy and security, and the coexistence of WiFi sensing and networking.
  • this specification proposes three future WiFi sensing trends: inter-layer network information integration, multi-device cooperation, and fusion of different sensors to enhance the existing WiFi sensing function and enable new WiFi sensing opportunities.
  • MIMO (Multiple-Input Multiple-Output)
  • OFDM (Orthogonal Frequency-Division Multiplexing)
  • CSI (channel state information)
  • CSI characterizes how a wireless signal propagates from a transmitter to a receiver at specific carrier frequencies along multiple paths.
  • CSI is a 3D matrix of complex values representing the amplitude attenuation and phase shift of a multipath WiFi channel.
  • Time series of CSI measurements can be used for other wireless sensing applications by capturing how radio signals travel through surrounding objects and people in time, frequency, and spatial domains.
  • CSI amplitude fluctuations in the time domain have different patterns for different humans, activities, and gestures, which can be used for human presence detection, fall detection, motion detection, activity recognition, gesture recognition, and human identification/authentication.
  • CSI phase shifts in the spatial and frequency domains (i.e., across transmit/receive antennas and carrier frequencies) are related to the transmission delay and direction of the signal and can be used for localization and tracking.
  • the CSI phase shift in the time domain can have other dominant frequency components that can be used to estimate the respiration rate.
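A minimal sketch of extracting the amplitude attenuation and phase shift from such a complex-valued CSI matrix; the tiny 1x2x3 snapshot below is fabricated for illustration:

```python
import cmath

# A CSI snapshot is a 3-D array of complex values indexed by
# (tx antenna, rx antenna, subcarrier); here a 1x2x3 example (illustrative).
csi = [[[1 + 1j, 0.5 + 0.5j, 2 + 0j],
        [0 + 1j, 1 + 0j, 1 - 1j]]]

def amplitude(h):
    """Amplitude attenuation of one complex channel coefficient."""
    return abs(h)

def phase(h):
    """Phase shift (radians) of one complex channel coefficient."""
    return cmath.phase(h)

# Per-subcarrier amplitudes for tx 0 / rx 0: the time series of these values
# across successive packets is what fluctuates with human motion.
amps = [amplitude(h) for h in csi[0][0]]
# Per-subcarrier phases for tx 0 / rx 1: phase differences across antennas
# and subcarriers carry the spatial/frequency-domain information.
phases = [phase(h) for h in csi[0][1]]
print([round(a, 3) for a in amps])  # [1.414, 0.707, 2.0]
```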
  • Various WiFi sensing applications have specific requirements for signal processing techniques and classification/estimation algorithms.
  • This specification proposes signal processing technologies, algorithms, applications, performance results, challenges, and future trends of WiFi sensing through CSI to increase understanding of existing WiFi sensing technologies and gain insight into future WiFi sensing directions.
  • FIG. 4 shows a flowchart of a WiFi sensing procedure.
  • In the Input stage 410, WiFi signals (e.g., CSI measurement values) are input; this stage covers the mathematical model, the measurement procedure, actual WiFi models, basic processing principles, and experimental platforms.
  • Raw CSI measurements are fed to a signal processing module for noise reduction, signal conversion and/or signal extraction as indicated by the Signal Processing stage 420 .
  • the pre-processed CSI traces are fed into modeling-based, learning-based, or hybrid algorithms, as indicated by the Algorithm stage 430, to obtain the output for various WiFi sensing purposes. Depending on the output type, WiFi sensing can be classified into three categories.
  • the detection/recognition applications try to solve binary/multi-class classification problems, while the estimation applications try to obtain quantitative values for different tasks.
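The Input, Signal Processing, and Algorithm stages of FIG. 4 can be sketched end to end. The moving-average filter and variance-threshold rule below are illustrative stand-ins for the denoising and modeling-based detection algorithms, not methods from the specification:

```python
# Input stage: raw CSI amplitude samples (fabricated; a burst marks movement).
raw = [1.0, 1.1, 0.9, 4.0, 4.2, 1.0, 0.8]

def moving_average(samples, k=3):
    """Signal Processing stage: simple noise reduction over raw amplitudes."""
    out = []
    for i in range(len(samples) - k + 1):
        out.append(sum(samples[i:i + k]) / k)
    return out

def detect_motion(filtered, threshold=0.5):
    """Algorithm stage (detection category): binary output from the variance
    of the filtered trace -- high variance suggests movement."""
    mean = sum(filtered) / len(filtered)
    var = sum((x - mean) ** 2 for x in filtered) / len(filtered)
    return var > threshold

smooth = moving_average(raw)
print(detect_motion(smooth))  # True
```

A recognition application would replace the binary threshold with a multi-class classifier, and an estimation application would output a quantity (e.g., a breathing rate) instead of a class label.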
  • FIG. 5 shows a flow diagram of a general procedure for sensing human activity via a wireless signal.
  • the sensing system first extracts the signal changes related to human activity using different sensing methods (e.g., Received Signal Strength Indicator (RSSI), Channel State Information (CSI), Frequency Modulated Carrier Wave (FMCW), and Doppler shift).
  • a series of signal preprocessing procedures (e.g., filtering, denoising, and correction) is then employed to mitigate the effects of interference, ambient noise, and system offsets.
  • Finally, unique features are extracted and fed into machine learning models to perform human activity detection and recognition.
  • the human activity sensing procedure of FIG. 5 is as follows.
  • the IoT-based future smart home market is changing from device-connection-centric to service-centric, and as a result, the need for AI-device-based personalization and automation services is increasing.
  • Wireless-sensing-based technology, one of the element technologies for IoT services of artificial intelligence devices, is being actively developed, including research on user identification that learns the patterns of wireless signals.
  • Because the signal pattern produced by a user's movement varies with the environment even for the same user, a general model cannot be created and pre-distributed for a commercial product. A model must instead be created through learning suited to each environment; however, the supervised pre-learning used in existing research requires user participation to collect and label training data, so its practicality for commercialization is low.
  • the present specification proposes a post-learning automation method for wireless sensing-based user identification.
  • Learning methods for post-learning may take several forms, such as unsupervised, supervised, semi-supervised, and unsupervised/supervised fusion learning.
  • CSI measurement collection: collect CSI measurement values of 30 to 52 subcarriers based on a 20 MHz bandwidth, one set per TX/RX antenna pair.
  • FIG. 6 shows a CSI spectrogram according to a human gait.
  • Torso reflection and leg reflection are illustrated in a CSI spectrogram in the time/frequency domain.
  • The CSI spectrogram has a certain cycle time.
  • Example of a human activity estimation method: time-domain features (max, min, mean, skewness, kurtosis, std), which are low-level features of CSI, are used to capture human movements and contours; frequency-domain features (spectrogram energy, percentile frequency component, spectrogram energy difference) are used to predict the movement speed of the torso and legs; walking or stationary activities are then expressed using these features.
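  • The low-level feature computation above can be sketched as follows. This is an illustrative example only; the toy CSI amplitude signal, sampling rate, and spectrogram window length are assumptions, and the spectrogram energy stands in for the frequency-domain features listed above.

```python
# Illustrative computation of the time-domain CSI features listed above
# (max, min, mean, skewness, kurtosis, std) plus spectrogram energy as one
# frequency-domain feature. Signal and parameters are assumptions.
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.signal import spectrogram

def time_domain_features(x):
    return {
        "max": float(np.max(x)),
        "min": float(np.min(x)),
        "mean": float(np.mean(x)),
        "std": float(np.std(x)),
        "skewness": float(skew(x)),
        "kurtosis": float(kurtosis(x)),   # Fisher definition (normal -> 0)
    }

def spectrogram_energy(x, fs=100.0):
    # Total energy of the signal's spectrogram in the time/frequency domain.
    _, _, Sxx = spectrogram(x, fs=fs, nperseg=64)
    return float(np.sum(Sxx))

# Toy CSI amplitude: a 1.5 Hz sinusoid sampled for 4 s at 100 Hz.
csi = np.sin(2 * np.pi * 1.5 * np.arange(0, 4, 0.01))
feats = time_domain_features(csi)
feats["spectrogram_energy"] = spectrogram_energy(csi)
```

  • In a real system, such feature vectors would be computed per time window and supplied to the learning models described below.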
  • Supervised learning: machine learning and deep learning algorithms are used, such as a decision-tree-based machine learning classifier, an SVM (Support Vector Machine), and a Softmax classifier.
  • In some studies, the predictive model is created by supervised learning alone, and an unsupervised learning algorithm is used only to construct the layers of the supervised learning model.
  • Training data is prepared by manually mapping the correct answer (i.e., a label) for each person and is used as input to the machine/deep learning model.
  • auto feature extraction and clustering are performed using unsupervised learning to increase the degree of freedom in the data collection environment, and then user identification is performed using a supervised learning model (eg, Softmax classifier).
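  • The hybrid approach above (unsupervised auto feature extraction followed by a supervised softmax classifier) can be sketched as follows. This is a minimal illustration: PCA stands in for the per-layer autoencoder, synthetic per-user feature vectors replace real CSI measurements, and all shapes and parameters are assumptions.

```python
# Sketch: unsupervised feature extraction (PCA, standing in for an
# autoencoder) followed by softmax classification for user identification.
# The synthetic data and all dimensions are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_users, samples_per_user, dim = 3, 60, 30      # e.g., 30 subcarriers
# Each "user" produces a distinct signal pattern (shifted Gaussian cloud).
X = np.vstack([rng.normal(loc=u, scale=0.5, size=(samples_per_user, dim))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), samples_per_user)

model = make_pipeline(
    PCA(n_components=8),                        # unsupervised feature extraction
    LogisticRegression(max_iter=1000),          # softmax classification
)
model.fit(X, y)
accuracy = model.score(X, y)
```

  • The design point illustrated here is that only the classifier head requires labels; the feature-extraction stage can be trained without them, which is what increases the degree of freedom in the data collection environment.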
  • Unsupervised learning is a learning method that studies only the problem, without being taught the answer (label). In unsupervised learning, an answer is found by clustering (a typical example of unsupervised learning) based on the relationships between variables (e.g., recommending a YouTube channel, classifying animals).
  • supervised learning is a learning method that teaches and studies answers.
  • Supervised learning is divided into regression and classification.
  • Regression is a learning method that predicts outcomes within a range of continuous data (eg, age 0-100).
  • Classification is a learning method that predicts outcomes within a range of discretely separated data (for example, whether a tumor is malignant or benign).
  • Semi-supervised learning is a method that learns from labeled and unlabeled data simultaneously; it studies the large amount of unlabeled data rather than discarding it.
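  • A toy illustration of the semi-supervised idea above: a few labeled samples plus many unlabeled ones (marked with label -1) train a label-propagation model. The data set is synthetic and purely illustrative; LabelPropagation is one possible semi-supervised algorithm, not the one mandated by this specification.

```python
# Semi-supervised sketch: two well-separated synthetic clusters, only one
# labeled point per cluster; the rest (label -1) are learned from anyway.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, scale=0.3, size=(50, 2))   # cluster for class 0
X1 = rng.normal(loc=3.0, scale=0.3, size=(50, 2))   # cluster for class 1
X = np.vstack([X0, X1])
y = np.full(100, -1)                                # -1 marks "no answer"
y[0], y[50] = 0, 1                                  # only two labeled samples

model = LabelPropagation().fit(X, y)
predicted = model.predict(X)
```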
  • FIG. 7 shows a deep learning architecture for user authentication.
  • the deep learning architecture of FIG. 7 is an example in which auto feature extraction is performed using an autoencoder for each hidden layer, and softmax classification is used for classification.
  • the supervised learning model constitutes each hidden layer, and the unsupervised learning model is used only for constructing the corresponding layer.
  • Activity Separation, Activity Recognition, and User Authentication of FIG. 7 all use features obtained by auto feature extraction.
  • the IoT future smart home market is changing from device connection-centric to service-centric, and as a result, the need for AI device-based personalization and automation services is increasing.
  • Wireless-sensing-based technology, one of the element technologies for IoT services of artificial intelligence devices, is being actively developed, including research on human recognition and user/gesture identification that learns the patterns of wireless signals. Here, a wireless-sensing-based device independently performs the procedure of measuring, processing, and predicting a wireless signal such as Wi-Fi CSI (Channel State Information).
  • Supported capabilities may differ depending on the device type (for example, a water purifier may only collect wireless signals, while a refrigerator may collect wireless signals and learn/predict from them; such restrictions stem from device resources).
  • Devices that do not support learning/prediction (including legacy devices) need to learn and predict with the help of peripheral devices that support higher capabilities.
  • the present specification proposes a wireless sensing-based cooperative architecture protocol and signaling (Cooperation Architecture Protocol & Signaling) method.
  • Capabilities information can be exchanged with each other through mutual negotiation.
  • A representative device may be selected through the negotiation process.
  • Hard Decision and Soft Decision information can be exchanged according to the capabilities of the device (the device status, load, and the like may also be considered).
  • The hard decision information is the wireless-sensing-based identification decision result, and the soft decision information is the prior information required for wireless-sensing-based identification (e.g., raw signal data, data that has undergone signal preprocessing, input data for learning, and other prior data for prediction).
  • An interface between the Wireless PHY/MAC and an application can be defined to exchange information between devices for cooperation (by doing so, hard decision and soft decision information can be exchanged without going through an upper layer).
  • The existing wireless-sensing-based protocols and operation methods are as follows. 1) The transmitting device transmits a measurable signal such as Wi-Fi CSI (Channel State Information). 2) The receiving device measures the CSI radio signal sent from the transmitting device. 3) The transmitting/receiving device performs wireless signal pre-processing to refine the collected signal. 4) The transmitting/receiving device extracts features for learning and prediction (Feature Extraction). 5) The transmitting/receiving device divides the data set that has undergone Wireless Signal Pre-processing and Feature Extraction at an appropriate ratio (e.g., 8:2), uses the larger part as input data for learning, and uses the remaining data to evaluate the learning model.
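  • The data-set split of step 5 can be sketched as follows. The 8:2 ratio comes from the example above; the feature matrix, label vector, and random seed are illustrative assumptions.

```python
# Sketch of step 5: split the pre-processed/feature-extracted data set 8:2,
# using the larger part for training and the remainder for evaluation.
import numpy as np

def split_dataset(features, labels, train_ratio=0.8, seed=0):
    n = len(features)
    idx = np.random.default_rng(seed).permutation(n)   # shuffle once
    cut = int(n * train_ratio)
    train_idx, eval_idx = idx[:cut], idx[cut:]
    return (features[train_idx], labels[train_idx],
            features[eval_idx], labels[eval_idx])

X = np.arange(100 * 4).reshape(100, 4).astype(float)   # toy feature matrix
y = np.arange(100) % 2                                 # toy labels
X_tr, y_tr, X_ev, y_ev = split_dataset(X, y)
```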
  • FIG. 8 shows a problem that occurs when a wireless sensing-based device independently performs a procedure for measuring, processing, and predicting a wireless signal.
  • a wireless sensing-based device independently performs a procedure of measuring, processing, and predicting a wireless signal such as Wi-Fi CSI (Channel State Information).
  • Supported capabilities may differ depending on the device type. For example, some devices can only collect wireless signals while others can collect wireless signals and learn/predict from them; a device that can only collect wireless signals cannot learn or predict (a compatibility problem with legacy products).
  • A device capable of learning/predicting from collected wireless signals cannot learn the capabilities of a device that can only collect wireless signals, and cannot receive the wireless signals that device has collected.
  • the AP transmits a wireless signal such as Wi-Fi CSI.
  • TV and air conditioner measure this radio signal.
  • TV includes AI (Artificial Intelligence) function and can learn/predict, but air conditioner does not include AI function, so learning/prediction is impossible.
  • The TV measures, learns from, and predicts radio signals such as CSI and recognizes Paul passing in front of the TV, but the air conditioner does not.
  • The result of the TV's learning about Paul is not transmitted from the TV to the air conditioner, and (7) Paul's wireless signal is not transmitted from the air conditioner to the TV. That is, each device can operate only according to its own capabilities, and since the devices do not cooperate with each other, it is impossible to learn and predict with the help of peripheral devices.
  • FIG. 9 shows a block diagram of a wireless sensing device.
  • FIG. 10 is a block diagram of a functional unit of a wireless sensing device.
  • FIGS. 9 and 10 may be defined as follows.
  • the Wireless PHY/MAC Driver block 10 serves to exchange information with the PHY/MAC layer of the wireless sensing device.
  • the device discovery block 20 serves to discover peripheral devices.
  • the Capabilities Negotiation block 30 serves to negotiate whether the discovered peripheral devices are wireless sensing or not, representative device settings, and decision methods between devices.
  • the Wireless Sensing block 40 serves to transmit and collect wireless signals such as Wi-Fi CSI (Channel State Information).
  • the Signal Pre-Processing block 50 serves to perform CSI Measurement, Phase Offset Calibration, De-Noising, and the like.
  • the Feature Selection & Extraction block 60 serves to select and extract features for learning and prediction.
  • the Machine/Deep Learning block 70 serves to perform Training & Prediction through various Machine/Deep Learning-based algorithms.
  • the information exchange network unit 80 is a wireless network that transmits and receives information of the Device Discovery block 20 , the Capabilities Negotiation block 30 , and the Wireless Sensing block 40 .
  • The AI Cloud 90 is a cloud server that must include the Deep/Machine Learning 70 function and, depending on the connected device configuration, can also perform some or all of the Signal Pre-processing 50 and Feature Selection/Extraction 60 functions.
  • the cloud information exchange network unit 100 is a network for exchanging information between the cloud and the wireless sensing device.
  • FIG. 11 is a block diagram of a wireless sensing device including an interface.
  • The Soft Decision Interface 110 is an interface between the PHY/MAC and the application for exchanging the Wireless Sensing 40, Signal Pre-Processing 50, and Feature Selection/Extraction 60 information in the wireless sensing device with the Wireless PHY/MAC Driver 10.
  • the Hard Decision Interface 120 is an interface between the PHY/MAC and the Application for exchanging the Deep/Machine Learning 70 information in the wireless sensing device with the Wireless PHY/MAC Driver 10 .
  • Hard Decision information is data that has gone through Wireless Sensing 40 data collection, Signal Pre-processing 50, Feature Selection/Extraction 60, and Machine/Deep Learning 70 (it can be a prediction result determined by AI).
  • Soft Decision information has three types: Soft Decision 1 may be data that has been collected through the Wireless Sensing 40 process (it may be raw data).
  • Soft Decision 2 may be data that has gone through the wireless sensing (40) and signal pre-processing (50) processes (it may be in the form of noise removal through signal pre-processing).
  • Soft Decision 3 may be data (which may be a sensing prediction result through AI) that has gone through the wireless sensing (40), signal pre-processing (50), and feature selection/extraction (60) processes.
  • The decision method of a device may include one or more of the above definitions according to its capabilities.
  • a device that supports AI can share hard decision and soft decision information with other devices, and a device that does not support AI can share only soft decision information with other devices.
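  • The interface routing described above can be sketched as follows. The function and interface names are illustrative stand-ins for the Soft Decision Interface 110 and Hard Decision Interface 120; the rule that a non-AI device shares only soft decisions comes from the passage above.

```python
# Sketch: route decision information to the interface toward the Wireless
# PHY/MAC Driver 10 without passing through an upper layer. Names assumed.
def route_interface(decision_type, supports_ai):
    """Return the interface a piece of decision information travels through."""
    if decision_type not in ("soft", "hard"):
        raise ValueError("decision_type must be 'soft' or 'hard'")
    if decision_type == "hard" and not supports_ai:
        # A device that does not support AI can share only soft decisions.
        raise ValueError("non-AI device cannot share hard decision information")
    return ("hard_decision_interface_120" if decision_type == "hard"
            else "soft_decision_interface_110")
```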
  • FIG. 12 is a diagram illustrating a type of cooperative device.
  • Level 1 includes functions of Wireless PHY/MAC Driver (10), Device Discovery (20), Capabilities Negotiation (30), and Wireless Sensing (40).
  • Level 1 Device is defined as a device that can collect Wireless Sensing Raw data. Level 1 device can transmit soft decision (sensing raw data) to other devices and receive hard decision (prediction results) from other devices.
  • Level 2 includes the function of Signal Pre-processing (50) in the function of Level 1 Device.
  • Level 2 Device is defined as a device capable of noise removal and signal refinement through signal pre-processing of the collected sensing data.
  • Level 2 Device can deliver Soft Decision (Sensing Raw Data or Signal Pre-processed Data) to other devices, and can receive Hard Decision (prediction results) from other devices.
  • Level 3 includes the function of Feature Selection/Extraction (60) in the function of Level 2 Device.
  • Level 3 Device is defined as a device that can select and extract features from refined sensing data to generate input data for machine learning learning/prediction.
  • Level 3 Device can deliver Soft Decision (Sensing Raw Data, Signal Pre-processed Data, or Input Data for learning/prediction) to other devices, and can receive Hard Decision (prediction results) from other devices.
  • Level 4 includes the function of Deep/Machine Learning (70) in the function of Level 3 Device.
  • Level 4 Device is defined as a device that can perform machine learning learning/prediction by collecting and preprocessing sensing data.
  • Level 4 Device transmits Soft Decision (Sensing Raw Data, Signal Pre-processed Data, or Input Data for learning/prediction) or Hard Decision (prediction result) to other devices, and receives Hard Decision (prediction result) from other devices.
  • The AI Cloud can play the role of receiving soft decisions from Level 1 to 4 devices and delivering hard decisions.
  • The AI Cloud may include the Signal Pre-Processing 50 and Feature Selection/Extraction 60 functions, and must include Machine/Deep Learning 70 (in order to deliver a Hard Decision).
  • both Level 1 to Level 4 devices can receive a hard decision from other devices (AI Cloud or Level 4 Device), and there is a difference in the soft decision information they deliver for each level (in the case of Level 4 devices, hard decision information is also forwardable).
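  • The level-based sharing rules above can be summarized in code as follows. This is a sketch under the definitions given for Level 1 to 4 devices (a device of level L delivers Soft Decisions 1 through min(L, 3), a Level 4 device can additionally deliver a Hard Decision, and every level can receive a Hard Decision); the function names are assumptions.

```python
# Sketch of the per-level decision-sharing rules for cooperative devices.
def deliverable_decisions(level):
    """Decision types a device of the given level can deliver to others."""
    if not 1 <= level <= 4:
        raise ValueError("level must be 1..4")
    decisions = [f"soft_{k}" for k in range(1, min(level, 3) + 1)]
    if level == 4:                   # only Level 4 (AI-capable) devices
        decisions.append("hard")     # can also forward prediction results
    return decisions

def can_receive_hard_decision(level):
    # All Level 1 to 4 devices can receive a hard decision (prediction result).
    return 1 <= level <= 4
```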
  • FIG. 13 shows an example of a procedure in which wireless sensing devices cooperate to perform learning and prediction. Specifically, FIG. 13 shows the protocol and signaling procedure of the cooperative architecture when Device B provides Soft Decision information to Device A.
  • Device A and Device B may search for and detect a device while transmitting and receiving Device Discovery Request/Response through the Device Discovery 20 and the information exchange network unit 80 . For example, when Device A transmits a Device Discovery Request to Device B, Device B may respond with a Device Discovery Response, and Device A may confirm that Device B exists.
  • Device A and Device B may perform Capabilities Negotiation while transmitting and receiving Capabilities Negotiation Request/Response/Confirm through the Capabilities Negotiation 30 and the information exchange network unit 80 .
  • Device A transmits a Capabilities Negotiation Request to Device B
  • Device B may respond with a Capabilities Negotiation Response
  • Device A may send a Capabilities Negotiation Confirm to Device B, thereby completing Capabilities Negotiation.
  • Device A and Device B can set 1) whether wireless sensing is supported, 2) decision method, and 3) representative device setting. That is, Capabilities information may be exchanged between devices through Capabilities Negotiation, and a representative device may be set.
  • the device that has completed device discovery transmits a Capabilities Negotiation Request to start Capabilities Negotiation between devices.
  • The device that received the Capabilities Negotiation Request transmits its capabilities (Wireless Sensing support, decision method, etc.) in a Capabilities Negotiation Response to the device that sent the Request.
  • The device that received the Capabilities Negotiation Response compares the received capabilities with its own, sets the representative device, and transmits its capabilities (Wireless Sensing support, decision method, and representative device setting) in the Capabilities Negotiation Confirm.
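  • The Request/Response/Confirm exchange above can be sketched as follows. The message fields and the level-ranking tie-break for choosing the representative device are illustrative assumptions based on the negotiation steps described above.

```python
# Sketch of the Capabilities Negotiation exchange (Request -> Response ->
# Confirm) including representative-device selection by level ranking.
from dataclasses import dataclass

@dataclass
class Capabilities:
    name: str
    sensing_supported: bool
    level: int                      # 1..4, as defined for cooperative devices

def negotiate(initiator, responder):
    # Request: the initiator advertises its own capabilities.
    request = {"type": "negotiation_request", "caps": initiator}
    # Response: the responder returns its capabilities.
    response = {"type": "negotiation_response", "caps": responder}
    # Confirm: the initiator compares levels and sets the representative
    # device (the higher level ranking wins in this sketch).
    rep = initiator if initiator.level >= responder.level else responder
    confirm = {"type": "negotiation_confirm", "representative": rep.name}
    return request, response, confirm

a = Capabilities("Device A", sensing_supported=True, level=2)
b = Capabilities("Device B", sensing_supported=True, level=1)
_, _, confirm = negotiate(a, b)
```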
  • a device having capabilities of Level 1 or higher delivers information about supporting wireless sensing to the counterpart.
  • In the decision method exchange, a device delivers the decision information matching its own capabilities to the other party according to the decision method definitions described above.
  • When setting the representative device, an AI-supported device may be set as the representative device in preference to a non-AI-supported device.
  • A device with a higher level ranking may be set as the representative device.
  • A device with better performance may be set as the representative device.
  • the device connected to the AI Cloud 90 may be set as the representative device.
  • Each device may perform an operation for its level according to its capabilities. That is, Device A and Device B perform 1) transmission and reception of Wireless Sensing Data regardless of their capabilities, and may perform one or more of 2) Signal Pre-Processing, 3) Feature Selection/Extraction, and 4) Deep/Machine Learning according to their capabilities.
  • a device may transmit at least one of 5) Soft Decision or 6) Hard Decision information to another device for cooperation.
  • A Soft Decision can be delivered in the form of one of: 1) the sensing data itself collected through Wireless Sensing, 2) signal data pre-processed through Signal Pre-Processing, or 3) input data prepared for machine learning learning/prediction through Feature Selection/Extraction.
  • A Hard Decision can be the result predicted by a device with AI capabilities (a Level 4 Device or a device connected to the AI Cloud) by fusing the data it collects with data acquired in the process of cooperation.
  • the device may transmit 5) Soft Decision or 6) Hard Decision information through the information exchange network unit 80 .
  • the detailed information delivery method is as follows.
  • a device can deliver soft decision or hard decision information to a device having a higher level function or a device having the same level function according to the capabilities of each device.
  • Soft Decision 1 may be delivered to a device including Level 1, 2, 3, and 4 functions and to the AI Cloud 90 .
  • Soft Decisions 1 and 2 may be delivered to a device including Level 2, 3, and 4 functions and to the AI Cloud (90).
  • Soft Decisions 1, 2, and 3 may be delivered to devices including Level 3 and 4 functions and to the AI Cloud 90 .
  • Soft Decision 1, 2, 3, and Hard Decision information may be delivered to a device including a Level 4 function and the AI Cloud 90 .
  • a device or AI Cloud that includes a Level 4 function with Hard Decision can share Hard Decision information with connected devices.
  • the Level 1 Device may deliver Soft Decision 1 to Level 1, 2, 3, and 4 Devices and the AI Cloud 90 .
  • Level 2 Device may deliver Soft Decision 1 and 2 to Level 2, 3, 4 Device and AI Cloud 90.
  • Level 3 Device may deliver Soft Decision 1, 2, and 3 to Level 3, 4 Device and AI Cloud (90).
  • the Level 4 Device may deliver Soft Decision 1, 2, 3 and Hard Decision information to the Level 4 Device and the AI Cloud 90 .
  • Level 4 Device or AI Cloud can share Hard Decision information with connected devices.
  • Example 1 of information delivery shows sharing the result (or decision) by predicting in the AI Cloud.
  • Example 2 of information delivery shows sharing the result (or decision) by predicting at the representative device.
  • Example 3 of information delivery shows sharing the result (or decision) by predicting at each device.
  • FIG. 15 is a diagram illustrating Example 1, in which prediction is performed in the AI Cloud and the result is shared.
  • the Level 2 Device performs peripheral device discovery ( 20 ), and when two Level 1 devices are detected, it performs Capabilities Negotiation ( 30 ) with the detected Device.
  • the devices determine 1) whether wireless sensing is supported, 2) a decision method, and 3) a representative device through the capabilities negotiation (30). In this case, a Level 2 Device with a higher Level ranking is determined as the representative Device.
  • a Level 2 Device or AP transmits a measurable signal including CSI Information to Level 1 Devices that have performed Capabilities Negotiation (30) (Wireless Sensing (40)).
  • Level 2 Device and Level 1 Devices measure a measurable signal including CSI information (Wireless Sensing (40)).
  • The Level 2 Device and Level 1 Devices that received the signal collect CSI information 40 and then transmit decision information to the representative device (Level 2 Device) through the information exchange network unit 80 according to the negotiation result.
  • the representative device processes the decision information received through the information exchange network unit 80 as data and delivers it to the AI Cloud 90 through the cloud information exchange network unit 100 .
  • If the decision information delivered from the representative device is Soft Decision 1, the sensing raw data (CSI Information 40) collected through wireless sensing is delivered to the AI Cloud 90 through the cloud information exchange network unit 100.
  • If the decision information is Soft Decision 2, the signal data 50 obtained by pre-processing the collected sensing raw data is delivered to the AI Cloud 90 through the cloud information exchange network unit 100.
  • The result (Hard Decision) is delivered from the AI Cloud to the representative device (Level 2 Device) through the cloud information exchange network unit 100, and the representative device shares the result with the other devices (Level 1 Devices) through the information exchange network unit 80.
  • When the AI Cloud 90 receives Soft Decision 1 from the representative device, it processes the delivered data through Signal Pre-processing 50, Feature Extraction 60, and Machine/Deep-Learning-based training and prediction 70, and delivers the result to the representative device.
  • When the AI Cloud 90 receives Soft Decision 2 from the representative device, it processes the data through Feature Extraction 60 and Machine/Deep-Learning-based training and prediction 70, and delivers the result to the representative device.
  • FIG. 16 is a diagram illustrating Example 2, in which the representative device performs prediction and shares the result.
  • The Level 4 Device performs peripheral device discovery 20, and when a Level 2 Device and a Level 1 Device are detected, it performs Capabilities Negotiation 30 with the detected devices.
  • the devices determine 1) whether wireless sensing is supported, 2) a decision method, and 3) a representative device through the capabilities negotiation (30). At this time, a Level 4 Device with a higher Level ranking is determined as the representative Device.
  • the Level 4 Device or AP transmits a measurable signal including CSI Information to the Level 4 Device, Level 2 Device, and Level 1 Device that have performed Capabilities Negotiation (30) (Wireless Sensing (40)).
  • Level 4 Device, Level 2 Device, and Level 1 Device measure a measurable signal including CSI Information (Wireless Sensing (40)).
  • The Level 4 Device, Level 2 Device, and Level 1 Device that received the signal collect CSI information 40 and then transmit decision information (Soft Decision 1 or 2) to the representative device (Level 4 Device) through the information exchange network unit 80 according to the negotiation result.
  • The representative device processes the decision information received through the information exchange network unit 80 and shares the result (Hard Decision) through the information exchange network unit 80.
  • For the Level 2 Device, the information delivery method is as follows. If the negotiation result for the Level 2 Device is Soft Decision 1, the Level 4 Device that received Soft Decision 1 processes the data through Signal Pre-processing 50, Feature Extraction 60, and Machine/Deep-Learning-based training and prediction 70, and delivers the result to the Level 2 Device. If the negotiation result for the Level 2 Device is Soft Decision 2, the Level 4 Device that received Soft Decision 2 processes the data through Feature Extraction 60 and Machine/Deep-Learning-based training and prediction 70, and delivers the result to the Level 2 Device.
  • For the Level 1 Device, the information delivery method is as follows. Since the negotiation result for the Level 1 Device is Soft Decision 1, the Level 4 Device that received Soft Decision 1 processes the data through Signal Pre-processing 50, Feature Extraction 60, and Machine/Deep-Learning-based training and prediction 70, and delivers the result to the Level 1 Device.
  • FIG. 17 is a diagram illustrating Example 3, in which prediction is performed at each device and the results are shared.
  • the Level 4 Device performs peripheral device discovery ( 20 ), and when two Level 4 devices are detected, it performs Capabilities Negotiation ( 30 ) with the detected devices.
  • the devices determine 1) whether wireless sensing is supported, 2) a decision method, and 3) a representative device through the capabilities negotiation (30). Since the level ranking of each device is the same, a device with good performance, good traffic condition, connected to the AI Cloud 90, or performing Deep/Machine Learning can be a representative device. In this embodiment, it is assumed that the Level 4 Device (1) is determined as the representative device.
  • Level 4 Device(1) or AP transmits a measurable signal including CSI Information to Level 4 Device(1), Level 4 Device(2), Level 4 Device(3) that performed Capabilities Negotiation(30) (Wireless Sensing (40)).
  • Level 4 Device(1), Level 4 Device(2), and Level 4 Device(3) measure a measurable signal including CSI Information (Wireless Sensing(40)).
  • The Level 4 Devices (1), (2), and (3) that received the signal collect CSI information 40, and each shares its decision information (Hard Decision) through the information exchange network unit 80. Alternatively, after exchanging decision information (Soft Decision) according to the status of Level 4 Device (1), Level 4 Device (2), and Level 4 Device (3), the device performing Deep/Machine Learning predicts the result and informs the others of the decision information (Hard Decision).
  • FIG. 18 is a flowchart illustrating a procedure in which a wireless device performs wireless sensing in cooperation with another device according to the present embodiment.
  • This embodiment proposes a method of identifying a user (or gesture) by performing learning and prediction through mutual cooperation when there are a plurality of devices based on wireless sensing.
  • In step S1810, the first device performs capability negotiation (Capabilities Negotiation) with the second device.
  • In step S1820, the first device receives first decision information from the second device based on the result of the capability negotiation.
  • In step S1830, the first device transmits, to the second device, second decision information that is a result of processing the first decision information.
  • The first decision information is prior information required for wireless-sensing-based identification (a soft decision), and the second decision information is a result of identification based on the wireless sensing (a hard decision).
  • Before performing the capability negotiation, the first device may perform device discovery to find the second device.
  • When the first device transmits a device discovery request to the second device, the second device may transmit a device discovery response to the first device, through which the first device can confirm the existence of the second device.
  • the first device may exchange capability information with the second device and determine a representative device based on the capability negotiation.
  • The capability information may include whether the wireless sensing is supported and the decision method for the first decision information.
  • the first decision information may be determined as one of first to third soft decisions based on the levels of the first and second devices.
  • the first soft decision may be raw data of the radio signal.
  • The second soft decision may be data obtained by pre-processing the radio signal.
  • The third soft decision may be input data extracted from the data obtained by pre-processing the radio signal.
  • the second decision information may be a result learned and predicted based on machine learning or deep learning for the first, second, or third soft decision.
  • the level of the first and second devices may be determined as one of the first to fourth levels based on the capability information.
  • When the second device is a Level 1 device, the first decision information may be the first soft decision.
  • When the second device is a Level 2 device, the first decision information may be determined as the first or second soft decision.
  • When the second device is a Level 3 device, the first decision information may be determined as the first, second, or third soft decision.
  • When the second device is a Level 4 device, the first decision information may be determined as the first, second, or third soft decision, or may be the second decision information. That is, the second device may deliver the first decision information (Soft Decision) or the second decision information (Hard Decision) to a device (the first device) having a function of a higher level or the same level, based on the capability information.
  • the representative device may be determined based on a device level, device performance, or whether it supports or is connected to an Artificial Intelligence Cloud (AI Cloud).
  • AI Cloud Artificial Intelligence Cloud
  • The information transfer process is as follows. If the first decision information is the first soft decision, the second decision information may be data that has undergone signal preprocessing, feature extraction, and a learning and prediction process based on machine learning or deep learning applied to the first decision information. If the first decision information is the second soft decision, the second decision information may be data that has undergone the feature extraction and the machine-learning- or deep-learning-based learning and prediction process applied to the first decision information. If the first decision information is the third soft decision, the second decision information may be data that has undergone the machine-learning- or deep-learning-based learning and prediction process applied to the first decision information. That is, according to the received soft decision information, the first device may transmit result information (the second decision information) to the second device after the corresponding data processing.
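  • The stage selection above can be sketched as follows: depending on which soft decision is received, only the remaining processing stages are applied before the hard decision is produced. The stage names and the stand-in stage functions are illustrative assumptions.

```python
# Sketch: choose the remaining processing stages based on the received soft
# decision type, then apply them in order to produce the hard decision.
def remaining_stages(soft_decision):
    pipeline = {
        1: ["signal_pre_processing", "feature_extraction", "learning_prediction"],
        2: ["feature_extraction", "learning_prediction"],
        3: ["learning_prediction"],
    }
    return pipeline[soft_decision]

def process(soft_decision, data, stages):
    # `stages` maps stage names to callables; each transforms the data.
    for name in remaining_stages(soft_decision):
        data = stages[name](data)
    return data                      # hard decision (prediction result)

# Stand-in stage implementations that just record what was applied.
stages = {
    "signal_pre_processing": lambda d: d + ["denoised"],
    "feature_extraction": lambda d: d + ["features"],
    "learning_prediction": lambda d: d + ["prediction"],
}
result = process(1, ["raw"], stages)
```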
  • the first device may transmit the first determination information to the AI cloud.
  • the first device may receive the second determination information from the AI cloud.
  • the second determination information may be obtained by the AI cloud performing a learning and prediction process based on the machine learning or the deep learning on the first determination information. That is, in the above-described embodiment, after the AI cloud processes the decision information transmitted by the devices, it may deliver result information (the second decision information) to the first device.
  • the first device may share result information received from the AI cloud with the second device.
  • each of the first and second devices may acquire result information (the second decision information) based on the result of the capability negotiation and share it across devices.
  • the first device may transmit a radio signal including channel state information (CSI) to the second device.
  • the second device may collect and measure the wireless signal.
  • the first and second devices may include a wireless PHY and MAC driver, a soft decision interface, and a hard decision interface.
  • the first decision information may be transmitted to the wireless PHY and MAC driver through the soft decision interface.
  • the second decision information may be transmitted to the wireless PHY and MAC driver through the hard decision interface. Accordingly, the first and second devices may transmit the first and second determination information to the PHY/MAC without going through an upper layer.
  • through cooperation, the device can know both the learning result and the predicted result.
  • the first and second devices may identify a user or a gesture based on the wireless sensing result.
  • FIG. 19 is a flowchart illustrating a procedure in which a wireless device performs wireless sensing in cooperation with another device according to the present embodiment.
  • This embodiment proposes a method of identifying a user (or gesture) by performing learning and prediction through mutual cooperation when there are a plurality of devices based on wireless sensing.
  • In step S1910, the second device performs capability negotiation (Capabilities Negotiation) with the first device.
  • In step S1920, the second device transmits first decision information to the first device based on the result of the capability negotiation.
  • In step S1930, the second device receives second decision information, which is a result of processing the first decision information, from the first device.
  • the first decision information is preliminary information required for identification based on wireless sensing (Soft Decision).
  • the second decision information is a result of identification based on the wireless sensing (Hard Decision).
  • Before performing the capability negotiation, the first device may perform device discovery to find the second device.
  • When the first device transmits a device discovery request to the second device, the second device may transmit a device discovery response to the first device, through which the first device may confirm the existence of the second device.
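The discovery handshake described above can be sketched as follows. This is a minimal illustration only; the class and method names (`Device`, `discovery_request`, `discovery_response`) are invented here and are not taken from the specification.

```python
class Device:
    """Minimal model of a wireless-sensing device for the discovery handshake."""

    def __init__(self, name, supports_sensing=True):
        self.name = name
        self.supports_sensing = supports_sensing
        self.known_peers = set()

    def discovery_request(self, peer):
        # The first device asks whether the second device exists / is reachable.
        return peer.discovery_response(self)

    def discovery_response(self, requester):
        # The second device answers; the requester records its existence,
        # which confirms the second device before capability negotiation.
        requester.known_peers.add(self.name)
        return {"responder": self.name, "sensing": self.supports_sensing}


first = Device("first")
second = Device("second")
reply = first.discovery_request(second)
```

After the response, `first.known_peers` contains the second device, so the first device can proceed to the capability negotiation.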
  • the first device may exchange capability information with the second device and determine a representative device based on the capability negotiation.
  • the capability information may include whether the wireless sensing is supported and the first determination information.
  • the first decision information may be determined as one of first to third soft decisions based on the levels of the first and second devices.
  • the first soft decision may be raw data of the radio signal.
  • the second soft decision may be data obtained by pre-processing the radio signal.
  • the third soft decision may be input data extracted from the data obtained by pre-processing the radio signal.
  • the second decision information may be a result learned and predicted based on machine learning or deep learning for the first, second, or third soft decision.
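The three soft-decision representations can be sketched as successive refinements of one measurement. The concrete choices below (mean removal as pre-processing; mean and variance as extracted features) are illustrative assumptions only, not processing steps stated in the specification.

```python
def first_soft_decision(samples):
    # Level 1: raw measurement data of the radio signal, unmodified.
    return list(samples)


def second_soft_decision(samples):
    # Level 2: pre-processed data; simple mean removal stands in for
    # whatever denoising/calibration a real implementation would use.
    mean = sum(samples) / len(samples)
    return [s - mean for s in samples]


def third_soft_decision(samples):
    # Level 3: feature (model-input) data extracted from the pre-processed
    # signal; mean and variance are illustrative features only.
    pre = second_soft_decision(samples)
    mean = sum(pre) / len(pre)
    var = sum((p - mean) ** 2 for p in pre) / len(pre)
    return {"mean": mean, "variance": var}


raw = [1.0, 2.0, 3.0, 4.0]
```

The second decision information (hard decision) would then be the label produced by running a trained model on the third soft decision.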
  • the level of the first and second devices may be determined as one of the first to fourth levels based on the capability information.
  • At the first level, the first decision information may be determined as the first soft decision.
  • At the second level, the first decision information may be determined as the first or second soft decision.
  • At the third level, the first decision information may be determined as the first, second, or third soft decision.
  • At the fourth level, the first decision information may be determined as the first, second, or third soft decision, or may be the second decision information. That is, based on the capability information, the second device may pass the first decision information (Soft Decision) or the second decision information (Hard Decision) to a device (the first device) having a function of a higher level or the same level.
  • the representative device may be determined based on a device level, device performance, or whether it supports or is connected to an Artificial Intelligence Cloud (AI Cloud).
  • an information transfer process is as follows. If the first decision information is the first soft decision, the second decision information may be data that has undergone a learning and prediction process based on signal preprocessing, feature extraction, and machine learning or deep learning applied to the first decision information. If the first decision information is the second soft decision, the second decision information may be data that has undergone a learning and prediction process based on the feature extraction and the machine learning or deep learning applied to the first decision information. If the first decision information is the third soft decision, the second decision information may be data that has undergone a learning and prediction process based on the deep learning applied to the first decision information. That is, according to the received soft decision information, the first device may transmit result information (the second decision information) to the second device after the corresponding data processing.
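The dispatch described in this paragraph — which processing stages the receiving device still has to run depends on which soft decision it was given — can be sketched as follows. The stage names follow the text; the stage bodies are placeholders, since the actual preprocessing and models are implementation-specific.

```python
def process_first_decision(kind, data):
    """Return the stages the receiving device applies to the given soft
    decision before it can emit a hard decision (second decision info).
    `kind` is one of the shorthand names 'soft1'/'soft2'/'soft3'."""
    stages_by_kind = {
        "soft1": ["preprocess", "extract_features", "learn_and_predict"],
        "soft2": ["extract_features", "learn_and_predict"],
        "soft3": ["learn_and_predict"],
    }
    applied = []
    for stage in stages_by_kind[kind]:
        applied.append(stage)  # a real device would transform `data` here
    return {"input": kind, "stages": applied, "result": "hard_decision"}
```

So raw data (`soft1`) goes through the full pipeline, while extracted features (`soft3`) only need the learning-and-prediction step.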
  • the first device may transmit the first determination information to the AI cloud.
  • the first device may receive the second determination information from the AI cloud.
  • the second determination information may be obtained by the AI cloud performing a learning and prediction process based on the machine learning or the deep learning on the first determination information. That is, in the above-described embodiment, after the AI cloud processes the decision information transmitted by the devices, it may transmit result information (the second decision information) to the first device.
  • the first device may share result information received from the AI cloud with the second device.
  • each of the first and second devices may acquire result information (the second decision information) based on the result of the capability negotiation and share it across devices.
  • the first device may transmit a radio signal including channel state information (CSI) to the second device.
  • the second device may collect and measure the wireless signal.
  • the first and second devices may include a wireless PHY and MAC driver, a soft decision interface, and a hard decision interface.
  • the first decision information may be transmitted to the wireless PHY and MAC driver through the soft decision interface.
  • the second decision information may be transmitted to the wireless PHY and MAC driver through the hard decision interface. Accordingly, the first and second devices may transmit the first and second determination information to the PHY/MAC without going through an upper layer.
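The two interfaces into the wireless PHY and MAC driver can be sketched as a stand-in class that simply records what each interface delivers, bypassing any upper layer. The class and payload shapes are illustrative assumptions, not the specification's actual driver API.

```python
class WirelessPhyMacDriver:
    """Stand-in for the wireless PHY and MAC driver; each interface hands
    decision information directly to the PHY/MAC without an upper layer."""

    def __init__(self):
        self.received = []

    def soft_decision_interface(self, info):
        # Carries first decision information (soft decision).
        self.received.append(("soft", info))

    def hard_decision_interface(self, info):
        # Carries second decision information (hard decision).
        self.received.append(("hard", info))


driver = WirelessPhyMacDriver()
driver.soft_decision_interface({"csi": [0.1, 0.2]})
driver.hard_decision_interface({"identified_user": "user-1"})
```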
  • through cooperation, the device can know both the learning result and the predicted result.
  • the first and second devices may identify a user or a gesture based on the wireless sensing result.
  • FIG. 20 shows a modified example of a transmitting apparatus and/or a receiving apparatus of the present specification.
  • Each device/STA of the sub-drawings (a)/(b) of FIG. 1 may be modified as shown in FIG. 20.
  • the transceiver 630 of FIG. 20 may be the same as the transceivers 113 and 123 of FIG. 1.
  • the transceiver 630 of FIG. 20 may include a receiver and a transmitter.
  • the processor 610 of FIG. 20 may be the same as the processors 111 and 121 of FIG. 1. Alternatively, the processor 610 of FIG. 20 may be the same as the processing chips 114 and 124 of FIG. 1.
  • the memory 620 of FIG. 20 may be the same as the memories 112 and 122 of FIG. 1.
  • Alternatively, the memory 620 of FIG. 20 may be a separate external memory different from the memories 112 and 122 of FIG. 1.
  • the power management module 611 manages power for the processor 610 and/or the transceiver 630.
  • the battery 612 supplies power to the power management module 611.
  • the display 613 outputs the result processed by the processor 610.
  • the keypad 614 receives input to be used by the processor 610.
  • the keypad 614 may be displayed on the display 613.
  • the SIM card 615 may be an integrated circuit used to securely store an international mobile subscriber identity (IMSI) and its related key, which are used to identify and authenticate subscribers in mobile telephony devices such as mobile phones and computers.
  • the speaker 640 may output a sound-related result processed by the processor 610.
  • the microphone 641 may receive a sound-related input to be used by the processor 610.
  • the technical features of the present specification described above may be applied to various devices and methods.
  • the above-described technical features of the present specification may be performed/supported through the apparatus of FIGS. 1 and/or 20.
  • the above-described technical features of the present specification may be applied only to a part of FIGS. 1 and/or 20.
  • the technical features of the present specification described above may be implemented based on the processing chips 114 and 124 of FIG. 1, based on the processors 111 and 121 and the memories 112 and 122 of FIG. 1, or based on the processor 610 and the memory 620 of FIG. 20.
  • the device of the present specification is a device for identifying a user or a gesture based on wireless sensing, the device including a memory and a processor operatively coupled to the memory, wherein the processor performs capability negotiation with a second device, receives first decision information from the second device based on the result of the capability negotiation, and transmits second decision information, which is a result of processing the first decision information, to the second device.
  • the first decision information is preliminary information required for identification based on the wireless sensing.
  • the second decision information is a result of identification based on the wireless sensing.
  • the computer readable medium (CRM) proposed by the present specification is at least one computer readable medium including instructions that, based on being executed by at least one processor, perform operations.
  • the operations include: performing capability negotiation (Capabilities Negotiation) with the second device; receiving first decision information from the second device based on a result of the capability negotiation; and transmitting second decision information, which is a result of processing the first decision information, to the second device.
  • the instructions stored in the CRM of the present specification may be executed by at least one processor.
  • at least one processor related to the CRM of the present specification may be the processors 111 and 121 or the processing chips 114 and 124 of FIG. 1, or the processor 610 of FIG. 20.
  • the CRM of the present specification may be the memories 112 and 122 of FIG. 1, the memory 620 of FIG. 20, or a separate external memory/storage medium/disk.
  • Machine learning refers to a field that defines various problems dealt with in the field of artificial intelligence and studies methodologies to solve them.
  • Machine learning is also defined as an algorithm that improves the performance of a certain task through constant experience.
  • An artificial neural network is a model used in machine learning, and may refer to an overall model having problem-solving ability, composed of artificial neurons (nodes) that form a network through synaptic connections.
  • An artificial neural network may be defined by a connection pattern between neurons of different layers, a learning process that updates model parameters, and an activation function that generates an output value.
  • the artificial neural network may include an input layer, an output layer, and optionally one or more hidden layers. Each layer includes one or more neurons, and the artificial neural network may include neurons and synapses connecting neurons. In the artificial neural network, each neuron may output a function value of an activation function for input signals, weights, and biases input through synapses.
  • Model parameters refer to parameters determined through learning, and include the weight of synaptic connections and the bias of neurons.
  • the hyperparameter refers to a parameter that must be set before learning in a machine learning algorithm, and includes a learning rate, the number of iterations, a mini-batch size, an initialization function, and the like.
  • the purpose of learning the artificial neural network can be seen as determining the model parameters that minimize the loss function.
  • the loss function may be used as an index for determining optimal model parameters in the learning process of the artificial neural network.
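The statement that learning amounts to choosing model parameters that minimize a loss function can be made concrete with a deliberately tiny example: one weight, squared-error loss, plain gradient descent. The learning rate and step count are the hyperparameters mentioned above; the data and model here are invented for illustration.

```python
def train_weight(xs, ys, lr=0.1, steps=200):
    """Fit y = w*x by gradient descent on the mean squared error.
    `lr` (learning rate) and `steps` (number of iterations) are
    hyperparameters set before learning; `w` is the model parameter
    (synaptic weight) that learning determines."""
    w = 0.0  # initial model parameter
    for _ in range(steps):
        # Gradient of the loss sum((w*x - y)^2)/n with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # parameter update: step against the gradient
    return w


# Data generated by y = 3x; learning should recover w close to 3.
w = train_weight([1.0, 2.0, 3.0], [3.0, 6.0, 9.0])
```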
  • Machine learning can be classified into supervised learning, unsupervised learning, reinforcement learning, and semi-supervised learning according to a learning method.
  • Supervised learning refers to a method of training an artificial neural network in a state in which a label for the training data is given, and the label is the correct answer (or result value) that the artificial neural network should infer when the training data is input to the artificial neural network.
  • Unsupervised learning may refer to a method of training an artificial neural network in a state where no labels are given for training data.
  • Reinforcement learning can refer to a learning method in which an agent defined in an environment learns to select an action or sequence of actions that maximizes the cumulative reward in each state.
  • machine learning implemented as a deep neural network (DNN) including a plurality of hidden layers is also called deep learning, and deep learning is a part of machine learning.
  • machine learning is used in a sense including deep learning.
  • a robot may mean a machine that automatically handles a given task or operates by its own capabilities.
  • a robot having a function of recognizing an environment and performing an operation by self-judgment may be referred to as an intelligent robot.
  • Robots can be classified into industrial, medical, home, military, etc. depending on the purpose or field of use.
  • the robot may be provided with a driving unit including an actuator or a motor to perform various physical operations such as moving the robot joints.
  • the movable robot includes a wheel, a brake, a propeller, and the like in the driving unit, and may travel on the ground or fly in the air through the driving unit.
  • the extended reality is a generic term for virtual reality (VR), augmented reality (AR), and mixed reality (MR).
  • VR technology provides only CG images of objects or backgrounds in the real world.
  • AR technology provides virtual CG images on top of images of real objects.
  • MR technology is a computer graphics technology that mixes and combines virtual objects with the real world.
  • MR technology is similar to AR technology in that it shows both real and virtual objects together. However, there is a difference in that, in AR technology, a virtual object is used in a form that complements a real object, whereas in MR technology, a virtual object and a real object are used with equal characteristics.
  • the extended reality technology may be applied to a head-mount display (HMD), a head-up display (HUD), a mobile phone, a tablet PC, a laptop, a desktop, a TV, digital signage, and the like.


Abstract

Proposed are a method and an apparatus by which a wireless device cooperates with another device to perform wireless sensing on the basis of wireless sensing in a wireless LAN system. Particularly, a first device performs capabilities negotiation with a second device. The first device receives first determination information from the second device on the basis of the result of the capabilities negotiation. The first device transmits, to the second device, second determination information that is a result of processing the first determination information. The first determination information is prior information required for identification on the basis of wireless sensing. The second determination information is a result of identification on the basis of wireless sensing.

Description

A method and apparatus for a wireless device to perform wireless sensing in cooperation with another device based on wireless sensing
The present specification relates to a method of identifying a user or a gesture based on wireless sensing, and more particularly, to a method and apparatus in which a wireless device performs wireless sensing in cooperation with another device.
As wireless technologies and sensing methods have advanced, many studies have succeeded in using wireless signals (e.g., WiFi) to detect human activity and realize various applications such as intrusion detection, daily activity recognition, vital sign monitoring tied to finer-grained motion detection, and gesture recognition for user identification.
These applications can support a variety of domains for smart home and office environments, including safety protection, well-being monitoring/management, smart healthcare, and smart appliance interaction.
Human body movement affects radio signal propagation (e.g., reflection, diffraction, and scattering), and analyzing the received radio signal provides a good opportunity to capture human movement. Researchers can extract ready-to-use signal measurements or adopt frequency-modulated signals to measure frequency shifts. Due to its low cost and non-intrusive sensing characteristics, wireless-based human activity sensing has attracted considerable attention and has become a prominent research field over the past decade.
The present specification examines existing wireless sensing systems in terms of basic principles, technologies, and system architecture. In particular, it describes how wireless signals can be utilized to facilitate a variety of applications including intrusion detection, room occupancy monitoring, daily activity recognition, gesture recognition, vital sign monitoring, user identification, and indoor localization. Future research directions and limitations of using radio signals for human activity sensing are also discussed.
The present specification proposes a method and apparatus for a wireless device to perform wireless sensing in cooperation with another device based on wireless sensing.
An example of the present specification proposes a method for a wireless device to perform wireless sensing in cooperation with another device.
This embodiment proposes a method of identifying a user (or gesture) by performing learning and prediction through mutual cooperation when there are a plurality of devices based on wireless sensing. Through this embodiment, a system that efficiently collects, learns from, and predicts signal patterns suited to the user's home environment can be implemented even when multiple devices coexist, which has the new effect of enabling a new paradigm of future IoT (Internet of Things) smart home devices. It is assumed that the first and second devices described below are devices based on wireless sensing.
A first device performs capability negotiation (Capabilities Negotiation) with a second device.
The first device receives first decision information from the second device based on the result of the capability negotiation.
The first device transmits second decision information, which is a result of processing the first decision information, to the second device.
In this case, the first decision information is preliminary information required for identification based on wireless sensing (Soft Decision), and the second decision information is a result of identification based on the wireless sensing (Hard Decision).
According to the embodiment proposed in the present specification, by performing learning and prediction through cooperation between wireless sensing-based devices, a system that efficiently collects, learns from, and predicts signal patterns suited to the user's home environment can be implemented even when multiple devices coexist, which has the new effect of enabling a new paradigm of future IoT (Internet of Things) smart home devices.
FIG. 1 shows an example of a transmitting apparatus and/or a receiving apparatus of the present specification.
FIG. 2 is a conceptual diagram illustrating the structure of a wireless LAN (WLAN).
FIG. 3 is a diagram for explaining a general link setup process.
FIG. 4 shows a procedure flowchart of WiFi sensing.
FIG. 5 shows a general procedure flowchart of sensing human activity through wireless signals.
FIG. 6 shows a CSI spectrogram according to human gait.
FIG. 7 shows a deep learning architecture for user authentication.
FIG. 8 shows a problem that occurs when a wireless-sensing-based device performs the procedure of measuring, processing, and predicting a wireless signal on its own.
FIG. 9 shows a block diagram of a wireless sensing device.
FIG. 10 shows a block diagram of the functional units of a wireless sensing device.
FIG. 11 shows a block diagram of a wireless sensing device including interfaces.
FIG. 12 is a diagram illustrating types of cooperative devices.
FIG. 13 shows an example of a procedure in which wireless sensing devices cooperate to perform learning and prediction.
FIG. 14 shows various examples of an information delivery method.
FIG. 15 is a diagram illustrating Example 1, in which the AI cloud makes predictions and shares the results.
FIG. 16 is a diagram illustrating Example 2, in which a representative device makes predictions and shares the results.
FIG. 17 is a diagram illustrating Example 3, in which each device makes predictions and shares the results.
FIG. 18 is a flowchart illustrating a procedure in which a wireless device performs wireless sensing in cooperation with another device according to the present embodiment.
FIG. 19 is a flowchart illustrating a procedure in which a wireless device performs wireless sensing in cooperation with another device according to the present embodiment.
FIG. 20 shows a modified example of a transmitting apparatus and/or a receiving apparatus of the present specification.
In the present specification, "A or B" may mean "only A", "only B", or "both A and B". In other words, "A or B" herein may be interpreted as "A and/or B". For example, "A, B or C" herein may mean "only A", "only B", "only C", or "any combination of A, B and C".
A slash (/) or a comma used herein may mean "and/or". For example, "A/B" may mean "A and/or B". Accordingly, "A/B" may mean "only A", "only B", or "both A and B". For example, "A, B, C" may mean "A, B, or C".
As used herein, "at least one of A and B" may mean "only A", "only B", or "both A and B". In addition, the expression "at least one of A or B" or "at least one of A and/or B" may be interpreted the same as "at least one of A and B".
Also, in the present specification, "at least one of A, B and C" may mean "only A", "only B", "only C", or "any combination of A, B and C". Also, "at least one of A, B or C" or "at least one of A, B and/or C" may mean "at least one of A, B and C".
In addition, parentheses used herein may mean "for example". Specifically, when indicated as "control information (EHT-Signal)", "EHT-Signal" may be proposed as an example of "control information". In other words, "control information" of the present specification is not limited to "EHT-Signal", and "EHT-Signal" may be proposed as an example of "control information". Also, even when indicated as "control information (i.e., EHT-Signal)", "EHT-Signal" may be proposed as an example of "control information".
In the present specification, technical features that are individually described within one drawing may be implemented individually or simultaneously.
The following examples of the present specification may be applied to various wireless communication systems. For example, the following examples may be applied to a wireless local area network (WLAN) system. For example, the present specification may be applied to the IEEE 802.11a/g/n/ac standard or the IEEE 802.11ax standard. The present specification may also be applied to the newly proposed EHT standard or IEEE 802.11be standard. An example of the present specification may also be applied to a new wireless LAN standard that enhances the EHT standard or IEEE 802.11be. An example of the present specification may also be applied to a mobile communication system, for example, a mobile communication system based on Long Term Evolution (LTE) according to the 3rd Generation Partnership Project (3GPP) standard and its evolution. In addition, an example of the present specification may be applied to a communication system of the 5G NR standard based on the 3GPP standard.
Hereinafter, technical features to which the present specification can be applied will be described in order to explain the technical features of the present specification.
FIG. 1 shows an example of a transmitting apparatus and/or receiving apparatus of the present specification.
The example of FIG. 1 may perform various technical features described below. FIG. 1 relates to at least one station (STA). For example, the STAs 110 and 120 of the present specification may also be called by various names such as mobile terminal, wireless device, wireless transmit/receive unit (WTRU), user equipment (UE), mobile station (MS), mobile subscriber unit, or simply user. The STAs 110 and 120 may also be referred to by various names such as network, base station, Node-B, access point (AP), repeater, router, or relay. The STAs 110 and 120 may further be referred to as a receiving apparatus, transmitting apparatus, receiving STA, transmitting STA, receiving device, or transmitting device.
For example, the STAs 110 and 120 may serve as an access point (AP) or as a non-AP. That is, the STAs 110 and 120 of the present specification may perform the functions of an AP and/or a non-AP. In this specification, the AP may also be indicated as an AP STA.
The STAs 110 and 120 of the present specification may additionally support various communication standards other than the IEEE 802.11 standard, for example, communication standards according to the 3GPP standard (e.g., LTE, LTE-A, 5G NR). An STA of the present specification may be implemented in various devices such as a mobile phone, a vehicle, or a personal computer. In addition, an STA may support communication for various services such as voice calls, video calls, data communication, and self-driving (autonomous driving).
In this specification, the STAs 110 and 120 may include a medium access control (MAC) layer conforming to the IEEE 802.11 standard and a physical layer interface for the wireless medium.
The STAs 110 and 120 are described below based on sub-figure (a) of FIG. 1.
The first STA 110 may include a processor 111, a memory 112, and a transceiver 113. The illustrated processor, memory, and transceiver may each be implemented as separate chips, or at least two of these blocks/functions may be implemented through a single chip.
The transceiver 113 of the first STA performs signal transmission/reception operations. Specifically, it may transmit and receive IEEE 802.11 packets (e.g., IEEE 802.11a/b/g/n/ac/ax/be).
For example, the first STA 110 may perform the intended operation of an AP. For example, the processor 111 of the AP may receive a signal through the transceiver 113, process the received signal, generate a transmission signal, and perform control for signal transmission. The memory 112 of the AP may store a signal received through the transceiver 113 (i.e., a received signal) and a signal to be transmitted through the transceiver (i.e., a transmission signal).
For example, the second STA 120 may perform the intended operation of a non-AP STA. For example, the transceiver 123 of the non-AP performs signal transmission/reception operations. Specifically, it may transmit and receive IEEE 802.11 packets (e.g., IEEE 802.11a/b/g/n/ac/ax/be).
For example, the processor 121 of the non-AP STA may receive a signal through the transceiver 123, process the received signal, generate a transmission signal, and perform control for signal transmission. The memory 122 of the non-AP STA may store a signal received through the transceiver 123 (i.e., a received signal) and a signal to be transmitted through the transceiver (i.e., a transmission signal).
For example, an operation of a device indicated as an AP in the following specification may be performed by the first STA 110 or the second STA 120. For example, when the first STA 110 is the AP, the operation of the device indicated as the AP is controlled by the processor 111 of the first STA 110, and a related signal may be transmitted or received through the transceiver 113 controlled by the processor 111. In this case, control information related to the operation of the AP or transmission/reception signals of the AP may be stored in the memory 112 of the first STA 110. When the second STA 120 is the AP, the operation of the device indicated as the AP is controlled by the processor 121 of the second STA 120, and a related signal may be transmitted or received through the transceiver 123 controlled by the processor 121. In this case, control information related to the operation of the AP or transmission/reception signals of the AP may be stored in the memory 122 of the second STA 120.
For example, an operation of a device indicated as a non-AP (or user STA) in the following specification may be performed by the first STA 110 or the second STA 120. For example, when the second STA 120 is the non-AP, the operation of the device indicated as the non-AP is controlled by the processor 121 of the second STA 120, and a related signal may be transmitted or received through the transceiver 123 controlled by the processor 121. In this case, control information related to the operation of the non-AP or transmission/reception signals of the non-AP may be stored in the memory 122 of the second STA 120. When the first STA 110 is the non-AP, the operation of the device indicated as the non-AP is controlled by the processor 111 of the first STA 110, and a related signal may be transmitted or received through the transceiver 113 controlled by the processor 111. In this case, control information related to the operation of the non-AP or transmission/reception signals of the non-AP may be stored in the memory 112 of the first STA 110.
In the following specification, a device referred to as a (transmitting/receiving) STA, first STA, second STA, STA1, STA2, AP, first AP, second AP, AP1, AP2, (transmitting/receiving) terminal, (transmitting/receiving) device, (transmitting/receiving) apparatus, or network may refer to the STAs 110 and 120 of FIG. 1. This applies even when such a device is indicated without a specific reference numeral.
For example, in the following examples, operations in which various STAs transmit and receive signals (e.g., PPDUs) may be performed by the transceivers 113 and 123 of FIG. 1. Operations in which various STAs generate transmission/reception signals or perform data processing or computation in advance for those signals may be performed by the processors 111 and 121 of FIG. 1. Examples of such operations include: 1) determining/obtaining/configuring/computing/decoding/encoding the bit information of the subfields (SIG, STF, LTF, Data) included in a PPDU; 2) determining/configuring/obtaining the time resources or frequency resources (e.g., subcarrier resources) used for the subfields (SIG, STF, LTF, Data) included in a PPDU; 3) determining/configuring/obtaining the specific sequences (e.g., pilot sequences, STF/LTF sequences, extra sequences applied to the SIG) used for the subfields (SIG, STF, LTF, Data) included in a PPDU; 4) power control and/or power saving operations applied to an STA; and 5) operations related to determining/obtaining/configuring/computing/decoding/encoding an ACK signal. In addition, various information (e.g., information related to fields/subfields/control fields/parameters/power) used by various STAs for determining/obtaining/configuring/computing/decoding/encoding transmission/reception signals may be stored in the memories 112 and 122 of FIG. 1.
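The PPDU subfields named above (SIG, STF, LTF, Data) can be sketched as a simple container. This is only an illustrative grouping for the discussion; actual PPDU formats are defined bit-by-bit in the IEEE 802.11 standard, and the field types here are simplified placeholders.

```python
from dataclasses import dataclass

@dataclass
class PPDU:
    """Illustrative container for the PPDU subfields named in the text.

    Real PPDU formats are defined bit-by-bit in the IEEE 802.11 standard;
    this grouping only names the subfields discussed above.
    """
    stf: bytes = b""   # short training field (sequence-based)
    ltf: bytes = b""   # long training field (sequence-based)
    sig: bytes = b""   # signal field (bit information such as rate/length)
    data: bytes = b""  # data field (payload)
```

The processor-side operations in the text (determining bit information, sequences, and resources for these subfields) would then amount to filling in and reading out such a structure before it is handed to the transceiver.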
The apparatus/STA of sub-figure (a) of FIG. 1 described above may be modified as shown in sub-figure (b) of FIG. 1. Hereinafter, the STAs 110 and 120 of the present specification are described based on sub-figure (b) of FIG. 1.
For example, the transceivers 113 and 123 illustrated in sub-figure (b) of FIG. 1 may perform the same functions as the transceivers illustrated in sub-figure (a) of FIG. 1. The processing chips 114 and 124 illustrated in sub-figure (b) of FIG. 1 may include the processors 111 and 121 and the memories 112 and 122, which may perform the same functions as the processors 111 and 121 and the memories 112 and 122 illustrated in sub-figure (a) of FIG. 1.
The mobile terminal, wireless device, wireless transmit/receive unit (WTRU), user equipment (UE), mobile station (MS), mobile subscriber unit, user, user STA, network, base station, Node-B, access point (AP), repeater, router, relay, receiving apparatus, transmitting apparatus, receiving STA, transmitting STA, receiving device, and/or transmitting device described below may refer to the STAs 110 and 120 shown in sub-figures (a)/(b) of FIG. 1, or to the processing chips 114 and 124 shown in sub-figure (b) of FIG. 1. That is, the technical features of the present specification may be performed by the STAs 110 and 120 shown in sub-figures (a)/(b) of FIG. 1, or only by the processing chips 114 and 124 shown in sub-figure (b) of FIG. 1. For example, the technical feature of a transmitting STA transmitting a control signal may be understood as the control signal generated by the processors 111 and 121 shown in sub-figures (a)/(b) of FIG. 1 being transmitted through the transceivers 113 and 123 shown in sub-figures (a)/(b) of FIG. 1. Alternatively, the technical feature of a transmitting STA transmitting a control signal may be understood as the control signal to be transferred to the transceivers 113 and 123 being generated by the processing chips 114 and 124 shown in sub-figure (b) of FIG. 1.
For example, the technical feature of a receiving STA receiving a control signal may be understood as the control signal being received by the transceivers 113 and 123 shown in sub-figure (a) of FIG. 1. Alternatively, it may be understood as the control signal received by the transceivers 113 and 123 shown in sub-figure (a) of FIG. 1 being obtained by the processors 111 and 121 shown in sub-figure (a) of FIG. 1, or as the control signal received by the transceivers 113 and 123 shown in sub-figure (b) of FIG. 1 being obtained by the processing chips 114 and 124 shown in sub-figure (b) of FIG. 1.
Referring to sub-figure (b) of FIG. 1, software codes 115 and 125 may be included in the memories 112 and 122. The software codes 115 and 125 may include instructions that control the operations of the processors 111 and 121, and may be written in various programming languages.
The processors 111 and 121 or the processing chips 114 and 124 shown in FIG. 1 may include an application-specific integrated circuit (ASIC), another chipset, a logic circuit, and/or a data processing device. The processor may be an application processor (AP). For example, the processors 111 and 121 or the processing chips 114 and 124 may include at least one of a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), and a modem (modulator and demodulator). For example, the processors 111 and 121 or the processing chips 114 and 124 may be a SNAPDRAGON™ series processor manufactured by Qualcomm®, an EXYNOS™ series processor manufactured by Samsung®, an A series processor manufactured by Apple®, a HELIO™ series processor manufactured by MediaTek®, an ATOM™ series processor manufactured by INTEL®, or an enhanced version thereof.
In this specification, uplink may mean a link for communication from a non-AP STA to an AP STA, and uplink PPDUs/packets/signals may be transmitted through the uplink. In addition, downlink may mean a link for communication from an AP STA to a non-AP STA, and downlink PPDUs/packets/signals may be transmitted through the downlink.
FIG. 2 is a conceptual diagram illustrating the structure of a wireless LAN (WLAN).
The upper part of FIG. 2 shows the structure of an infrastructure basic service set (BSS) of the Institute of Electrical and Electronics Engineers (IEEE) 802.11.
Referring to the upper part of FIG. 2, the WLAN system may include one or more infrastructure BSSs 200 and 205 (hereinafter, BSS). The BSSs 200 and 205 are sets of APs and STAs, such as the access point (AP) 225 and the station (STA1) 200-1, that can communicate with each other through successful synchronization; a BSS is not a concept indicating a specific area. The BSS 205 may include one or more STAs 205-1 and 205-2 that can be associated with one AP 230.
The BSS may include at least one STA, the APs 225 and 230 providing a distribution service, and a distribution system (DS) 210 connecting multiple APs.
The distribution system 210 may connect several BSSs 200 and 205 to implement an extended service set (ESS) 240. The ESS 240 may be used as a term indicating one network formed by connecting one or several APs through the distribution system 210. The APs included in one ESS 240 may have the same service set identification (SSID).
The portal 220 may serve as a bridge connecting the WLAN network (IEEE 802.11) to another network (e.g., 802.X).
In a BSS as shown in the upper part of FIG. 2, a network between the APs 225 and 230, and a network between the APs 225 and 230 and the STAs 200-1, 205-1, and 205-2, may be implemented. However, it may also be possible for STAs to establish a network and communicate without the APs 225 and 230. A network in which STAs establish a network and communicate without the APs 225 and 230 is defined as an ad-hoc network or an independent basic service set (IBSS).
The lower part of FIG. 2 is a conceptual diagram illustrating an IBSS.
Referring to the lower part of FIG. 2, the IBSS is a BSS operating in ad-hoc mode. Since the IBSS does not include an AP, there is no entity performing a centralized management function. That is, in the IBSS, the STAs 250-1, 250-2, 250-3, 255-4, and 255-5 are managed in a distributed manner. In the IBSS, all of the STAs 250-1, 250-2, 250-3, 255-4, and 255-5 may be mobile STAs, and since access to the distribution system is not allowed, they form a self-contained network.
FIG. 3 is a diagram for explaining a general link setup process.
In the illustrated step S310, the STA may perform a network discovery operation. The network discovery operation may include a scanning operation of the STA. That is, in order to access a network, the STA must find a network in which it can participate. The STA must identify a compatible network before joining a wireless network; the process of identifying a network present in a specific area is called scanning. Scanning methods include active scanning and passive scanning.
FIG. 3 exemplarily illustrates a network discovery operation including an active scanning process. In active scanning, the STA performing scanning transmits a probe request frame while moving between channels, in order to discover which APs exist nearby, and waits for a response. A responder transmits a probe response frame to the STA that transmitted the probe request frame, in response to that frame. Here, the responder may be the STA that last transmitted a beacon frame in the BSS of the channel being scanned. In a BSS, the AP transmits the beacon frames, so the AP is the responder; in an IBSS, the STAs within the IBSS take turns transmitting beacon frames, so the responder is not fixed. For example, an STA that has transmitted a probe request frame on channel 1 and received a probe response frame on channel 1 stores the BSS-related information included in the received probe response frame, moves to the next channel (e.g., channel 2), and performs scanning in the same manner (i.e., probe request/response transmission/reception on channel 2).
Although not shown in the example of FIG. 3, the scanning operation may also be performed in a passive manner. In passive scanning, the STA performing scanning waits for beacon frames while moving between channels. The beacon frame, one of the management frames in IEEE 802.11, is transmitted periodically to announce the existence of a wireless network and to allow a scanning STA to find and join that network. In a BSS, the AP periodically transmits beacon frames; in an IBSS, the STAs within the IBSS take turns transmitting them. When the STA performing scanning receives a beacon frame, it stores the information on the BSS included in the beacon frame and records beacon frame information on each channel while moving to other channels. Upon receiving a beacon frame, the STA may store the BSS-related information included in that beacon frame, move to the next channel, and perform scanning on the next channel in the same manner.
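The active scanning procedure described above (probe each channel, collect responses, move on) can be sketched as follows. This is an illustrative sketch only: the `radio` object and its methods (`set_channel`, `send_probe_request`, `wait_for_responses`) are hypothetical names, since real scanning is carried out by the MAC/PHY hardware rather than application code.

```python
def active_scan(radio, channels, dwell_time=0.05):
    """Illustrative active-scan loop: probe each channel, collect BSS info.

    `radio` is a hypothetical driver object; `dwell_time` is how long to
    wait for probe responses on each channel before moving on.
    """
    discovered = {}  # BSSID -> BSS-related information from probe responses
    for channel in channels:
        radio.set_channel(channel)
        radio.send_probe_request()  # broadcast a probe request frame
        for response in radio.wait_for_responses(dwell_time):
            # the responder is the STA that last sent a beacon on this
            # channel's BSS (the AP, in an infrastructure BSS)
            discovered[response.bssid] = response.bss_info
    return discovered
```

Passive scanning would replace the probe request/response exchange with simply listening for beacon frames during the dwell time on each channel.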
The STA that has discovered a network may perform an authentication process in step S320. This authentication process may be referred to as the first authentication process to clearly distinguish it from the security setup operation of step S340 described later. The authentication process of S320 may include the STA transmitting an authentication request frame to the AP and, in response, the AP transmitting an authentication response frame to the STA. The authentication frames used for the authentication request/response are management frames.
The authentication frame may include information about an authentication algorithm number, an authentication transaction sequence number, a status code, a challenge text, a robust security network (RSN), a finite cyclic group, and the like.
The STA may transmit an authentication request frame to the AP. Based on the information included in the received authentication request frame, the AP may determine whether to allow authentication for the STA. The AP may provide the result of the authentication processing to the STA through an authentication response frame.
The successfully authenticated STA may perform an association process based on step S330. The association process includes the STA transmitting an association request frame to the AP and, in response, the AP transmitting an association response frame to the STA. For example, the association request frame may include information related to various capabilities, a beacon listen interval, a service set identifier (SSID), supported rates, supported channels, an RSN, a mobility domain, supported operating classes, a Traffic Indication Map (TIM) broadcast request, interworking service capability, and the like. For example, the association response frame may include information related to various capabilities, a status code, an association ID (AID), supported rates, an Enhanced Distributed Channel Access (EDCA) parameter set, a Received Channel Power Indicator (RCPI), a Received Signal-to-Noise Indicator (RSNI), a mobility domain, a timeout interval (association comeback time), overlapping BSS scan parameters, a TIM broadcast response, a QoS map, and the like.
Thereafter, in step S340, the STA may perform a security setup process. The security setup process of step S340 may include, for example, setting up a private key through 4-way handshaking using Extensible Authentication Protocol over LAN (EAPOL) frames.
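The link setup of FIG. 3 is a fixed progression of request/response exchanges: scanning (S310), then authentication (S320), then association (S330), then security setup (S340). The sketch below encodes only this ordering constraint; the step names are simplified labels, and the real IEEE 802.11 state machine additionally handles retries, timeouts, and status codes.

```python
# Simplified link-setup progression (illustrative only; real state handling
# lives in the MAC layer and is far more detailed).
LINK_SETUP_STEPS = [
    "scanning",        # S310: network discovery (active/passive scan)
    "authentication",  # S320: authentication request/response frames
    "association",     # S330: association request/response frames
    "security_setup",  # S340: e.g., 4-way handshake via EAPOL frames
]

def next_step(current):
    """Return the step that must follow `current`, or None once setup is done."""
    i = LINK_SETUP_STEPS.index(current)
    return LINK_SETUP_STEPS[i + 1] if i + 1 < len(LINK_SETUP_STEPS) else None
```

A failure at any step (e.g., a rejecting status code in the authentication or association response frame) would abort the progression rather than advance it.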
As the demand for wireless data traffic grows, WiFi networks are growing very rapidly, since they offer high throughput and are easy to deploy. Recently, channel state information (CSI) measured by WiFi networks has been widely used for various sensing purposes. To better understand existing WiFi sensing technologies and future WiFi sensing trends, this specification comprehensively reviews the signal processing techniques, algorithms, applications, and performance results of WiFi sensing using CSI. Different WiFi sensing algorithms and signal processing techniques have their own advantages and limitations and are suited to different WiFi sensing applications. This specification classifies CSI-based WiFi sensing applications into three categories, sensing, recognition, and estimation, according to whether the output is a binary classification, a multi-class classification, or a numerical value. With the development and deployment of new WiFi technologies, there will be more WiFi sensing opportunities, with targets extending from humans to environments, animals, and objects.
This specification highlights three challenges in WiFi sensing: robustness and generalization; privacy and security; and the coexistence of WiFi sensing and networking. In addition, this specification proposes three future WiFi sensing trends, integration of cross-layer network information, multi-device cooperation, and fusion with other sensors, to enhance existing WiFi sensing capabilities and enable new WiFi sensing opportunities.
With the growing popularity of wireless devices, WiFi is growing very rapidly. One of the key technologies behind WiFi's success is Multiple-Input Multiple-Output (MIMO), which provides high throughput to meet the growing demand for wireless data traffic. Together with Orthogonal Frequency-Division Multiplexing (OFDM), MIMO provides Channel State Information (CSI) for each transmit/receive antenna pair at each carrier frequency. Recently, CSI measurements from WiFi systems have been used for various sensing purposes. Because WiFi sensing reuses the infrastructure deployed for wireless communication, it is easy to deploy and low in cost. Also, unlike sensor-based and video-based solutions, WiFi sensing is not affected by lighting conditions.
CSI characterizes how a wireless signal propagates from the transmitter to the receiver along multiple paths at a given carrier frequency. For WiFi systems with MIMO-OFDM, CSI is a 3D matrix of complex values representing the amplitude attenuation and phase shift of the multipath WiFi channel.
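As a minimal sketch of this representation, a CSI snapshot can be modeled as a 3D matrix of complex numbers indexed by transmit antenna, receive antenna, and subcarrier. The dimensions below are illustrative assumptions, not values from the specification; actual sizes depend on the WiFi hardware.

```python
import cmath

# Hypothetical dimensions: 3 TX antennas, 3 RX antennas, 30 OFDM subcarriers.
N_TX, N_RX, N_SUB = 3, 3, 30

# A CSI snapshot is a 3D matrix of complex values H[tx][rx][subcarrier].
csi = [[[complex(1.0, 0.5) for _ in range(N_SUB)]
        for _ in range(N_RX)]
       for _ in range(N_TX)]

# Each complex entry encodes amplitude attenuation and phase shift.
h = csi[0][0][0]
amplitude = abs(h)        # amplitude attenuation of this path
phase = cmath.phase(h)    # phase shift of this path
```

The magnitude and angle of each complex entry are exactly the amplitude attenuation and phase shift discussed in the following paragraphs.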
A time series of CSI measurements captures how wireless signals travel through surrounding objects and people in the time, frequency, and spatial domains, and can therefore be used for different wireless sensing applications. For example, CSI amplitude variations in the time domain have different patterns across humans, activities, and gestures, which can be used for human presence detection, fall detection, motion detection, activity recognition, gesture recognition, and human identification/authentication.
CSI phase shifts in the spatial and frequency domains, i.e., across transmit/receive antennas and carrier frequencies, are related to the transmission delay and direction of the signal, which can be used for human localization and tracking. CSI phase shifts in the time domain can have different dominant frequency components, which can be used to estimate the breathing rate. Different WiFi sensing applications impose specific requirements on signal processing techniques and classification/estimation algorithms.
To deepen the understanding of existing WiFi sensing technology and gain insight into future WiFi sensing directions, this specification presents the signal processing techniques, algorithms, applications, performance results, challenges, and future trends of CSI-based WiFi sensing.
FIG. 4 shows a flowchart of a WiFi sensing procedure.
WiFi signals (e.g., CSI measurements), together with their mathematical models, measurement procedures, practical WiFi models, basic processing principles, and experimental platforms, enter at the Input stage 410. Raw CSI measurements are fed into a signal processing module for noise reduction, signal transformation, and/or signal extraction, as indicated by the Signal Processing stage 420.
The pre-processed CSI traces are fed into modeling-based, learning-based, or hybrid algorithms, as in the Algorithm stage 430, to obtain the output for various WiFi sensing purposes. Depending on the output type, WiFi sensing can be classified into three categories. In the Application stage 440, detection/recognition applications attempt to solve binary/multi-class classification problems, while estimation applications attempt to obtain quantitative values for different tasks.
FIG. 5 shows a flowchart of a general procedure for sensing human activity via wireless signals.
Specifically, the sensing system first extracts the signal changes related to human activity based on different sensing methods (e.g., Received Signal Strength Indicator (RSSI), Channel State Information (CSI), Frequency Modulated Carrier Wave (FMCW), and Doppler shift). Next, a series of signal pre-processing procedures (e.g., filtering, denoising, and calibration) are employed to mitigate the effects of interference, ambient noise, and system offsets. Finally, distinctive features are extracted and fed into machine learning models to perform human activity detection and recognition.
That is, the human activity sensing procedure of FIG. 5 is as follows.
1) Measurements: measure RSSI, CSI, Doppler shift, etc. as input values
2) Derived Metrics with Human Movements: signal strength variations, channel condition variations, frequency shift associated with human body depth, frequency shift associated with human moving speed
3) Signal Pre-processing: noise reduction, signal time-frequency transform, signal extraction
4) Feature Extraction: extract user ID features using the gait cycle, torso speed, and human activity
5) Prediction via Machine/Deep Learning: algorithms
6) Application: apply the user identification prediction model to detection, recognition, and estimation (intrusion detection, room occupancy monitoring, daily activity recognition, gesture recognition, vital signs monitoring, user identification, indoor localization & tracking)
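The six steps above can be sketched as a minimal end-to-end pipeline. Every stage body below is a placeholder assumption, used only to show how the stages chain together, not the actual processing described in this specification.

```python
def measure():                      # 1) Measurements (e.g., CSI amplitudes)
    return [1.0, 1.2, 0.9, 1.1, 1.3, 0.8]

def preprocess(samples):            # 3) Signal pre-processing: toy "denoise"
    return [round(s, 1) for s in samples]

def extract_features(samples):      # 4) Feature extraction: mean and range
    return {"mean": sum(samples) / len(samples),
            "range": max(samples) - min(samples)}

def predict(features):              # 5) Prediction: hypothetical threshold rule
    return "walking" if features["range"] > 0.3 else "stationary"

# 6) Application: classify the activity from the measured signal.
label = predict(extract_features(preprocess(measure())))
```

In a real system each stage would be one of the blocks discussed later (signal pre-processing, feature extraction, machine/deep learning).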
1. Wireless Sensing, Wi-Fi, Machine Learning
&lt;Background of the Invention&gt;
The future IoT smart home market is shifting from being device-connection-centric to being service-centric, which increases the need for personalized and automated services based on artificial intelligence devices. Wireless-sensing-based technology, one of the element technologies for the IoT services of artificial intelligence devices, is being actively developed. In particular, research on user identification is actively underway, which exploits the fact that changes in wireless signals such as Wi-Fi have unique characteristics depending on a person's gait and behavior, and learns the patterns of these signals.
&lt;Background Art and Problems&gt;
To embed wireless-sensing-based user identification technology in commercial products, pre-training is difficult. (In machine learning, pre-training means training a model in advance for predicting collected data and then distributing it; for example, training a model that distinguishes dogs and cats in advance, distributing it, and predicting new images not used in training.) Because the pattern of a wireless signal changes with the environment, even for the same user, depending on how the user's movement affects the signal, a general model cannot be created and distributed in advance. Embedding in commercial products therefore requires generating a model through training adapted to each environment. However, the pre-training with supervised learning used in existing research requires user participation for collecting training data and labeling (matching data to the ground truth), so it is impractical from a commercialization standpoint.
Therefore, the present specification proposes a post-training automation method for wireless-sensing-based user identification.
When learning the wireless sensing signal pattern suited to each environment, personal identification information from the user's Personal Electronic Device (PED) is used to collect the ground truth (e.g., labels) for training, enabling post-training. The learning scheme for post-training can be applied in several ways, such as unsupervised learning, supervised learning, semi-supervised learning, and unsupervised/supervised fusion learning.
Through this embodiment, it becomes possible to implement a system that learns and predicts the signal patterns suited to the user's home environment, creating a new paradigm of future IoT smart home devices, such as artificial intelligence devices that identify people.
&lt;Example of Wi-Fi CSI-based User Identification Research&gt;
An example of research that refines wireless signals using Wi-Fi CSI, extracts features, and trains/predicts using machine learning is as follows.
1) Signal Pre-processing
-&gt; CSI measurement collection: collect the CSI measurements of 30 to 52 subcarriers (based on a 20 MHz bandwidth) for each of the TX/RX antennas.
-&gt; Denoising: remove noise from the signal using algorithms such as Principal Component Analysis (PCA), phase unwrapping, and a band-pass Butterworth filter.
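A real implementation of this step would use PCA or a band-pass Butterworth filter (e.g., via SciPy). The pure-Python moving average below is only a hedged stand-in showing the effect of smoothing spikes in a noisy CSI amplitude series:

```python
def moving_average_denoise(signal, window=3):
    """Simple moving-average smoother -- a stand-in for the PCA /
    band-pass Butterworth denoising named above, not a replacement."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))  # average over the window
    return out

noisy = [1.0, 5.0, 1.0, 1.0, 5.0, 1.0]   # spiky toy amplitude series
smoothed = moving_average_denoise(noisy)  # spikes are attenuated
```

The choice of window length trades noise suppression against smearing of genuine motion-induced variation.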
-&gt; Transform to the time-frequency domain: generate a spectrogram using the Short-Time Fourier Transform (STFT) (see FIG. 6). The denoised waveform mixes reflections from different parts of the human body, which can be separated by frequency.
FIG. 6 shows a CSI spectrogram according to human gait.
Referring to FIG. 6, torso reflections and leg reflections are shown in the CSI spectrogram in the time/frequency domain. Here, the CSI spectrogram has a certain cycle time.
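A spectrogram like the one in FIG. 6 is produced by the STFT. The naive pure-Python version below (window and hop sizes are arbitrary assumptions; production code would use a library routine such as scipy.signal.stft) shows a pure tone concentrating its energy in a single frequency bin of every frame:

```python
import cmath
import math

def stft_magnitudes(signal, win=8, hop=4):
    """Magnitude spectrogram via a naive per-window DFT: frames[time][freq]."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):          # non-negative frequency bins
            acc = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                      for n in range(win))
            mags.append(abs(acc))
        frames.append(mags)
    return frames

# A pure tone at bin 2 of an 8-sample window: energy lands in that bin.
tone = [math.sin(2 * math.pi * 2 * n / 8) for n in range(32)]
spec = stft_magnitudes(tone)
```

In the gait case, the torso and leg reflections would appear as distinct frequency bands that rise and fall with the gait cycle.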
2) Feature Extraction
-&gt; The process of extracting features for user identification training and prediction
-&gt; Uses the gait cycle time, torso (movement) speed, human activity, etc.
-&gt; Based on the theory that the gait cycle is unique to each person, it is used as a feature for user identification
-&gt; Example of a torso speed estimation method: the percentile method used in Doppler radar
-&gt; Example of a human activity estimation method: the time-domain features (max, min, mean, skewness, kurtosis, std), which are low-level CSI features, are used to estimate a person's movement and contour; the frequency-domain features (spectrogram energy, percentile frequency component, spectrogram energy difference) are used to estimate the movement speed of the torso and legs; and these features are then used to represent walking or stationary activities.
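The time-domain features listed above can be computed directly from a CSI amplitude series. This sketch uses population (biased) moments, which is one of several possible conventions:

```python
import math

def time_domain_features(x):
    """Compute the low-level time-domain features listed above:
    max, min, mean, skewness, kurtosis, and std."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3) if std else 0.0
    kurt = sum((v - mean) ** 4 for v in x) / (n * std ** 4) if std else 0.0
    return {"max": max(x), "min": min(x), "mean": mean,
            "skewness": skew, "kurtosis": kurt, "std": std}

feats = time_domain_features([1.0, 2.0, 3.0, 4.0, 5.0])
```

A symmetric series like the one above has zero skewness; a series dominated by occasional large reflections would show high kurtosis.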
3) Machine/Deep-Learning-based Training and Prediction
-&gt; Training and prediction through various machine/deep-learning-based algorithms
-&gt; Representative algorithms
i) Supervised learning: uses machine learning and deep learning algorithms such as a decision-tree-based machine learning classifier, a Support Vector Machine (SVM), and a Softmax classifier
i)-1 The prediction model is generated only by supervised learning, and unsupervised learning algorithms are used to construct the layers of the supervised learning model (in some studies)
-&gt; Training method
i) Collect data for each person under specific environmental conditions and select training/evaluation data at a specific ratio (e.g., training data : evaluation data = 8:2) -&gt; holdout validation
ii) For the training data, the ground truth (e.g., label) for each person is manually mapped and used as the input of the machine/deep learning model for training
iii) In some studies, to increase the degrees of freedom of the data collection environment, unsupervised learning is used to perform automatic feature extraction, clustering, etc., and then user identification is performed using a supervised learning model (e.g., a Softmax classifier)
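The holdout split and supervised classification steps above can be sketched as follows. The nearest-centroid classifier is a deliberately tiny stand-in for the SVM / decision-tree classifiers named above, and the gait feature values are hypothetical:

```python
import random

def holdout_split(data, labels, train_ratio=0.8, seed=0):
    """Holdout validation as in step i): shuffle, then split (e.g., 8:2)."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * train_ratio)
    tr, te = idx[:cut], idx[cut:]
    return ([data[i] for i in tr], [labels[i] for i in tr],
            [data[i] for i in te], [labels[i] for i in te])

class NearestCentroid:
    """Toy classifier: the labeled feature vectors of each person
    define one centroid; prediction picks the closest centroid."""
    def fit(self, X, y):
        sums, counts = {}, {}
        for xi, yi in zip(X, y):
            acc = sums.setdefault(yi, [0.0] * len(xi))
            sums[yi] = [a + b for a, b in zip(acc, xi)]
            counts[yi] = counts.get(yi, 0) + 1
        self.centroids = {k: [v / counts[k] for v in s]
                          for k, s in sums.items()}
        return self
    def predict(self, x):
        d = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centroids, key=lambda k: d(self.centroids[k]))

# Hypothetical gait features (gait cycle time, torso speed) for two users.
X = [[1.1, 0.5], [1.2, 0.6], [1.0, 0.5], [2.0, 1.4], [2.1, 1.5], [1.9, 1.3]]
y = ["alice", "alice", "alice", "bob", "bob", "bob"]
Xtr, ytr, Xte, yte = holdout_split(X, y)   # 8:2-style split
clf = NearestCentroid().fit(X, y)          # trained on labeled data
pred = clf.predict([2.05, 1.45])
```

The manual label mapping of step ii) corresponds to supplying `y` alongside `X`; step iii) would replace the hand-crafted features with automatically extracted ones.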
Unsupervised learning is a learning method in which only the problem is studied, without being given the answer (label). In unsupervised learning, the answer is found based on the relationships between variables, for example by clustering (a representative example of unsupervised learning), as in YouTube video recommendation or animal classification.
In contrast, supervised learning is a learning method in which the answer is taught. Supervised learning is divided into regression and classification. Regression is a learning method that predicts a result within a continuous data range (e.g., guessing an age between 0 and 100). Classification is a learning method that predicts a result within a discretely separated data range (e.g., whether a tumor is malignant or benign).
In addition, semi-supervised learning is a method of learning from labeled and unlabeled data at the same time; it is a learning method that studies large amounts of unlabeled data without discarding them.
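As a concrete illustration of the clustering mentioned above as the representative unsupervised method, here is a deliberately minimal one-dimensional, two-cluster k-means; real systems would cluster multi-dimensional CSI features:

```python
def kmeans_1d(points, iters=10):
    """Minimal 2-cluster k-means: groups samples without any labels."""
    c1, c2 = min(points), max(points)   # simple initialisation
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

# Two natural groups emerge with no ground-truth labels supplied.
centers = kmeans_1d([1.0, 1.2, 0.9, 5.0, 5.2, 4.8])
```

The cluster assignments could then feed a supervised stage, which is the semi-supervised pattern described above.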
FIG. 7 shows a deep learning architecture for user authentication.
The deep learning architecture of FIG. 7 is an example in which automatic feature extraction is performed using an autoencoder for each hidden layer, and softmax classification is used for classification.
Referring to FIG. 7, the supervised learning model forms each hidden layer, and the unsupervised learning model is used only for constructing the corresponding layers. The activity separation, activity recognition, and user authentication of FIG. 7 are all features obtained by automatic feature extraction.
1. Wireless Sensing, Wi-Fi, Machine Learning
&lt;Background of the Invention&gt;
The future IoT smart home market is shifting from being device-connection-centric to being service-centric, which increases the need for personalized and automated services based on artificial intelligence devices. Wireless-sensing-based technology, one of the element technologies for the IoT services of artificial intelligence devices, is being actively developed. In particular, research on human recognition, user identification, and gesture identification is actively underway, which exploits the fact that changes in wireless signals such as Wi-Fi have unique characteristics depending on a person's gait and behavior, and learns the patterns of these signals. In this case, a wireless-sensing-based device performs the procedure of measuring, processing, and predicting wireless signals such as Wi-Fi Channel State Information (CSI) on its own.
&lt;Prior Art and Problems&gt;
When multiple wireless-sensing-based devices are present together, the need for a protocol that can improve performance through cooperative training and prediction is increasing. In addition, the need for cooperation for cooperative sensing on behalf of devices with constraints on wireless sensing, such as limited resources, is also increasing.
It is also necessary to define procedures for mutually sharing and utilizing the various kinds of information collected by wireless sensing devices. The supported capabilities may differ depending on the device type (for example, a water purifier may only be able to collect wireless signals, while a refrigerator may be able to collect wireless signals and train/predict, subject to constraints such as resources). Consequently, devices that do not support the higher capabilities (including legacy devices) need to train and predict with the help of peripheral devices that do support them.
Therefore, the present specification proposes a wireless-sensing-based cooperation architecture protocol and signaling method. Briefly, the cooperation architecture protocol and signaling are as follows. 1) When there are multiple wireless sensing devices, they can exchange capability information with each other through mutual negotiation. 2) To exchange Hard Decision and Soft Decision information, a representative device can be selected through the negotiation process. 3) When there are multiple wireless sensing devices, Hard Decision and Soft Decision information can be exchanged according to the capabilities of each device (the device state, burden, etc. may also be considered). Here, the Hard Decision information is the result of a wireless-sensing-based identification decision, and the Soft Decision information is the preliminary information required for wireless-sensing-based identification (i.e., preliminary data for training and prediction, such as raw signal data, data that has undergone signal pre-processing, and input data for training). 4) An interface between the wireless PHY/MAC and the application can be defined to exchange information between devices for cooperation (thereby, Hard Decision and Soft Decision information can be exchanged without going up to a higher layer).
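Steps 1) to 3) above can be sketched as a tiny negotiation routine. Every field name, the device list, and the election rule (pick any AI-capable device as the representative) are assumptions for illustration, not part of the proposed signaling format:

```python
# Illustrative capability set for two cooperating devices.
devices = [
    {"name": "TV",           "sensing": True, "ai": True},   # full pipeline
    {"name": "air purifier", "sensing": True, "ai": False},  # collection only
]

def negotiate(devices):
    """1) exchange capability info; 2) elect a representative device
    (here: the first AI-capable one); 3) decide what each device shares."""
    rep = next(d for d in devices if d["ai"])
    plan = {}
    for d in devices:
        # AI-capable devices can share Hard Decision results;
        # others can share only Soft Decision (pre-decision) data.
        plan[d["name"]] = ["hard", "soft"] if d["ai"] else ["soft"]
    return rep["name"], plan

rep, plan = negotiate(devices)
```

In the proposed protocol this exchange would ride on PHY/MAC signaling rather than application-level data structures.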
Through the method proposed in this specification, it becomes possible to implement a system that efficiently collects, learns, and predicts the signal patterns suited to the user's home environment even when multiple devices coexist, creating a new paradigm of future IoT smart home devices.
The existing wireless-sensing-based protocol and operation are as follows. 1) The transmitting device transmits a measurable signal, such as a Wi-Fi Channel State Information (CSI) signal. 2) The receiving device measures the CSI wireless signal sent by the transmitting device. 3) The transmitting/receiving device performs wireless signal pre-processing to refine the collected signal. 4) The transmitting/receiving device performs feature extraction for training and prediction. 5) The transmitting/receiving device divides the data set that has undergone wireless signal pre-processing and feature extraction at an appropriate ratio (e.g., 8:2), uses the larger portion as the input for training, and uses the remaining data to evaluate the trained model.
FIG. 8 shows the problems that arise when a wireless-sensing-based device performs the procedure of measuring, processing, and predicting wireless signals on its own.
In the existing scheme, a wireless-sensing-based device performs the procedure of measuring, processing, and predicting wireless signals such as Wi-Fi Channel State Information (CSI) on its own. In this case, the supported capabilities may differ depending on the device type. For example, there may be devices that can only collect wireless signals; when there is also a device that can collect wireless signals and train/predict, the device that can only collect wireless signals cannot train/predict (a compatibility problem with legacy products). In addition, a device that can collect wireless signals and train/predict neither knows the capabilities of a device that can only collect wireless signals nor can it receive the wireless signals that device has collected.
FIG. 8 illustrates the problems of the existing scheme with a concrete example. First, (1) the AP transmits a wireless signal such as Wi-Fi CSI. (2) The TV and the air conditioner measure this wireless signal. (3) The TV includes an Artificial Intelligence (AI) function and can train/predict, but the air conditioner does not include an AI function and therefore cannot. (4) Now, assume that Paul passes between the AP and the TV/air conditioner. (5) The TV measures, trains on, and predicts from wireless signals such as CSI and recognizes that Paul passes in front of it, but the air conditioner does not. (6) The result of the TV's training about Paul is not delivered from the TV to the air conditioner, and (7) Paul's wireless signal is not delivered from the air conditioner to the TV. That is, each device can only operate according to its own capabilities, and because the devices do not cooperate with each other, they cannot train and predict with the help of peripheral devices.
FIG. 9 shows a block diagram of a wireless sensing device.
Specifically, FIG. 9 shows the functional units responsible for the wireless-sensing-based cooperation architecture protocol and the procedure for sharing information.
FIG. 10 shows a block diagram of the functional units of a wireless sensing device.
Specifically, FIG. 10 shows the function blocks for the wireless sensing architecture, negotiation, and signal exchange.
The functional units shown in FIGS. 9 and 10 may be defined as follows.
First, the Wireless PHY/MAC Driver block 10 serves to exchange information with the PHY/MAC layer of the wireless sensing device. The Device Discovery block 20 serves to discover peripheral devices. The Capabilities Negotiation block 30 serves to negotiate, between devices, whether the discovered peripheral devices support wireless sensing, the designation of a representative device, and the decision methods. The Wireless Sensing block 40 serves to transmit and collect wireless signals such as Wi-Fi Channel State Information (CSI). The Signal Pre-Processing block 50 serves to perform CSI measurement, phase offset calibration, de-noising, and the like. The Feature Selection &amp; Extraction block 60 serves to select and extract features for training and prediction. The Machine/Deep Learning block 70 serves to perform training and prediction through various machine/deep-learning-based algorithms. The information exchange network unit 80 is a wireless network that transmits and receives the information of the Device Discovery block 20, the Capabilities Negotiation block 30, and the Wireless Sensing block 40. The AI Cloud 90 is a cloud server that necessarily includes the Deep/Machine Learning 70 function and, depending on the configuration of the connected devices, can perform some or all of the Signal Pre-processing 50 and Feature Selection/Extraction 60 functions. The cloud information exchange network unit 100 is a network for exchanging information between the cloud and the wireless sensing devices.
FIG. 11 shows a block diagram of a wireless sensing device including interfaces.
FIG. 11 defines the interfaces for exchanging information with the Wireless PHY/MAC Driver. The Soft Decision Interface 110 is an interface between the PHY/MAC and the application for exchanging the Wireless Sensing 40, Signal Pre-Processing 50, and Feature Selection/Extraction 60 information in the wireless sensing device with the Wireless PHY/MAC Driver 10. The Hard Decision Interface 120 is an interface between the PHY/MAC and the application for exchanging the Deep/Machine Learning 70 information in the wireless sensing device with the Wireless PHY/MAC Driver 10.
The decision methods are defined below.
Hard Decision information may be data that has passed through the Wireless Sensing 40 data collection, Signal Pre-processing 50, Feature Selection/Extraction 60, and Machine/Deep Learning 70 processes (it may be a prediction result determined by AI).
Soft Decision information has three types. Soft Decision 1 may be data that has passed through the Wireless Sensing 40 process (it may be raw data). Soft Decision 2 may be data that has passed through the Wireless Sensing 40 and Signal Pre-processing 50 processes (it may take a form such as noise-removed data obtained through signal pre-processing). Soft Decision 3 may be data that has passed through the Wireless Sensing 40, Signal Pre-processing 50, and Feature Selection/Extraction 60 processes (it may be the input data for a sensing prediction through AI).
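The decision definitions above can be viewed as cut-off points along one processing chain: each Soft Decision is the output of a prefix of the sensing pipeline, and the Hard Decision is the output of the full pipeline. A minimal sketch of this reading follows; the stage names and the dictionary layout are assumptions for illustration, not part of the specification.

```python
# Hypothetical sketch: each decision type corresponds to the output of a
# prefix of the sensing pipeline (stage names follow blocks 40-70 in the text).
PIPELINE = ["wireless_sensing", "signal_pre_processing",
            "feature_selection_extraction", "machine_deep_learning"]

# Number of pipeline stages each decision type has passed through.
DECISION_STAGES = {
    "soft_decision_1": 1,  # raw sensing data (block 40 only)
    "soft_decision_2": 2,  # + signal pre-processing (block 50)
    "soft_decision_3": 3,  # + feature selection/extraction (block 60)
    "hard_decision":   4,  # + machine/deep learning prediction (block 70)
}

def stages_for(decision: str) -> list[str]:
    """Return the pipeline stages whose output the given decision carries."""
    return PIPELINE[:DECISION_STAGES[decision]]
```

Under this view, a receiver of any Soft Decision knows exactly which stages remain to be run before a Hard Decision can be produced.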
The decision method may include one or more of the above definitions according to the capabilities of the device. For example, a device that supports AI can share Hard Decision and Soft Decision information with other devices, while a device that does not support AI can share only Soft Decision information with other devices.
Hereinafter, the types of cooperative devices are defined.
FIG. 12 is a diagram illustrating the types of cooperative devices.
Level 1 includes the Wireless PHY/MAC Driver 10, Device Discovery 20, Capabilities Negotiation 30, and Wireless Sensing 40 functions. A Level 1 device is defined as a device that can collect wireless sensing raw data. A Level 1 device can deliver a Soft Decision (sensing raw data) to other devices and receive a Hard Decision (prediction result) from other devices.
Level 2 adds the Signal Pre-processing 50 function to the Level 1 device functions. A Level 2 device is defined as a device that can perform noise removal, signal refinement, and the like on the collected sensing data through signal pre-processing. A Level 2 device can deliver a Soft Decision (sensing raw data or signal pre-processed data) to other devices and receive a Hard Decision (prediction result) from other devices.
Level 3 adds the Feature Selection/Extraction 60 function to the Level 2 device functions. A Level 3 device is defined as a device that can select and extract, from the refined sensing data, features for generating input data for machine learning training/prediction. A Level 3 device can deliver a Soft Decision (sensing raw data, signal pre-processed data, or input data for training/prediction) to other devices and receive a Hard Decision (prediction result) from other devices.
Level 4 adds the Deep/Machine Learning 70 function to the Level 3 device functions. A Level 4 device is defined as a device that can collect and pre-process sensing data and perform machine learning training/prediction. A Level 4 device can deliver a Soft Decision (sensing raw data, signal pre-processed data, or input data for training/prediction) or a Hard Decision (prediction result) to other devices and receive a Hard Decision (prediction result) from other devices.
The AI Cloud can receive Soft Decisions from Level 1 to Level 4 devices and deliver Hard Decisions. It may include Signal Pre-Processing 50 and Feature Selection/Extraction 60, and must include Machine/Deep Learning 70 (in order to deliver a Hard Decision).
That is, all of the Level 1 to Level 4 devices can receive a Hard Decision from another device (the AI Cloud or a Level 4 device), and the Soft Decision information each device delivers differs by level (a Level 4 device can also deliver Hard Decision information).
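Because each level simply adds one block on top of the previous level, the level definitions above can be sketched as a cumulative capability table. The block names mirror FIG. 10's reference numerals; the table form itself is an illustrative assumption, not language from the specification.

```python
# Hypothetical sketch of the cumulative Level definitions (FIG. 12):
# every level includes the four base blocks, and each higher level adds
# one more processing block.
BASE_BLOCKS = ["wireless_phy_mac_driver", "device_discovery",
               "capabilities_negotiation", "wireless_sensing"]
LEVEL_EXTRA = {
    1: [],
    2: ["signal_pre_processing"],
    3: ["signal_pre_processing", "feature_selection_extraction"],
    4: ["signal_pre_processing", "feature_selection_extraction",
        "machine_deep_learning"],
}

def blocks_of(level: int) -> list[str]:
    """All functional blocks a device of the given level includes."""
    return BASE_BLOCKS + LEVEL_EXTRA[level]

def can_emit_hard_decision(level: int) -> bool:
    # Only a device with the Machine/Deep Learning block can predict,
    # hence only Level 4 can originate a Hard Decision.
    return "machine_deep_learning" in blocks_of(level)
```

This makes the asymmetry in the paragraph above explicit: every level can receive a Hard Decision, but only Level 4 can originate one.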
FIG. 13 shows an example of a procedure in which wireless sensing devices cooperate to perform learning and prediction. FIG. 13 illustrates the cooperative architecture protocol and signaling procedure for the case in which Device B provides Soft Decision information to Device A.
Referring to FIG. 13, Device A and Device B may find and detect each other by transmitting and receiving a Device Discovery Request/Response through Device Discovery 20 and the information exchange network unit 80. For example, when Device A transmits a Device Discovery Request to Device B, Device B may respond with a Device Discovery Response, and Device A may thereby confirm that Device B exists.
After the discovery procedure, Device A and Device B may perform capabilities negotiation by transmitting and receiving a Capabilities Negotiation Request/Response/Confirm through Capabilities Negotiation 30 and the information exchange network unit 80. For example, when Device A transmits a Capabilities Negotiation Request to Device B, Device B may respond with a Capabilities Negotiation Response, and Device A may complete the capabilities negotiation by transmitting a Capabilities Negotiation Confirm to Device B.
Through the capabilities negotiation, Device A and Device B can set 1) whether wireless sensing is supported, 2) the decision method, and 3) the representative device. That is, capabilities information is exchanged between the devices through the capabilities negotiation, and a representative device can be set.
Specifically, a device that has completed device discovery transmits a Capabilities Negotiation Request to start the capabilities negotiation between the devices. The device that receives the Capabilities Negotiation Request carries its own capabilities (whether wireless sensing is supported, the decision method, etc.) in a Capabilities Negotiation Response and transmits it to the device that sent the request. The device that receives the Capabilities Negotiation Response compares the received capabilities with its own, sets the representative device, and transmits its own capabilities (whether wireless sensing is supported, the decision method, and the representative-device setting) in a Capabilities Negotiation Confirm.
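The three-message exchange above can be sketched as follows. The message field names are illustrative assumptions (the specification does not define a frame format), and the representative-device choice here uses a simplified level-only rule; the full selection rules described later also consider AI support, device performance, and AI Cloud connectivity.

```python
# Hypothetical sketch of the Request/Response/Confirm handshake.
# Field names are illustrative, not taken from the specification.

def negotiate(initiator: dict, responder: dict) -> tuple[dict, dict, dict]:
    """Build the three capabilities-negotiation messages in order."""
    # 1) Initiator starts the negotiation.
    request = {"type": "CapabilitiesNegotiationRequest"}
    # 2) Responder answers with its own capabilities.
    response = {"type": "CapabilitiesNegotiationResponse",
                "sensing_supported": responder["level"] >= 1,
                "decision_methods": responder["decision_methods"]}
    # 3) Initiator compares capabilities, picks the representative device
    #    (simplified: higher level wins), and confirms.
    rep = "initiator" if initiator["level"] >= responder["level"] else "responder"
    confirm = {"type": "CapabilitiesNegotiationConfirm",
               "sensing_supported": initiator["level"] >= 1,
               "decision_methods": initiator["decision_methods"],
               "representative": rep}
    return request, response, confirm
```

Note that only the Confirm carries the representative-device setting, matching the text: the responder reveals its capabilities first, and the initiator decides after comparing.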
In the capabilities information exchange, regarding whether wireless sensing is supported, a device having Level 1 or higher capabilities informs its counterpart that it supports wireless sensing. In the decision method exchange, each device delivers to its counterpart the decision information matching its own capabilities, according to the decision method definitions described above.
When setting the representative device, a device that supports AI may be set as the representative device in preference to a device that does not. In addition, a device with a higher level ranking may be set as the representative device. Among devices of the same level, the device with the better performance (device state, burden, etc.) may be set as the representative device. Among devices of the same level that do not support AI, a device connected to the AI Cloud 90 may be set as the representative device.
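The selection rules above form a strict priority order, which can be expressed compactly as a sort key. This is a sketch under assumed field names (`supports_ai`, `level`, `performance`, `cloud_connected`); the specification does not define how "performance" is quantified.

```python
# Hypothetical sketch of the representative-device selection rules,
# expressed as a tuple sort key evaluated left to right:
# AI support first, then level, then performance, then AI Cloud
# connectivity as the tie-breaker among same-level non-AI devices.
def selection_key(device: dict) -> tuple:
    return (device["supports_ai"],      # rule 1: AI-capable device wins
            device["level"],            # rule 2: higher level wins
            device["performance"],      # rule 3: better state / less burden
            device["cloud_connected"])  # rule 4: AI Cloud link breaks ties

def pick_representative(devices: list[dict]) -> dict:
    """Return the device that the rules rank highest."""
    return max(devices, key=selection_key)
```

Tuple comparison gives exactly the lexicographic priority the text describes: a later rule only matters when all earlier rules tie.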
After the capabilities negotiation, each device may perform per-level operations according to its capabilities. That is, Device A and Device B 1) transmit and receive wireless sensing data regardless of their capabilities and, according to their capabilities, perform one or more of 2) signal pre-processing, 3) feature selection/extraction, and 4) deep/machine learning.
For cooperation, a device may deliver at least one of 5) Soft Decision or 6) Hard Decision information to other devices. A Soft Decision may be delivered in one of the following forms: 1) the sensing data itself collected through wireless sensing, 2) signal data pre-processed through signal pre-processing, or 3) input data pre-processed for machine learning training/prediction through feature selection/extraction. A Hard Decision may be the result predicted by a device with AI capabilities (a Level 4 device or a device connected to the AI Cloud) by fusing the data it has collected itself with the data acquired during the cooperation.
A device may deliver 5) Soft Decision or 6) Hard Decision information through the information exchange network unit 80. The specific information delivery method is as follows.
According to its capabilities, a device may deliver Soft Decision or Hard Decision information to a device that includes higher-level functions or to a device that includes the same-level functions.
A device including the Level 1 functions may deliver Soft Decision 1 to devices including the Level 1, 2, 3, or 4 functions and to the AI Cloud 90.
A device including the Level 2 functions may deliver Soft Decisions 1 and 2 to devices including the Level 2, 3, or 4 functions and to the AI Cloud 90.
A device including the Level 3 functions may deliver Soft Decisions 1, 2, and 3 to devices including the Level 3 or 4 functions and to the AI Cloud 90.
A device including the Level 4 functions may deliver Soft Decisions 1, 2, and 3 and Hard Decision information to devices including the Level 4 functions and to the AI Cloud 90.
A device including the Level 4 functions or the AI Cloud that has made a Hard Decision may share the Hard Decision information with the connected devices.
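The per-level delivery rules above follow a simple pattern: a Level-N device shares Soft Decisions 1 through N (capped at 3), targets devices of level N or higher plus the AI Cloud, and only Level 4 additionally shares a Hard Decision. A minimal sketch of that pattern (the string labels are assumptions for illustration):

```python
# Hypothetical sketch of the delivery rules: which decision information a
# device of a given level may share, and with which targets.
def shareable_info(level: int) -> list[str]:
    """Soft Decisions 1..level (capped at 3); Level 4 adds Hard Decision."""
    info = [f"soft_decision_{i}" for i in range(1, min(level, 3) + 1)]
    if level == 4:
        info.append("hard_decision")
    return info

def valid_targets(level: int) -> list[str]:
    """Devices of the same or higher level, plus the AI Cloud."""
    return [f"level_{n}" for n in range(level, 5)] + ["ai_cloud"]
```

The cap at Soft Decision 3 reflects that a Level 4 device's fourth processing stage produces a Hard Decision rather than a fourth Soft Decision.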
FIG. 14 shows various examples of the information delivery method.
According to the left side of FIG. 14, a Level 1 device may deliver Soft Decision 1 to Level 1, 2, 3, and 4 devices and the AI Cloud 90. A Level 2 device may deliver Soft Decisions 1 and 2 to Level 2, 3, and 4 devices and the AI Cloud 90. A Level 3 device may deliver Soft Decisions 1, 2, and 3 to Level 3 and 4 devices and the AI Cloud 90. A Level 4 device may deliver Soft Decisions 1, 2, and 3 and Hard Decision information to Level 4 devices and the AI Cloud 90. A Level 4 device or the AI Cloud may share Hard Decision information with the connected devices.
According to the right side of FIG. 14, information delivery example 1 shows a case in which the AI Cloud performs the prediction and shares the result (or decision). Information delivery example 2 shows a case in which the representative device performs the prediction and shares the result (or decision). Information delivery example 3 shows a case in which each device performs the prediction and shares the result (or decision).
FIG. 15 is a diagram illustrating example 1, in which the AI Cloud performs the prediction and shares the result.
Referring to FIG. 15, a Level 2 device performs peripheral device discovery 20 and, when two Level 1 devices are detected, performs capabilities negotiation 30 with the detected devices. Through the capabilities negotiation 30, the devices determine 1) whether wireless sensing is supported, 2) the decision method, and 3) the representative device. In this case, the Level 2 device, which has the higher level ranking, is determined as the representative device.
The Level 2 device or the AP transmits a measurable signal including CSI information to the Level 1 devices that have performed the capabilities negotiation 30 (Wireless Sensing 40). The Level 2 device and the Level 1 devices measure the measurable signal including the CSI information (Wireless Sensing 40).
After collecting the CSI information 40, the receiving Level 2 device and Level 1 devices deliver their decision information through the information exchange network unit 80 according to the result of the negotiation with the representative device (the Level 2 device).
The representative device processes the decision information received through the information exchange network unit 80 and delivers the data to the AI Cloud 90 through the cloud information exchange network unit 100. In this case, when the decision information delivered from the representative device is Soft Decision 1, the sensing raw data or CSI information 40 collected through wireless sensing may be delivered to the AI Cloud 90 through the cloud information exchange network unit 100.
When the decision information delivered from the representative device is Soft Decision 2, the signal data 50 obtained by applying signal pre-processing to the sensing raw data or CSI information 40 collected through wireless sensing may be delivered to the AI Cloud 90.
After processing the decision information of the devices, the AI Cloud 90 delivers the result (Hard Decision) to the representative device (the Level 2 device) through the cloud information exchange network unit 100, and the representative device (the Level 2 device) shares the result with the other devices (the Level 1 devices) through the information exchange network unit 80.
As described above, when the AI Cloud 90 receives Soft Decision 1 from the representative device, the AI Cloud 90 may process the data through the Signal Pre-processing 50, Feature Extraction 60, and Machine/Deep Learning-based training and prediction 70 processes and deliver the result to the representative device.
When the AI Cloud 90 receives Soft Decision 2 from the representative device, the AI Cloud 90 may process the data through the Feature Extraction 60 and Machine/Deep Learning-based training and prediction 70 processes and deliver the result to the representative device.
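The two cases above generalize to one rule: the AI Cloud resumes the pipeline at the first stage the received Soft Decision has not yet passed. A sketch of this resumption logic follows; the stage functions are placeholders standing in for blocks 50-70, and the list-of-labels data model is an assumption for illustration.

```python
# Hypothetical sketch: the AI Cloud runs only the stages the received
# Soft Decision has not yet passed, then returns the Hard Decision.
def pre_process(data):        return data + ["signal_pre_processing"]
def extract_features(data):   return data + ["feature_extraction"]
def train_and_predict(data):  return data + ["ml_prediction"]

# Remaining stages per received decision type (cf. FIG. 15 flow).
REMAINING = {
    "soft_decision_1": [pre_process, extract_features, train_and_predict],
    "soft_decision_2": [extract_features, train_and_predict],
    "soft_decision_3": [train_and_predict],
}

def cloud_process(decision_type: str, data: list) -> list:
    """Run the outstanding stages; the final output corresponds to the
    Hard Decision delivered back to the representative device."""
    for stage in REMAINING[decision_type]:
        data = stage(data)
    return data
```

This is why the AI Cloud must include the Machine/Deep Learning block but may omit the earlier blocks when its connected devices always deliver later Soft Decision types.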
FIG. 16 is a diagram illustrating example 2, in which the representative device performs the prediction and shares the result.
Referring to FIG. 16, a Level 4 device performs peripheral device discovery 20 and, when a Level 2 device and a Level 1 device are detected, performs capabilities negotiation 30 with the detected devices. Through the capabilities negotiation 30, the devices determine 1) whether wireless sensing is supported, 2) the decision method, and 3) the representative device. In this case, the Level 4 device, which has the higher level ranking, is determined as the representative device.
The Level 4 device or the AP transmits a measurable signal including CSI information to the Level 4, Level 2, and Level 1 devices that have performed the capabilities negotiation 30 (Wireless Sensing 40). The Level 4, Level 2, and Level 1 devices measure the measurable signal including the CSI information (Wireless Sensing 40).
After collecting the CSI information 40, the receiving Level 4, Level 2, and Level 1 devices deliver their decision information (Soft Decision 1 or 2) through the information exchange network unit 80 according to the result of the negotiation with the representative device (the Level 4 device).
The representative device processes the decision information received through the information exchange network unit 80 and shares the result (Hard Decision) through the information exchange network unit 80.
In the case of the Level 2 device, the information delivery is as follows. When the negotiation result for the Level 2 device is Soft Decision 1, the Level 4 device that receives Soft Decision 1 may process the data through the Signal Pre-processing 50, Feature Extraction 60, and Machine/Deep Learning-based training and prediction 70 processes and deliver the result to the Level 2 device. When the negotiation result for the Level 2 device is Soft Decision 2, the Level 4 device that receives Soft Decision 2 may process the data through the Feature Extraction 60 and Machine/Deep Learning-based training and prediction 70 processes and deliver the result to the Level 2 device.
In the case of the Level 1 device, the information delivery is as follows. Since the negotiation result for the Level 1 device will be Soft Decision 1, the Level 4 device that receives Soft Decision 1 may process the data through the Signal Pre-processing 50, Feature Extraction 60, and Machine/Deep Learning-based training and prediction 70 processes and deliver the result to the Level 1 device.
FIG. 17 is a diagram illustrating example 3, in which each device performs the prediction and shares the result.
Referring to FIG. 17, a Level 4 device performs peripheral device discovery 20 and, when two other Level 4 devices are detected, performs capabilities negotiation 30 with the detected devices. Through the capabilities negotiation 30, the devices determine 1) whether wireless sensing is supported, 2) the decision method, and 3) the representative device. Since the level rankings of the devices are all the same, a device with better performance, a better traffic state, a connection to the AI Cloud 90, or one performing deep/machine learning may become the representative device. In this embodiment, it is assumed that Level 4 Device (1) is determined as the representative device.
Level 4 Device (1) or the AP transmits a measurable signal including CSI information to Level 4 Device (1), Level 4 Device (2), and Level 4 Device (3), which have performed the capabilities negotiation 30 (Wireless Sensing 40). Level 4 Device (1), Level 4 Device (2), and Level 4 Device (3) measure the measurable signal including the CSI information (Wireless Sensing 40).
After collecting the CSI information 40, the receiving Level 4 Device (1), Level 4 Device (2), and Level 4 Device (3) share their decision information (Hard Decision) through the information exchange network unit 80 according to the result of the negotiation with the representative device (Level 4 Device (1)). Alternatively, after exchanging and sharing decision information (Soft Decision) according to the states of Level 4 Device (1), Level 4 Device (2), and Level 4 Device (3), a device that performs deep/machine learning may predict the result and announce the decision information (Hard Decision).
Hereinafter, the above-described embodiment is described with reference to FIGS. 1 to 17.
FIG. 18 is a flowchart illustrating a procedure in which a wireless device performs wireless sensing in cooperation with another device according to the present embodiment.
The present embodiment proposes a method of identifying a user (or gesture) by performing learning and prediction through mutual cooperation when there are multiple devices based on wireless sensing. This embodiment makes it possible to implement a system that efficiently collects, learns from, and predicts signal patterns suited to the user's home environment even when multiple devices coexist, which has the new effect of enabling a new paradigm of future IoT (Internet of Things) smart home devices. The first and second devices described below are assumed to be devices based on wireless sensing.
In step S1810, a first device performs capabilities negotiation with a second device.
In step S1820, the first device receives first decision information from the second device based on the result of the capabilities negotiation.
In step S1830, the first device transmits, to the second device, second decision information that is a result of processing the first decision information.
In this case, the first decision information is preliminary information required for identification based on wireless sensing (a Soft Decision), and the second decision information is a result of identification based on the wireless sensing (a Hard Decision).
Before performing the capabilities negotiation, the first device may perform device discovery to find the second device. When the first device transmits a device discovery request to the second device, the second device may transmit a device discovery response to the first device, through which the first device may confirm the existence of the second device.
Based on the capabilities negotiation, the first device may exchange capability information with the second device and determine a representative device. The capability information may include whether the wireless sensing is supported and the first decision information.
The first decision information may be determined as one of first to third soft decisions based on the levels of the first and second devices. The first soft decision may be raw data of the wireless signal. The second soft decision may be data obtained by applying signal pre-processing to the wireless signal. The third soft decision may be input data extracted from the data obtained by applying signal pre-processing to the wireless signal.
The second decision information may be a result learned and predicted from the first, second, or third soft decision based on machine learning or deep learning.
The levels of the first and second devices may each be determined as one of first to fourth levels based on the capability information.
If the level of the second device is the first level, the first decision information may be the first soft decision. If the level of the second device is the second level, the first decision information may be the first or second soft decision. If the level of the second device is the third level, the first decision information may be the first, second, or third soft decision. If the level of the second device is the fourth level, the first decision information may be the first, second, or third soft decision, or may instead be the second decision information. That is, based on the capability information, the second device may deliver the first decision information (a soft decision) or the second decision information (a hard decision) to a device with functions of a higher or equal level (here, the first device).
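The level rules above amount to a simple lookup plus a direction constraint. A sketch, where "soft1"–"soft3" stand for the three soft decisions and "hard" for the second decision information (the dictionary encoding is an assumption of this sketch):

```python
# Which information a device of a given level can produce, per the text above.
ALLOWED_BY_LEVEL = {
    1: {"soft1"},
    2: {"soft1", "soft2"},
    3: {"soft1", "soft2", "soft3"},
    4: {"soft1", "soft2", "soft3", "hard"},
}

def may_send(sender_level, info, receiver_level):
    """A device may deliver `info` only to a peer of equal or higher level,
    and only if its own level allows producing that kind of information."""
    return receiver_level >= sender_level and info in ALLOWED_BY_LEVEL[sender_level]

print(may_send(2, "soft2", 4))  # True
```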
The representative device may be determined based on the device level, the device performance, or whether the device supports or is connected to an AI cloud (Artificial Intelligence Cloud).
When the first device is determined to be the representative device, the information transfer process is as follows. If the first decision information is the first soft decision, the second decision information may be data obtained from it through signal pre-processing, feature extraction, and a learning and prediction process based on machine learning or deep learning. If the first decision information is the second soft decision, the second decision information may be data obtained from it through the feature extraction and the learning and prediction process based on machine learning or deep learning. If the first decision information is the third soft decision, the second decision information may be data obtained from it through the learning and prediction process based on deep learning. That is, depending on which soft decision information is received, the first device performs the corresponding data processing and delivers the result information (the second decision information) to the second device.
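The representative device's dispatch can be sketched as selecting which remaining stages to run before the learned prediction. The stage implementations below are placeholders (the specification does not prescribe them), and the predicted label is a made-up example:

```python
# Dispatch at the representative device: the received soft-decision kind
# determines which stages still need to run before prediction.

def pre(data):     return data  # signal pre-processing (placeholder)
def feat(data):    return data  # feature extraction (placeholder)
def predict(data): return {"result": "user_A", "input": data}  # ML/DL model (placeholder)

PIPELINES = {
    "soft1": (pre, feat, predict),  # raw data: run the full chain
    "soft2": (feat, predict),       # already pre-processed: skip pre-processing
    "soft3": (predict,),            # already features: prediction only
}

def process(kind, data):
    for stage in PIPELINES[kind]:
        data = stage(data)
    return data  # the hard decision (second decision information)

print(process("soft3", [0.1, 0.2])["result"])  # user_A
```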
When the first device is connected to the AI cloud, the information transfer process is as follows. The first device may transmit the first decision information to the AI cloud and receive the second decision information from the AI cloud. The second decision information may be obtained by the AI cloud performing the learning and prediction process based on machine learning or deep learning on the first decision information. That is, in this embodiment, the AI cloud processes the decision information transmitted by the devices and then delivers the result information (the second decision information) to the first device. The first device may share the result information received from the AI cloud with the second device.
As another example, when the first and second devices are both at the same fourth level, each of them may obtain the result information (the second decision information) based on the result of the capability negotiation and share it with the other device.
After completing the capability negotiation, the first device may transmit a radio signal containing channel state information (CSI) to the second device. The second device may collect and measure the radio signal.
The first and second devices may each include a wireless PHY and MAC driver, a soft decision interface, and a hard decision interface.
The first decision information may be delivered to the wireless PHY and MAC driver through the soft decision interface, and the second decision information through the hard decision interface. Accordingly, the first and second devices can deliver the first and second decision information to the PHY/MAC without going through an upper layer.
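A minimal structural sketch of the two interfaces feeding the PHY/MAC driver directly (class and method names are illustrative assumptions, as is the payload content):

```python
# Soft- and hard-decision interfaces delivering straight to the PHY/MAC
# driver, bypassing upper layers.

class WirelessPhyMacDriver:
    def __init__(self):
        self.delivered = []
    def deliver(self, kind, info):
        self.delivered.append((kind, info))

class SoftDecisionInterface:
    def __init__(self, driver):
        self.driver = driver
    def push(self, info):
        self.driver.deliver("soft", info)

class HardDecisionInterface:
    def __init__(self, driver):
        self.driver = driver
    def push(self, info):
        self.driver.deliver("hard", info)

driver = WirelessPhyMacDriver()
SoftDecisionInterface(driver).push({"csi": [0.3, 0.7]})
HardDecisionInterface(driver).push({"identified": "gesture_swipe"})
print(driver.delivered[0][0], driver.delivered[1][0])  # soft hard
```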
Because the first device shares the second decision information with the second device, both devices come to know the result learned and predicted through cooperation. Based on wireless sensing, the first and second devices may identify a user or a gesture from this result.
FIG. 19 is a flowchart illustrating a procedure in which a wireless device performs wireless sensing in cooperation with another device according to the present embodiment.
This embodiment proposes a method of identifying a user (or a gesture) by performing learning and prediction through mutual cooperation when there are multiple devices based on wireless sensing. Through this embodiment, a system that efficiently collects, learns from, and predicts signal patterns suited to the user's home environment can be implemented even when several devices coexist, which has the new effect of enabling a new paradigm of future IoT (Internet of Things) smart-home devices. It is assumed below that the first and second devices are devices based on wireless sensing.
In step S1910, a second device performs capability negotiation with a first device.
In step S1920, the second device transmits first decision information to the first device based on the result of the capability negotiation.
In step S1930, the second device receives, from the first device, second decision information that is the result of processing the first decision information.
Here, the first decision information is preliminary information (a soft decision) required for identification based on wireless sensing, and the second decision information is an identification result (a hard decision) based on the wireless sensing.
Before performing the capability negotiation, the first device may perform device discovery to find the second device. When the first device transmits a device discovery request to the second device, the second device may transmit a device discovery response to the first device, through which the first device can confirm the presence of the second device.
Based on the capability negotiation, the first device may exchange capability information with the second device and determine a representative device. The capability information may include whether the wireless sensing is supported and the first decision information.
The first decision information may be determined as one of first to third soft decisions based on the levels of the first and second devices. The first soft decision may be raw data of the radio signal. The second soft decision may be data obtained by pre-processing the radio signal. The third soft decision may be input data extracted from the pre-processed data of the radio signal.
The second decision information may be a result learned and predicted from the first, second, or third soft decision based on machine learning or deep learning.
The levels of the first and second devices may each be determined as one of first to fourth levels based on the capability information.
If the level of the second device is the first level, the first decision information may be the first soft decision. If the level of the second device is the second level, the first decision information may be the first or second soft decision. If the level of the second device is the third level, the first decision information may be the first, second, or third soft decision. If the level of the second device is the fourth level, the first decision information may be the first, second, or third soft decision, or may instead be the second decision information. That is, based on the capability information, the second device may deliver the first decision information (a soft decision) or the second decision information (a hard decision) to a device with functions of a higher or equal level (here, the first device).
The representative device may be determined based on the device level, the device performance, or whether the device supports or is connected to an AI cloud (Artificial Intelligence Cloud).
When the first device is determined to be the representative device, the information transfer process is as follows. If the first decision information is the first soft decision, the second decision information may be data obtained from it through signal pre-processing, feature extraction, and a learning and prediction process based on machine learning or deep learning. If the first decision information is the second soft decision, the second decision information may be data obtained from it through the feature extraction and the learning and prediction process based on machine learning or deep learning. If the first decision information is the third soft decision, the second decision information may be data obtained from it through the learning and prediction process based on deep learning. That is, depending on which soft decision information is received, the first device performs the corresponding data processing and delivers the result information (the second decision information) to the second device.
When the first device is connected to the AI cloud, the information transfer process is as follows. The first device may transmit the first decision information to the AI cloud and receive the second decision information from the AI cloud. The second decision information may be obtained by the AI cloud performing the learning and prediction process based on machine learning or deep learning on the first decision information. That is, in this embodiment, the AI cloud processes the decision information transmitted by the devices and then delivers the result information (the second decision information) to the first device. The first device may share the result information received from the AI cloud with the second device.
As another example, when the first and second devices are both at the same fourth level, each of them may obtain the result information (the second decision information) based on the result of the capability negotiation and share it with the other device.
After completing the capability negotiation, the first device may transmit a radio signal containing channel state information (CSI) to the second device. The second device may collect and measure the radio signal.
The first and second devices may each include a wireless PHY and MAC driver, a soft decision interface, and a hard decision interface.
The first decision information may be delivered to the wireless PHY and MAC driver through the soft decision interface, and the second decision information through the hard decision interface. Accordingly, the first and second devices can deliver the first and second decision information to the PHY/MAC without going through an upper layer.
Because the first device shares the second decision information with the second device, both devices come to know the result learned and predicted through cooperation. Based on wireless sensing, the first and second devices may identify a user or a gesture from this result.
2. Device configuration
FIG. 20 shows a modified example of a transmitting device and/or a receiving device of the present specification.
Each device/STA of sub-figures (a)/(b) of FIG. 1 may be modified as shown in FIG. 20. The transceiver 630 of FIG. 20 may be identical to the transceivers 113 and 123 of FIG. 1. The transceiver 630 of FIG. 20 may include a receiver and a transmitter.
The processor 610 of FIG. 20 may be identical to the processors 111 and 121 of FIG. 1. Alternatively, the processor 610 of FIG. 20 may be identical to the processing chips 114 and 124 of FIG. 1.
The memory 150 of FIG. 20 may be identical to the memories 112 and 122 of FIG. 1. Alternatively, the memory 150 of FIG. 20 may be a separate external memory different from the memories 112 and 122 of FIG. 1.
Referring to FIG. 20, the power management module 611 manages power for the processor 610 and/or the transceiver 630. The battery 612 supplies power to the power management module 611. The display 613 outputs results processed by the processor 610. The keypad 614 receives inputs to be used by the processor 610 and may be displayed on the display 613. The SIM card 615 may be an integrated circuit used to securely store an international mobile subscriber identity (IMSI) and its related key, which are used to identify and authenticate subscribers on mobile telephony devices such as mobile phones and computers.
Referring to FIG. 20, the speaker 640 may output sound-related results processed by the processor 610. The microphone 641 may receive sound-related inputs to be used by the processor 610.
The technical features of the present specification described above may be applied to various devices and methods. For example, they may be performed/supported through the devices of FIG. 1 and/or FIG. 20, or applied to only a part of FIG. 1 and/or FIG. 20. For example, the technical features described above may be implemented based on the processing chips 114 and 124 of FIG. 1, based on the processors 111 and 121 and the memories 112 and 122 of FIG. 1, or based on the processor 610 and the memory 620 of FIG. 20. For example, a device of the present specification is a device that identifies a user, a gesture, or the like based on wireless sensing, the device including a memory and a processor operatively coupled to the memory, wherein the processor performs capability negotiation with a second device, receives first decision information from the second device based on a result of the capability negotiation, and transmits second decision information, which is the result of processing the first decision information, to the second device. The first decision information is preliminary information required for identification based on wireless sensing, and the second decision information is a result of identification based on the wireless sensing.
The technical features of the present specification may be implemented based on a computer readable medium (CRM). For example, the CRM proposed by the present specification is at least one computer readable medium including instructions that are executed by at least one processor.
The CRM may store instructions that perform operations including: performing capability negotiation with a second device; receiving first decision information from the second device based on a result of the capability negotiation; and transmitting second decision information, which is the result of processing the first decision information, to the second device. The instructions stored in the CRM of the present specification may be executed by at least one processor. The at least one processor related to the CRM of the present specification may be the processors 111 and 121 or the processing chips 114 and 124 of FIG. 1, or the processor 610 of FIG. 20. Meanwhile, the CRM of the present specification may be the memories 112 and 122 of FIG. 1, the memory 620 of FIG. 20, or a separate external memory/storage medium/disk.
The technical features of the present specification described above are applicable to various applications and business models. For example, they may be applied for wireless communication in a device supporting artificial intelligence (AI).
Artificial intelligence refers to the field that studies artificial intelligence or the methodologies that can create it, and machine learning refers to the field that defines the various problems dealt with in the artificial intelligence field and studies methodologies for solving them. Machine learning is also defined as an algorithm that improves the performance of a task through steady experience with that task.
An artificial neural network (ANN) is a model used in machine learning, and may refer in general to a model with problem-solving ability that is composed of artificial neurons (nodes) forming a network through synaptic connections. An artificial neural network may be defined by the connection pattern between neurons of different layers, the learning process that updates the model parameters, and the activation function that generates an output value.
An artificial neural network may include an input layer, an output layer, and optionally one or more hidden layers. Each layer includes one or more neurons, and the artificial neural network may include synapses connecting the neurons. In an artificial neural network, each neuron may output the value of an activation function applied to the input signals arriving through its synapses, their weights, and a bias.
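The neuron output just described (an activation function applied to the weighted inputs plus a bias) can be written in a few lines; this minimal sketch uses a sigmoid activation as one common choice, which is an assumption rather than anything the text mandates:

```python
import math

def neuron_output(inputs, weights, bias):
    # weighted sum of synaptic inputs plus bias, passed through an activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

print(neuron_output([1.0, 2.0], [0.0, 0.0], 0.0))  # 0.5
```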
Model parameters are parameters determined through learning, and include the weights of synaptic connections and the biases of neurons. Hyperparameters are parameters that must be set before learning in a machine learning algorithm, and include the learning rate, the number of iterations, the mini-batch size, and the initialization function.
The purpose of training an artificial neural network can be seen as determining the model parameters that minimize a loss function. The loss function may be used as an index for determining the optimal model parameters during the learning process of the artificial neural network.
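As a concrete illustration of "determining the model parameters that minimize a loss function", the toy sketch below fits a one-parameter model y = w·x with a squared-error loss by plain gradient descent; the data, model, and learning rate (a hyperparameter fixed before learning) are all made-up examples:

```python
# Toy example: determine the parameter w that minimizes the mean squared
# error of y = w * x, via gradient descent.

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]      # generated by the true parameter w = 2

w = 0.0                   # model parameter, determined through learning
learning_rate = 0.05      # hyperparameter, set before learning begins

for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad

print(round(w, 3))  # 2.0
```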
Machine learning can be classified into supervised learning, unsupervised learning, reinforcement learning, and semi-supervised learning according to the learning method.
Supervised learning refers to a method of training an artificial neural network with labels given for the training data, where a label means the correct answer (or result value) that the artificial neural network must infer when the training data is input to it. Unsupervised learning may refer to a method of training an artificial neural network without labels given for the training data. Reinforcement learning may refer to a learning method in which an agent defined in an environment learns to select the action, or sequence of actions, that maximizes the cumulative reward in each state.
Machine learning implemented with a deep neural network (DNN) that includes a plurality of hidden layers is also called deep learning, and deep learning is a part of machine learning. Hereinafter, machine learning is used in a sense that includes deep learning.
The technical features described above may also be applied to wireless communication of a robot.
A robot may mean a machine that automatically handles or operates a given task by its own capabilities. In particular, a robot with the function of recognizing its environment and performing an operation by its own judgment may be referred to as an intelligent robot.
Robots can be classified into industrial, medical, household, military, and other types according to their purpose or field of use. A robot may be equipped with a driving unit including an actuator or a motor and may perform various physical operations such as moving the robot's joints. In addition, a movable robot includes wheels, brakes, propellers, and the like in its driving unit, through which it may travel on the ground or fly in the air.
The technical features described above may also be applied to a device supporting extended reality.
Extended reality (XR) collectively refers to virtual reality (VR), augmented reality (AR), and mixed reality (MR). VR technology provides only CG images of real-world objects or backgrounds, AR technology provides virtually created CG images on top of images of real objects, and MR technology is a computer graphics technology that mixes and combines virtual objects with the real world.
MR technology is similar to AR technology in that it shows real objects and virtual objects together. However, there is a difference: in AR technology a virtual object is used in a form that complements a real object, whereas in MR technology a virtual object and a real object are used with equal characteristics.
XR technology can be applied to head-mounted displays (HMD), head-up displays (HUD), mobile phones, tablet PCs, laptops, desktops, TVs, digital signage, and the like, and a device to which XR technology is applied may be referred to as an XR device.
The claims described herein may be combined in various ways. For example, the technical features of the method claims of the present specification may be combined and implemented as an apparatus, and the technical features of the apparatus claims of the present specification may be combined and implemented as a method. In addition, the technical features of the method claims and the technical features of the apparatus claims of the present specification may be combined and implemented as an apparatus, or combined and implemented as a method.

Claims (20)

  1. A method in a wireless LAN system, the method comprising:
    performing, by a first device, capability negotiation with a second device;
    receiving, by the first device, first decision information from the second device based on a result of the capability negotiation; and
    transmitting, by the first device, to the second device, second decision information that is a result of processing the first decision information,
    wherein the first decision information is prior information required for identification based on wireless sensing, and
    wherein the second decision information is a result of the identification based on the wireless sensing.
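The three-step exchange of claim 1 can be sketched as a message flow. This is a minimal illustration only: the class, method, and message names are invented for the sketch and do not come from the claim, and the "processing" step is a placeholder.

```python
# Sketch of the claim-1 exchange: capability negotiation, then the second
# device sends first decision information (prior information for wireless-
# sensing identification), and the first device returns second decision
# information (the identification result). All identifiers are illustrative.

class Device:
    def __init__(self, name, sensing_supported):
        self.name = name
        self.sensing_supported = sensing_supported

    def negotiate(self, peer):
        # Capability negotiation: both sides learn whether the peer
        # supports wireless sensing.
        return self.sensing_supported and peer.sensing_supported

    def process(self, first_decision_info):
        # Placeholder for the identification step; claim 1 only requires
        # that processing the first info yields the second info.
        return {"identified": True, "source": first_decision_info["kind"]}

first = Device("first", sensing_supported=True)
second = Device("second", sensing_supported=True)

if first.negotiate(second):
    # Second device -> first device: prior information for identification.
    first_info = {"kind": "soft_decision", "payload": [0.1, 0.2, 0.3]}
    # First device -> second device: the identification result.
    second_info = first.process(first_info)
```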
  2. The method of claim 1, further comprising:
    performing, by the first device, device discovery to find the second device; and
    transmitting, by the first device, a radio signal including channel state information (CSI) to the second device.
  3. The method of claim 2, further comprising:
    exchanging, by the first device, capability information with the second device based on the capability negotiation, and determining a representative device,
    wherein the capability information includes whether the wireless sensing is supported and the first decision information,
    wherein the first decision information is determined as one of first to third soft decisions based on levels of the first and second devices,
    wherein the first soft decision is raw data of the radio signal,
    wherein the second soft decision is data obtained by signal pre-processing of the radio signal,
    wherein the third soft decision is input data extracted from the signal pre-processed data of the radio signal, and
    wherein the second decision information is a result learned and predicted from the first, second, or third soft decision based on machine learning or deep learning.
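The three soft decision types of claim 3 correspond to successive stages of one processing chain over the CSI data. A hedged sketch of that chain, using a 3-tap moving average as a stand-in for the unspecified signal pre-processing and simple statistics as stand-in features (the claim fixes neither):

```python
# The three soft decisions of claim 3 as stages of one pipeline:
#   first  = raw radio-signal (CSI) data,
#   second = signal pre-processed data,
#   third  = input features extracted from the pre-processed data.
# The concrete filter and features below are illustrative placeholders.

def pre_process(raw):
    # Stand-in pre-processing: 3-tap trailing moving average.
    return [sum(raw[max(0, i - 2):i + 1]) / len(raw[max(0, i - 2):i + 1])
            for i in range(len(raw))]

def extract_features(processed):
    # Stand-in feature extraction: mean and peak of the cleaned signal.
    return {"mean": sum(processed) / len(processed), "peak": max(processed)}

raw_csi = [1.0, 3.0, 2.0, 6.0, 4.0]        # first soft decision
preprocessed = pre_process(raw_csi)         # second soft decision
features = extract_features(preprocessed)   # third soft decision
```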
  4. The method of claim 3,
    wherein the levels of the first and second devices are each determined as one of first to fourth levels based on the capability information,
    wherein, when the level of the second device is the first level, the first decision information is determined as the first soft decision,
    wherein, when the level of the second device is the second level, the first decision information is determined as the first or second soft decision,
    wherein, when the level of the second device is the third level, the first decision information is determined as the first, second, or third soft decision, and
    wherein, when the level of the second device is the fourth level, the first decision information is determined as the first, second, or third soft decision, or becomes the second decision information.
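The level-to-format mapping of claim 4 is a simple capability table: the higher the second device's level, the richer the set of formats it may send. One way it could be encoded (the level keys, format names, and "hard" marker are invented for illustration):

```python
# Claim-4 mapping from the second device's level to the first decision
# information it may send. "hard" marks the fourth-level option of sending
# the second decision information (the final identification result) directly.
ALLOWED_FIRST_DECISION = {
    1: {"soft1"},
    2: {"soft1", "soft2"},
    3: {"soft1", "soft2", "soft3"},
    4: {"soft1", "soft2", "soft3", "hard"},
}

def may_send(level, kind):
    # True if a second device of the given level may send this format.
    return kind in ALLOWED_FIRST_DECISION[level]
```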
  5. The method of claim 4,
    wherein the representative device is determined based on a device level, device performance, or whether a device supports or is connected to an artificial intelligence (AI) cloud.
  6. The method of claim 5,
    wherein, when the first device is determined as the representative device:
    when the first decision information is the first soft decision, the second decision information is data obtained by applying, to the first decision information, signal pre-processing, feature extraction, and a learning and prediction process based on the machine learning or the deep learning,
    when the first decision information is the second soft decision, the second decision information is data obtained by applying, to the first decision information, the feature extraction and the learning and prediction process based on the machine learning or the deep learning, and
    when the first decision information is the third soft decision, the second decision information is data obtained by applying, to the first decision information, a learning and prediction process based on the deep learning.
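Claim 6 has the representative device run only the stages the sender has not already applied: a first soft decision still needs pre-processing, feature extraction, and learning, while a third soft decision needs only the learning and prediction step. A sketch under that reading, with trivial stand-ins for every stage (none of the stage implementations or thresholds come from the claim):

```python
# Representative-device processing per claim 6: apply only the remaining
# stages of the chain, depending on how far the sender already processed
# the signal. All stage implementations are placeholders.

def pre_process(data):
    return [x / 2 for x in data]                   # stand-in pre-processing

def extract_features(data):
    return [max(data), min(data)]                  # stand-in feature extraction

def learn_and_predict(features):
    # Stand-in for the ML/DL learning and prediction step.
    return "motion" if features[0] > 1.0 else "no_motion"

def representative_process(kind, payload):
    if kind == "soft1":                  # raw data: all three stages remain
        payload = pre_process(payload)
    if kind in ("soft1", "soft2"):       # pre-processed: two stages remain
        payload = extract_features(payload)
    return learn_and_predict(payload)    # soft3: learning/prediction only
```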
  7. The method of claim 5, further comprising, when the first device is connected to the AI cloud:
    transmitting, by the first device, the first decision information to the AI cloud; and
    receiving, by the first device, the second decision information from the AI cloud,
    wherein the second decision information is obtained by the AI cloud performing a learning and prediction process on the first decision information based on the machine learning or the deep learning.
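When it is connected to an AI cloud, claim 7 lets the first device offload the learning and prediction step: it forwards the first decision information and receives the second decision information back. A minimal round-trip sketch in which the cloud is a local stub; the class, method names, and the averaging "model" are all invented for illustration:

```python
# Claim-7 offload: the first device forwards the first decision information
# to an AI cloud, which runs learning/prediction and returns the second
# decision information. The cloud here is a local stub, not a real service.

class AICloudStub:
    def learn_and_predict(self, first_info):
        # Stand-in for the cloud-side ML/DL learning and prediction.
        score = sum(first_info["samples"]) / len(first_info["samples"])
        return {"label": "present" if score > 0.5 else "absent"}

def identify_via_cloud(cloud, first_info):
    # First device -> cloud -> first device round trip.
    return cloud.learn_and_predict(first_info)

result = identify_via_cloud(AICloudStub(), {"samples": [0.9, 0.8, 0.7]})
```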
  8. The method of claim 1,
    wherein the first and second devices each include a wireless PHY and MAC driver, a soft decision interface, and a hard decision interface,
    wherein the first decision information is passed to the wireless PHY and MAC driver through the soft decision interface, and
    wherein the second decision information is passed to the wireless PHY and MAC driver through the hard decision interface.
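Claim 8 routes the two kinds of decision information through two distinct interfaces into the same wireless PHY and MAC driver, so the driver can distinguish intermediate sensing data from final results. A structural sketch; the interface and driver names follow the claim's terms, but everything else is invented:

```python
# Claim-8 structure: first decision information arrives over the soft
# decision interface, second decision information over the hard decision
# interface, and both land in the wireless PHY and MAC driver.

class WirelessPhyMacDriver:
    def __init__(self):
        self.received = []

    def deliver(self, interface, info):
        # Record which interface each piece of information came through.
        self.received.append((interface, info))

class SensingDevice:
    def __init__(self):
        self.driver = WirelessPhyMacDriver()

    def soft_decision_interface(self, first_info):
        self.driver.deliver("soft", first_info)   # first decision information

    def hard_decision_interface(self, second_info):
        self.driver.deliver("hard", second_info)  # second decision information

dev = SensingDevice()
dev.soft_decision_interface({"csi": [0.1, 0.2]})
dev.hard_decision_interface({"identified": True})
```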
  9. A first device in a wireless LAN system, the first device comprising:
    a memory;
    a transceiver; and
    a processor operatively coupled with the memory and the transceiver, wherein the processor is configured to:
    perform capability negotiation with a second device;
    receive first decision information from the second device based on a result of the capability negotiation; and
    transmit, to the second device, second decision information that is a result of processing the first decision information,
    wherein the first decision information is prior information required for identification based on wireless sensing, and
    wherein the second decision information is a result of the identification based on the wireless sensing.
  10. The first device of claim 9, wherein the processor is further configured to:
    perform device discovery to find the second device; and
    transmit a radio signal including channel state information (CSI) to the second device.
  11. The first device of claim 10,
    wherein the processor is further configured to exchange capability information with the second device based on the capability negotiation and to determine a representative device,
    wherein the capability information includes whether the wireless sensing is supported and the first decision information,
    wherein the first decision information is determined as one of first to third soft decisions based on levels of the first and second devices,
    wherein the first soft decision is raw data of the radio signal,
    wherein the second soft decision is data obtained by signal pre-processing of the radio signal,
    wherein the third soft decision is input data extracted from the signal pre-processed data of the radio signal, and
    wherein the second decision information is a result learned and predicted from the first, second, or third soft decision based on machine learning or deep learning.
  12. The first device of claim 11,
    wherein the levels of the first and second devices are each determined as one of first to fourth levels based on the capability information,
    wherein, when the level of the second device is the first level, the first decision information is determined as the first soft decision,
    wherein, when the level of the second device is the second level, the first decision information is determined as the first or second soft decision,
    wherein, when the level of the second device is the third level, the first decision information is determined as the first, second, or third soft decision, and
    wherein, when the level of the second device is the fourth level, the first decision information is determined as the first, second, or third soft decision, or becomes the second decision information.
  13. The first device of claim 12,
    wherein the representative device is determined based on a device level, device performance, or whether a device supports or is connected to an artificial intelligence (AI) cloud.
  14. The first device of claim 13,
    wherein, when the first device is determined as the representative device:
    when the first decision information is the first soft decision, the second decision information is data obtained by applying, to the first decision information, signal pre-processing, feature extraction, and a learning and prediction process based on the machine learning or the deep learning,
    when the first decision information is the second soft decision, the second decision information is data obtained by applying, to the first decision information, the feature extraction and the learning and prediction process based on the machine learning or the deep learning, and
    when the first decision information is the third soft decision, the second decision information is data obtained by applying, to the first decision information, a learning and prediction process based on the deep learning.
  15. The first device of claim 13,
    wherein, when the first device is connected to the AI cloud, the processor is further configured to:
    transmit the first decision information to the AI cloud; and
    receive the second decision information from the AI cloud,
    wherein the second decision information is obtained by the AI cloud performing a learning and prediction process on the first decision information based on the machine learning or the deep learning.
  16. The first device of claim 9,
    wherein the first and second devices each include a wireless PHY and MAC driver, a soft decision interface, and a hard decision interface,
    wherein the first decision information is passed to the wireless PHY and MAC driver through the soft decision interface, and
    wherein the second decision information is passed to the wireless PHY and MAC driver through the hard decision interface.
  17. A method in a wireless LAN system, the method comprising:
    performing, by a second device, capability negotiation with a first device;
    transmitting, by the second device, first decision information to the first device based on a result of the capability negotiation; and
    receiving, by the second device, from the first device, second decision information that is a result of processing the first decision information,
    wherein the first decision information is prior information required for identification based on wireless sensing, and
    wherein the second decision information is a result of the identification based on the wireless sensing.
  18. A second device in a wireless LAN system, the second device comprising:
    a memory;
    a transceiver; and
    a processor operatively coupled with the memory and the transceiver, wherein the processor is configured to:
    perform capability negotiation with a first device;
    transmit first decision information to the first device based on a result of the capability negotiation; and
    receive, from the first device, second decision information that is a result of processing the first decision information,
    wherein the first decision information is prior information required for identification based on wireless sensing, and
    wherein the second decision information is a result of the identification based on the wireless sensing.
  19. At least one computer-readable medium storing instructions that, based on being executed by at least one processor, cause operations comprising:
    performing capability negotiation with a second device;
    receiving first decision information from the second device based on a result of the capability negotiation; and
    transmitting, to the second device, second decision information that is a result of processing the first decision information,
    wherein the first decision information is prior information required for identification based on wireless sensing, and
    wherein the second decision information is a result of the identification based on the wireless sensing.
  20. An apparatus in a wireless LAN system, the apparatus comprising:
    a memory; and
    a processor operatively coupled with the memory, wherein the processor is configured to:
    perform capability negotiation with a second device;
    receive first decision information from the second device based on a result of the capability negotiation; and
    transmit, to the second device, second decision information that is a result of processing the first decision information,
    wherein the first decision information is prior information required for identification based on wireless sensing, and
    wherein the second decision information is a result of the identification based on the wireless sensing.
PCT/KR2020/012003 2020-09-07 2020-09-07 Method and apparatus by which wireless device cooperates with another device to perform wireless sensing on basis of wireless sensing WO2022050461A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020237005371A KR20230043134A (en) 2020-09-07 2020-09-07 Method and apparatus for performing wireless sensing in cooperation with other devices based on wireless sensing
PCT/KR2020/012003 WO2022050461A1 (en) 2020-09-07 2020-09-07 Method and apparatus by which wireless device cooperates with another device to perform wireless sensing on basis of wireless sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/012003 WO2022050461A1 (en) 2020-09-07 2020-09-07 Method and apparatus by which wireless device cooperates with another device to perform wireless sensing on basis of wireless sensing

Publications (1)

Publication Number Publication Date
WO2022050461A1 true WO2022050461A1 (en) 2022-03-10

Family

ID=80491202

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/012003 WO2022050461A1 (en) 2020-09-07 2020-09-07 Method and apparatus by which wireless device cooperates with another device to perform wireless sensing on basis of wireless sensing

Country Status (2)

Country Link
KR (1) KR20230043134A (en)
WO (1) WO2022050461A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070063115A (en) * 2005-12-14 2007-06-19 삼성전자주식회사 Apparatus and method for iterative detection and decoding in mimo wireless communication system
KR20140135569A (en) * 2013-05-16 2014-11-26 삼성전자주식회사 Method and divece for communication
KR20170071386A (en) * 2015-12-15 2017-06-23 한국과학기술원 Communication scheme in cognitive radio networks based on coorperative sensing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GOYAL NEETU, MATHUR SANJAY: "Spectrum Sensing and Energy Efficiency Strategies in Cognitive Radio Networks-Perspective and Prospects 1", INTERNATIONAL JOURNAL OF APPLIED ENGINEERING RESEARCH (NEW DELHI), RESEARCH INDIA PUBLICATIONS, IN, vol. 13, no. 5, 1 January 2018 (2018-01-01), IN , pages 2395 - 2411, XP055906145, ISSN: 0973-4562 *
KIM JIN-WOO; HUR KYEONG; LEE SEONG-RO: "Channel State Information Based Distributed Reservation Protocol for Energy Efficiency in WiMedia Networks", WIRELESS PERSONAL COMMUNICATIONS., SPRINGER, DORDRECHT., NL, vol. 80, no. 2, 17 September 2014 (2014-09-17), NL , pages 769 - 784, XP035436091, ISSN: 0929-6212, DOI: 10.1007/s11277-014-2040-4 *

Also Published As

Publication number Publication date
KR20230043134A (en) 2023-03-30

Similar Documents

Publication Publication Date Title
WO2019245199A1 (en) Method for performing measurement and wireless communication device
WO2021246807A1 (en) Method and apparatus for performing sensing in wireless lan system
WO2022025670A1 (en) Methods and apparatus for mitigating codebook inaccuracy when using hierarchical beam operations in a wireless communication system
WO2021246842A1 (en) Method and device for performing sensing in wireless lan system
WO2020027615A1 (en) Method and communication device for performing measurement
WO2022092650A1 (en) Method and apparatus for performing sensing in wireless lan system
WO2021256832A1 (en) Method and device for performing sensing in wireless lan system
WO2021251625A1 (en) Method and apparatus for handling master cell group failure in wireless communication system
WO2020022856A1 (en) Method and apparatus for reporting channel state information in wireless communication system
WO2022010260A1 (en) Multi-link setup in wireless communication system
WO2020145793A1 (en) Method for transmitting and receiving plurality of physical downlink shared channels in wireless communication system, and device therefor
WO2020204538A1 (en) Method for transmitting channel state information in wireless communication system and device therefor
WO2020022748A1 (en) Method for reporting channel state information and device therefor
WO2021256831A1 (en) Method and device for performing sensing in wireless lan system
WO2022005165A1 (en) P2p transmission method in wireless lan system
WO2021251541A1 (en) Method and device for performing wi-fi sensing in wireless lan system
WO2022139449A1 (en) Improved wireless lan sensing procedure
WO2021256828A1 (en) Method and apparatus for performing sensing in wireless lan system
WO2020032507A1 (en) Method for transmitting and receiving reference signal for radio link monitoring in unlicensed band and device therefor
WO2021256830A1 (en) Method and device for carrying out sensing in wireless lan system
WO2021256838A1 (en) Method and device for performing sensing in wireless lan system
WO2021225191A1 (en) Method and device for generating user identification model on basis of wireless sensing
WO2021251685A1 (en) Method and apparatus for handling secondary cell group failure in wireless communication system
WO2022050461A1 (en) Method and apparatus by which wireless device cooperates with another device to perform wireless sensing on basis of wireless sensing
WO2020145791A1 (en) Method for transceiving plurality of physical downlink shared channels in wireless communication system and apparatus therefor

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 20237005371

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20952560

Country of ref document: EP

Kind code of ref document: A1