WO2023071565A1 - Automatic control method based on human body detection, first electronic device, and system - Google Patents


Info

Publication number
WO2023071565A1
WO2023071565A1 (PCT/CN2022/118440; CN2022118440W)
Authority
WO
WIPO (PCT)
Prior art keywords
user
notification message
video
message
area
Prior art date
Application number
PCT/CN2022/118440
Other languages
English (en)
Chinese (zh)
Inventor
董伟
徐昊玮
李建州
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2023071565A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/33 Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • the present application relates to the field of automatic control, and in particular to an automatic control method based on human-body sensing, a first electronic device, and a system.
  • IoT (Internet of Things) devices are also called smart devices, etc.
  • a system of interconnected IoT devices is also known as whole-house intelligence, a whole-house smart home, a smart housekeeper, etc.
  • the present application provides an automatic control method based on human body perception, a first electronic device and a system.
  • the technical solution of the present application enables an IoT device to automatically perform a certain operation when the user approaches it, without requiring the user to perform any operation or carry any electronic device, which greatly improves the user experience.
  • the IoT devices do not require hardware changes; for example, a smart speaker generally has no camera, and no additional camera needs to be installed.
  • the present application provides a communication system based on human perception.
  • the system includes a central device, a first electronic device, and R second electronic devices, where any two of the central device, the first electronic device, and any second electronic device (any one of the R second electronic devices) communicate by wired or wireless communication; the first electronic device includes a first ultra-wideband module and a millimeter-wave radar module, and optionally also includes an IMU module.
  • the R second electronic devices include a mobile device, and the mobile device includes a second ultra-wideband module; R is a positive integer greater than or equal to 1.
  • based on the first electronic device's position measurement of the mobile device, the measurement and conversion of the human body position, and the communication between the central device and the first electronic device, the central device acquires, in the whole-house coordinate system it provides, the location information of the R second electronic devices and the location information of the user; according to the location information of the user and of the R second electronic devices, the central device controls or notifies at least one of the R second electronic devices to perform a preset operation; in response to the control or notification of the central device, the at least one second electronic device executes the preset operation.
  • the first electronic device obtains the position of any second electronic device and of any user in the whole house through the first ultra-wideband module and the millimeter-wave radar module; based on the position of the user and the position of the device, the second electronic device is controlled or notified to perform a preset operation.
  • in this way, the IoT device (the second electronic device) automatically performs a certain operation without the user performing any operation or carrying any electronic device, which is convenient and quick.
  • based on the first electronic device's position measurement of the mobile device, the measurement and conversion of the human body position, and the communication between the central device and the first electronic device, the central device periodically or in real time obtains, in the whole-house coordinate system it provides, the position information of the R second electronic devices and the position information of the user. In this way, the central device can control the second electronic devices in real time, improving their automatic-control efficiency.
  • the location information of the user is the location information of a user.
  • the location information of the user is central location information of the multiple users.
  • a user's position is represented by its coordinates in a coordinate system (such as a whole-room coordinate system).
  • the center position of multiple users is the average of the users' coordinates.
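The averaging step above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are not from the patent):

```python
def center_position(user_coords):
    """Return the center position of one or more users.

    user_coords: list of (x, y) coordinates in the whole-house
    coordinate system. For a single user this is just that user's
    position; for multiple users it is the coordinate-wise average,
    as described above.
    """
    if not user_coords:
        raise ValueError("at least one user position is required")
    n = len(user_coords)
    cx = sum(x for x, _ in user_coords) / n
    cy = sum(y for _, y in user_coords) / n
    return (cx, cy)
```

The same centroid serves as "the position of the user" wherever the text speaks of multiple users.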
  • T second electronic devices among the R second electronic devices support voice wake-up; in response to receiving the wake-up voice, the T second electronic devices send a first message to the central device; in response to receiving at least one first message, the central device obtains the position information of the T second electronic devices and the position information of the user in the whole-house coordinate system, determines the device closest to the user among those electronic devices, and sends a first indication message to that device; in response to receiving the first indication message, the device closest to the user wakes up; T is a positive integer greater than or equal to 1 and less than or equal to R.
  • in response to receiving at least one first message, the hub device obtaining the position information of the T second electronic devices and of the user in the whole-house coordinate system includes: in response to receiving one first message, the central device obtains, in the whole-house coordinate system, the location information of the second electronic device that sent the first message and the first room or first area where it is located; the central device then obtains, in the whole-house coordinate system, the location information of the user in that first room or first area.
  • that is, when the central device receives a first message, it obtains the first room or first area where the second electronic device corresponding to the first message is located, obtains the location information of the users in that first room or first area, and selects the device closest to the user in that room or area to respond to the wake-up.
  • in response to receiving a plurality of first messages within a certain period of time, the central device obtains the position information of the T second electronic devices and the position information of the user in the whole-house coordinate system.
  • that is, the central device receives the first messages sent by all second electronic devices capable of sending them, and determines, from the T second electronic devices, the device closest to the user to respond to the wake-up.
  • based on the communication between the central device and T second electronic devices among the R second electronic devices, the central device learns in advance that the T second electronic devices support voice wake-up; T is a positive integer greater than or equal to 1 and less than or equal to R; the user is one user or multiple users.
  • when there are multiple users, the device closest to the user is the device closest to the center position of the multiple users.
  • the center position of multiple users is the average of the users' coordinates.
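The closest-device selection in the wake-up flow above can be sketched as follows (an illustrative Python sketch; the device names and the data layout are assumptions, not the patent's API):

```python
import math

def closest_wake_device(devices, user_pos):
    """Pick, among the voice-wake-capable devices, the one closest to the user.

    devices: dict mapping device id -> (x, y) position in the
    whole-house coordinate system (the T devices that sent a first
    message). user_pos: the user's position, or the center position
    of multiple users. Returns the id of the device that should wake.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(devices, key=lambda d: dist(devices[d], user_pos))
```

The hub would then send the first indication message only to the returned device, so that a single device answers the wake word.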
  • the R second electronic devices include a first device and a second device, both of which support playing video or audio; the users include a first user.
  • the first device starts playing a first video or first audio and sends a first notification message to the hub device; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or first audio; after determining that the second device is the playback device closest to the first user, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device automatically stops playing and sends a first feedback message to the central device, the first feedback message including the identifier of the first video or first audio and the playback progress information; in response to receiving the first feedback message, the central device sends a second notification message to the second device, the second notification message including the identifier of the first video or first audio and the playback progress information; in response to receiving the second notification message, the second device continues playing the first video or first audio according to the playback progress information.
  • that is, while the first device is playing, when the user leaves the first device and approaches the second device, the second device automatically continues playing and the first device automatically stops.
  • This method enables the user to seamlessly and automatically switch between audio and video playback by leaving the first device and approaching the second device without carrying any device during the above-mentioned movement.
  • determining that the second device is the playback device closest to the first user includes: determining that the second device is the playback device closest to the first user among the devices that are not playing video or audio.
  • after the second device continues playing the first video or first audio according to the playback progress information, it sends a second feedback message to the hub device to indicate that the switch is complete.
  • according to the second feedback message, the central device learns that the first device has stopped playing and the second device has started playing, and updates the saved running information of the first device and the second device.
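The binding and handoff bookkeeping described above might look like the following hub-side sketch. Messages are simulated as direct method calls, and all names are hypothetical; a real system would exchange the notification and feedback messages over the wired or wireless links the patent describes:

```python
class PlaybackHub:
    """Minimal sketch of the hub-side playback handoff."""

    def __init__(self):
        self.bindings = {}   # user -> (media_id, device currently playing)
        self.progress = {}   # device -> playback progress in seconds

    def on_first_notification(self, user, media_id, device, progress=0):
        # A device reports that `user` started playing `media_id`;
        # the hub establishes the user-to-media binding.
        self.bindings[user] = (media_id, device)
        self.progress[device] = progress

    def handoff(self, user, new_device):
        # A closer playback device was found for `user`: the old
        # device stops and reports its progress; the new device is
        # told to resume the same media from that progress.
        media_id, old_device = self.bindings[user]
        pos = self.progress.pop(old_device)
        self.progress[new_device] = pos
        self.bindings[user] = (media_id, new_device)
        return media_id, pos
```

After the handoff, the second feedback message would confirm the switch and the hub's stored running information already reflects the new playing device.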
  • the R second electronic devices include a first device and a second device, both of which support playing video or audio.
  • the users include a first user and a second user; in response to an operation of the first user, the first device starts playing the first video or first audio and sends a first notification message to the central device; in response to receiving the first notification message, the central device establishes a binding relationship between the first user and the first video or first audio; in response to an operation of the second user, the first device starts playing a second video or second audio and sends a second notification message to the central device; in response to receiving the second notification message, the hub device establishes a binding relationship between the second user and the second video or second audio; after determining that the second device is the playback device closest to the second user, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device automatically stops playing and sends a first feedback message to the central device, the first feedback message including the identifier of the second video or second audio and the playback progress information.
  • that is, the second user uses the first device to play the second video or second audio; when the second user leaves the first device and approaches the second device, the second device automatically continues playing the second video or audio, and the first device automatically stops.
  • This method enables the user to seamlessly and automatically switch between audio and video playback by leaving the first device and approaching the second device without carrying any device during the above-mentioned movement.
  • determining that the second device is the playback device closest to the second user includes: determining that the second device is the playback device closest to the second user among the devices that are not playing video or audio.
  • after the second device continues playing the second video or second audio according to the playback progress information, it sends a second feedback message to the hub device to indicate that the switch is complete.
  • according to the second feedback message, the central device learns that the first device has stopped playing and the second device has started playing, and updates the saved running information of the first device and the second device.
  • the R second electronic devices include a first device and a second device, both supporting playing video or audio; the first device is located in a first room or first area, and the second device is located in a second room or second area; the users include a first user, who is located in the first room or first area; in response to an operation of the first user, the first device starts playing the first video or first audio and sends a first notification message to the hub device; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or first audio; after determining that the first user has left the first room or first area and entered the second room or second area, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device automatically stops playing and sends a first feedback message to the central device, the first feedback message including the identifier of the first video or first audio and the playback progress information; in response to receiving the first feedback message, the central device sends a second notification message to the second device, which continues playing the first video or first audio according to the playback progress information.
  • that is, playback automatically switches to the second device, which continues playing, and the first device automatically stops.
  • this method allows the user to seamlessly and automatically switch audio and video playback by leaving the first room or first area and entering the second room or second area, without carrying any equipment during the movement.
  • after determining that the first user has left the first room or first area and entered the second room or second area, the hub device also determines that the second device is the playback device closest to the first user.
  • after determining that the first user has left the first room or first area and entered the second room or second area, the hub device also determines that, among the devices not playing video or audio, the second device is the playback device closest to the first user.
  • after the second device continues playing the first video or first audio according to the playback progress information, it sends a second feedback message to the hub device to indicate that the switch is complete.
  • according to the second feedback message, the central device learns that the first device has stopped playing and the second device has started playing, and updates the saved running information of the first device and the second device.
  • the R second electronic devices include a first device and a second device, both supporting playing video or audio; the first device is located in a first room or first area, and the second device is located in a second room or second area; the users include a first user and a second user, both located in the first room or first area; in response to an operation of the first user, the first device starts playing the first video or first audio and sends a first notification message to the central device; in response to receiving the first notification message, the central device establishes a binding relationship between the first user and the first video or first audio; in response to an operation of the second user, the first device starts playing the second video or second audio and sends a second notification message to the central device; in response to receiving the second notification message, the central device establishes a binding relationship between the second user and the second video or second audio; after determining that the second user has left the first room or first area and entered the second room or second area, the hub device sends a first notification message to the first device, and playback of the second video or second audio is handed off to the second device.
  • that is, the second user uses the first device to play the second video or second audio.
  • when the second user leaves the first room or first area where the first device is located and enters the second room or second area where the second device is located, playback automatically switches to the second device, which continues playing the second video or second audio, and the first device automatically stops.
  • this method enables the user to seamlessly and automatically switch audio and video playback by leaving the first device and approaching the second device, without carrying any device during the movement.
  • after determining that the second user has left the first room or first area and entered the second room or second area, the hub device also determines that the second device is the playback device closest to the second user.
  • after determining that the second user has left the first room or first area and entered the second room or second area, the hub device also determines that, among the devices not playing video or audio, the second device is the playback device closest to the second user.
  • after the second device continues playing the second video or second audio according to the playback progress information, it sends a second feedback message to the hub device to indicate that the switch is complete.
  • according to the second feedback message, the central device learns that the first device has stopped playing and the second device has started playing, and updates the saved running information of the first device and the second device.
  • the R second electronic devices include a stereo system; the stereo system includes a first device that plays the stereo left channel and a second device that plays the stereo right channel; the hub device determines that the first device and the second device are located in the first room or first area, and obtains a first distance between the first device and the user and a second distance between the second device and the user; according to the first distance and the second distance, the hub device determines first volume information for the first device and second volume information for the second device; the hub device sends a first play message to the first device and a second play message to the second device, the first play message including the first volume information and the second play message including the second volume information; in response to receiving the first play message, the first device plays according to the first volume information; in response to receiving the second play message, the second device plays according to the second volume information.
  • that is, according to the distance between each of the two audio playback devices and the user, the volume played by the two devices is automatically compensated: the device farther from the user receives the larger volume compensation value, so that the user receives the same playback volume from both devices, improving the user's experience of listening to stereo audio.
  • the stereo system also includes a third device that plays the stereo auxiliary sound; the stereo sound includes the stereo left channel, the stereo right channel, and the stereo auxiliary sound; the central device also determines that the third device is located in the first room or first area and obtains a third distance between the third device and the user; based on the first distance, the second distance, and the third distance, the hub device determines the first volume information for the first device, the second volume information for the second device, and third volume information for the third device; the hub device sends the first play message, the second play message, and a third play message to the first, second, and third devices respectively, the third play message including the third volume information; in response to receiving the third play message, the third device plays according to the third volume information.
  • the R second electronic devices include a stereo system; the stereo system includes a first device that plays the stereo left channel and a second device that plays the stereo right channel; the hub device determines that the first device is located in a first area, that the second device is located in a second area, and that the first area is adjacent to the second area; the hub device also obtains the first distance between the first device and the user and the second distance between the second device and the user; according to the first distance and the second distance, the hub device determines the first volume information for the first device and the second volume information for the second device; the hub device sends a first play message to the first device and a second play message to the second device, the first play message including the first volume information and the second play message including the second volume information; in response to receiving the first play message, the first device plays according to the first volume information; in response to receiving the second play message, the second device plays according to the second volume information.
  • the two audio playback devices that play the stereo left channel and the stereo right channel are located in adjacent areas; for example, where the living room and dining room are connected, there is little separation or obstruction between them and the spaces are weakly independent, so devices in the same stereo system can be located separately in the living room and the dining room.
  • the volumes played by the two audio playback devices that play the stereo left channel and the stereo right channel are automatically compensated according to the distance between each device and the user: the device farther from the user receives the larger volume compensation value, so that the user receives the same playback volume from both devices, improving the user's experience of listening to stereo audio.
  • the stereo system also includes a third device that plays the stereo auxiliary sound; the stereo sound includes the stereo left channel, the stereo right channel, and the stereo auxiliary sound; the central device also determines that the third device is located in the first area, the second area, or a third area adjacent to the second area, and obtains the third distance between the third device and the user; based on the first distance, the second distance, and the third distance, the hub device determines the first volume information for the first device, the second volume information for the second device, and the third volume information for the third device; the hub device sends the first play message, the second play message, and the third play message to the first, second, and third devices respectively, the third play message including the third volume information; in response to receiving the third play message, the third device plays according to the third volume information.
  • according to the first distance and the second distance, the hub device determines a first volume attenuation value at which the playback volume of the first device reaches the user and a second volume attenuation value at which the playback volume of the second device reaches the user; according to the first volume attenuation value and the second volume attenuation value, the hub device determines the first volume information for the first device and the second volume information for the second device.
  • determining the volume attenuation values according to the first distance and the second distance includes computing, for each device, the attenuation a_p = K + DL_w + a_e, where:
  • a_p is the total volume attenuation value;
  • K is the distance attenuation factor, indicating the volume attenuation caused by distance, which is related to the distance between the device and the user;
  • DL_w is the directivity factor, indicating the volume attenuation caused by the direction of audio-signal transmission;
  • a_e is the air absorption attenuation, i.e. the volume attenuation caused by the air absorbing the audio as it propagates.
  • that is, the playback volume of the audio playback device is compensated according to the attenuation experienced during volume transmission.
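Assuming the compensation works in decibels, a minimal sketch of deriving per-device playback volumes from the attenuation values a_p might look like this (illustrative only; the patent does not spell out this exact formula, and the target level and names are assumptions):

```python
def playback_volumes(target_db, attenuations):
    """Compensate each device's playback volume for propagation loss.

    attenuations: dict mapping device -> total attenuation a_p (in dB)
    from that device to the user, e.g. a_p = K + DL_w + a_e as above.
    Each device plays at target_db + a_p, so the level arriving at
    the user is the same target_db for every channel.
    """
    return {dev: target_db + a_p for dev, a_p in attenuations.items()}
```

Because the farther device has the larger a_p, it automatically receives the larger compensation, which matches the behavior described above.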
  • the R second electronic devices include the first device; the hub device determines that the first device is located in a first room or first area and that a first user, but no second user, is in the first room or first area; the hub device also determines that the first device has entered private mode and is playing the first video; when at least one second user is detected entering the first room or first area, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device automatically stops playing and automatically switches to a screen saver; when it is detected that there is no second user in the first room or first area, the hub device sends a second notification message to the first device; in response to receiving the second notification message, the first device automatically resumes playing the first video.
  • that is, while a device has private mode turned on, if the central device detects that a user of a certain category (such as a child) enters the room or area where the device is located, playback is automatically paused and switched to a screen saver; when the user of that category (such as a child) leaves the room or area where the device is located, the device automatically switches back to the original video or audio and resumes playback; no manual operation is required throughout the process.
  • after the first device automatically stops playing and switches to the screen saver, it sends a first feedback message to the hub device; in response to receiving the first feedback message, the hub device learns that the first device has stopped playing and switched to the screen saver.
  • after the first device automatically resumes playing the first video, it sends a second feedback message to the hub device; in response to receiving the second feedback message, the hub device learns that the first device has resumed playing.
  • the R second electronic devices include the first device; the hub device determines that the first device is located in a first room or first area and that a first user, but no other user, is in the first room or first area; the hub device also determines that the first device has entered private mode and is playing the first video; after detecting that a user other than the first user enters the first room or first area, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device automatically stops playing and automatically switches to a screen saver; when it is detected that there is no user other than the first user in the first room or first area, the hub device sends a second notification message to the first device; in response to receiving the second notification message, the first device automatically resumes playing the first video.
  • that is, while a device has private mode turned on, if the central device detects that a user other than the first user enters the room or area where the device is located, playback is automatically paused and switched to a screen saver; when the central device detects that the users other than the first user have left the room or area, the device automatically switches back to the original video or audio and resumes playback; no manual operation is required throughout the process.
  • after the first device automatically stops playing and switches to the screen saver, it sends a first feedback message to the hub device; in response to receiving the first feedback message, the hub device learns that the first device has stopped playing and switched to the screen saver.
  • after the first device automatically resumes playing the first video, it sends a second feedback message to the hub device; in response to receiving the second feedback message, the hub device learns that the first device has resumed playing.
• The R second electronic devices include the first device. The hub device acquires that the first device is playing a first video, that the first device is located in a first room or first area, the position of the user in the first room or first area, and a first viewing area corresponding to the first device. In response to detecting that no user is present in the first viewing area within a first preset time period, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device stops playing the first video. In response to detecting that a user is present in the first viewing area within a second preset time period, the hub device sends a second notification message to the first device; in response to receiving the second notification message, the first device resumes playing the first video.
• The hub device acquires the relative position between the user and the device: if the user leaves the viewing area of the first device, the first device automatically stops or pauses playback; if the user enters the viewing area of the first device, the first device automatically continues playback. No manual operation by the user is required throughout the process.
• In response to detecting that the distance between at least one user and the first device remains less than a first preset distance within a third preset time period, the hub device sends a third notification message to the first device; in response to receiving the third notification message, the first device stops playing the first video and outputs reminder information. In response to detecting that, within a subsequent preset time period, no user is at a distance from the first device less than the first preset distance and a user is present in the first viewing area, the hub device sends a fourth notification message to the first device; in response to receiving the fourth notification message, the first device resumes playing the first video.
• The hub device acquires the relative position between the user and the device: if a user is too close to the first device, the first device automatically stops or pauses playback and outputs a reminder; if a user is within the viewing area of the first device and no user is too close to the first device, the first device automatically resumes playback. No manual operation by the user is required throughout the process.
• In response to detecting that, within a fifth preset time period, no user is at a distance from the first device less than the first preset distance and no user is present in the first viewing area, the hub device sends a fifth notification message to the first device; in response to receiving the fifth notification message, the first device automatically enters a sleep state.
• The hub device acquires the relative position between the user and the device: if the user leaves the viewing area of the first device for a long time, the first device automatically sleeps, which saves power and protects privacy. No manual operation by the user is required throughout the process.
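The presence and proximity rules in the bullets above can be sketched as a single hub-side decision function. This is an illustrative reading, not the patent's implementation: the thresholds, the return-value names, and the idea of collapsing the first/third/fifth preset time periods into elapsed-time arguments are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ViewingPolicy:
    stop_after_s: float = 10.0    # "first preset time period": viewing area empty
    near_limit_m: float = 1.0     # "first preset distance": user too close
    sleep_after_s: float = 300.0  # "fifth preset time period": absent and far

def decide(policy, in_viewing_area, nearest_user_m, absent_s):
    """Return the notification the hub would send to the playback device.
    nearest_user_m is None when no user is detected near the device."""
    if nearest_user_m is not None and nearest_user_m < policy.near_limit_m:
        return "stop_and_remind"      # third notification: a user is too close
    if not in_viewing_area:
        if absent_s >= policy.sleep_after_s:
            return "sleep"            # fifth notification: auto-sleep
        if absent_s >= policy.stop_after_s:
            return "stop"             # first notification: pause playback
        return "keep_playing"         # grace period not yet elapsed
    return "resume"                   # second/fourth notification
```

The ordering matters: the too-close reminder takes priority over the viewing-area check, mirroring how the reminder embodiment overrides normal resume behavior.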
• The first electronic device includes an inertial measurement unit (IMU) module; the R second electronic devices include the first device, and the first device does not include an ultra-wideband module. Based on the first electronic device's position measurement of the mobile device, the measurement and conversion of the human body position, and the communication between the hub device and the first electronic device, the hub device obtains the position information of the R second electronic devices in the whole-house coordinate system provided by the hub device.
• Obtaining the position information of the devices and of the user includes: the first ultra-wideband module establishes a first coordinate system, and, based on the first electronic device's position measurement of the mobile device and the mobile device's marking of the first device, the first electronic device obtains the first coordinates of the first device in the first coordinate system. The millimeter-wave radar module establishes a second coordinate system, and, based on the first electronic device's sensing of the human body, the first electronic device obtains the second coordinates of the user in the second coordinate system. Through the conversion between the first coordinate system and the second coordinate system, the first electronic device obtains the third coordinates, that is, the second coordinates expressed in the first coordinate system. The first electronic device sends the relevant coordinates in the first coordinate system to the hub device, the relevant coordinates including the first coordinates and the third coordinates. Based on the output of the IMU module, combined with the parallelism between the whole-house coordinate system and the geographic coordinate system, the hub device converts the relevant coordinates in the first coordinate system into coordinates in the whole-house coordinate system, and thereby obtains the position information of the R second electronic devices and the position information of the user.
• The first electronic device marks a device that does not include an ultra-wideband module at least once through the mobile device that includes the ultra-wideband module, so as to obtain that device's position information.
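The chain of conversions described above (radar frame to UWB frame, then UWB frame to whole-house frame) amounts to repeated 2-D rigid transforms. A minimal sketch, assuming each pair of frames is related by a known yaw angle and origin offset; the actual extrinsics between the radar and UWB modules, and the IMU-derived alignment with the whole-house frame, are device-specific values not given in the text.

```python
import math

def to_frame(point, yaw_rad, origin):
    """Rotate a 2-D point by yaw_rad, then translate it by origin,
    expressing it in the target coordinate system."""
    x, y = point
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x - s * y + origin[0], s * x + c * y + origin[1])

# second coordinates (user, radar frame) -> third coordinates (UWB frame)
user_uwb = to_frame((1.0, 0.0), math.pi / 2, (0.0, 0.0))

# UWB-frame coordinates -> whole-house frame, using the IMU-derived yaw
# between the first coordinate system and the geographic-parallel
# whole-house coordinate system (0.0 here as an illustrative value)
user_house = to_frame(user_uwb, 0.0, (2.0, 3.0))
```

The same `to_frame` call converts both the device coordinates (first coordinates) and the user coordinates (third coordinates) that the first electronic device reports to the hub.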
• The present application provides an automatic control method based on human body perception, which is applied to a communication system based on human body perception.
• The system includes a hub device, a first electronic device, and R second electronic devices; any two of the hub device, the first electronic device, and any one of the R second electronic devices communicate with each other through wired or wireless communication. The first electronic device includes a first ultra-wideband module and a millimeter-wave radar module; the R second electronic devices include a mobile device, and the mobile device includes a second ultra-wideband module; R is a positive integer greater than or equal to 1.
• The method includes: based on the first electronic device's position measurement of the mobile device, the measurement and conversion of the position of the human body, and the communication between the hub device and the first electronic device, the hub device obtains, in the whole-house coordinate system it provides, the position information of the R second electronic devices and the position information of the user; according to the position information of the user and the position information of the R second electronic devices, the hub device controls or notifies at least one of the R second electronic devices to execute a preset operation; in response to the control or notification of the hub device, the at least one second electronic device executes the preset operation.
• In this way, the first electronic device obtains the position of any second electronic device and the position of any user in the whole house through the first ultra-wideband module and the millimeter-wave radar module, and a second electronic device is controlled or notified to perform a preset operation according to those positions.
• The IoT device (the second electronic device) automatically performs a given operation without the user performing any operation or carrying any electronic device, which is convenient and fast.
• Based on the first electronic device's position measurement of the mobile device, the measurement and conversion of the human body position, and the communication between the hub device and the first electronic device, the hub device periodically or in real time obtains the position information of the R second electronic devices and the position information of the user in the whole-house coordinate system. In this way, the hub device can control the second electronic devices in real time, improving the efficiency of their automatic control.
• When there is one user, the location information of the user is that user's location information.
• When there are multiple users, the location information of the user is the location information of the center of the multiple users.
• A user's position is represented by the user's coordinates in a coordinate system (such as the whole-house coordinate system).
• The center position of multiple users is the average of the coordinates of the multiple users.
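The "center position" described above can be sketched as a plain coordinate-wise average; the function name and the sample coordinates are illustrative, not from the patent.

```python
def center_position(coords):
    """Center of several users: the coordinate-wise average of their positions."""
    n = len(coords)
    return tuple(sum(p[i] for p in coords) / n for i in range(len(coords[0])))

# two users at opposite corners of a room: the center is the midpoint
center = center_position([(0.0, 0.0), (4.0, 2.0)])
```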
• T second electronic devices among the R second electronic devices support voice wake-up. In response to receiving a wake-up voice, the T second electronic devices send a first message to the hub device; in response to receiving at least one first message, the hub device obtains the position information of the T second electronic devices and the position information of the user in the whole-house coordinate system. The hub device determines, among the T second electronic devices, the device closest to the user and sends a first indication message to that device; in response to receiving the first indication message, the device closest to the user wakes up. T is a positive integer greater than or equal to 1 and less than or equal to R.
• In response to receiving at least one first message, the hub device obtains the position information of the T second electronic devices and the position information of the user in the whole-house coordinate system, which includes: in response to receiving one first message, the hub device obtains, in the whole-house coordinate system, the position information of the second electronic device that sent the first message and the first room or first area where that device is located; the hub device then obtains, in the whole-house coordinate system, the position information of the user in the first room or first area.
• When the hub device receives one first message, it obtains the first room or first area where the second electronic device corresponding to the first message is located, obtains the position information of the users in that first room or first area, and selects the device closest to the user in that room or area to respond to the wake-up.
• In response to receiving at least one first message, the hub device obtains the position information of the T second electronic devices and the position information of the user in the whole-house coordinate system, which includes: in response to receiving a plurality of first messages within a preset period of time, the hub device obtains the position information of the T second electronic devices and the position information of the user in the whole-house coordinate system.
• The hub device receives the first messages sent by all of the second electronic devices (or all of those capable of sending the first message), and determines, from the T second electronic devices, the device closest to the user to respond to the wake-up.
• Based on the communication between the hub device and the T second electronic devices among the R second electronic devices, the hub device obtains in advance that the T second electronic devices support voice wake-up; T is a positive integer greater than or equal to 1 and less than or equal to R; the user is one user or multiple users.
• When there are multiple users, the device closest to the user is the device closest to the center of the multiple users.
• The center position of multiple users is the average of the coordinates of the multiple users.
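The wake-up arbitration above reduces to a nearest-neighbour selection over the wake-capable devices. A hedged sketch: the device names, coordinates, and dictionary layout are invented for illustration, and for several users `user_pos` would be their center position.

```python
import math

def nearest_device(devices, user_pos):
    """Among voice-wake-capable devices, pick the one closest to the user.
    devices maps a device id to its whole-house (x, y) coordinates."""
    return min(
        devices,
        key=lambda d: math.hypot(devices[d][0] - user_pos[0],
                                 devices[d][1] - user_pos[1]),
    )

winner = nearest_device(
    {"speaker_kitchen": (1.0, 1.0), "tv_livingroom": (6.0, 2.0)},
    (2.0, 1.5),
)
# only `winner` receives the first indication message and wakes up
```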
• The R second electronic devices include a first device and a second device, and the first device and the second device support playing video or audio; the users include a first user. The first device starts to play a first video or first audio and sends a first notification message to the hub device; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or first audio. After acquiring that the second device is the playback device closest to the first user, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device automatically stops playing and sends a first feedback message to the hub device, the first feedback message including the identifier of the first video or first audio and playback progress information. In response to receiving the first feedback message, the hub device sends a second notification message to the second device, the second notification message including the identifier of the first video or first audio and the playback progress information; in response to receiving the second notification message, the second device continues playing the first video or first audio according to the playback progress information.
• While the first device is playing, in response to the user leaving the first device and approaching the second device, the second device automatically continues playing and the first device automatically stops playing.
• This method enables the user to switch audio and video playback seamlessly and automatically by leaving the first device and approaching the second device, without carrying any device during the movement.
• Acquiring that the second device is the playback device closest to the first user includes: acquiring that, among the devices not playing video or audio, the second device is the playback device closest to the first user.
• After the second device continues playing the first video or first audio according to the playback progress information, it sends a second feedback message to the hub device to indicate that the switch is complete.
• According to the second feedback message, the hub device learns that the first device has stopped playing and the second device has started playing, and updates the saved running information of the first device and the second device.
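The stop/feedback/continue exchange above can be sketched as follows. Class and method names are invented; the point is only the order of operations: the old device stops and reports the media id plus progress (first feedback message), the hub updates its saved running information, and the new device resumes from that progress (second notification message).

```python
class Player:
    def __init__(self, name):
        self.name, self.media_id, self.progress = name, None, 0

    def stop(self):
        """Stop playback and return (media_id, progress) as the feedback payload."""
        info, self.media_id = (self.media_id, self.progress), None
        return info

    def play(self, media_id, progress):
        self.media_id, self.progress = media_id, progress

class Hub:
    def __init__(self):
        self.bindings = {}  # media id -> name of the device now playing it

    def handoff(self, old_dev, new_dev):
        media_id, progress = old_dev.stop()       # first feedback message
        self.bindings[media_id] = new_dev.name    # update saved running info
        new_dev.play(media_id, progress)          # second notification message

hub, tv, speaker = Hub(), Player("tv"), Player("speaker")
tv.play("video_1", 42)
hub.handoff(tv, speaker)  # speaker resumes video_1 at progress 42; tv stops
```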
• The R second electronic devices include a first device and a second device, and the first device and the second device support playing video or audio; the users include a first user and a second user. In response to an operation of the first user, the first device starts to play a first video or first audio and sends a first notification message to the hub device; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or first audio. In response to an operation of the second user, the first device starts to play a second video or second audio and sends a second notification message to the hub device; in response to receiving the second notification message, the hub device establishes a binding relationship between the second user and the second video or second audio. After acquiring that the second device is the playback device closest to the second user, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device automatically stops playing and sends a first feedback message to the hub device, the first feedback message including the identifier of the second video or second audio and playback progress information.
• The second user uses the first device to play the second video or second audio. When the second user leaves the first device and approaches the second device, the second device automatically continues playing the second video or second audio, and the first device automatically stops playing.
• This method enables the user to switch audio and video playback seamlessly and automatically by leaving the first device and approaching the second device, without carrying any device during the movement.
• Acquiring that the second device is the playback device closest to the second user includes: acquiring that, among the devices not playing video or audio, the second device is the playback device closest to the second user.
• After the second device continues playing the second video or second audio according to the playback progress information, it sends a second feedback message to the hub device to indicate that the switch is complete.
• According to the second feedback message, the hub device learns that the first device has stopped playing and the second device has started playing, and updates the saved running information of the first device and the second device.
• The R second electronic devices include a first device and a second device, the first device and the second device support playing video or audio, the first device is located in a first room or first area, and the second device is located in a second room or second area. The users include a first user, who is located in the first room or first area. In response to an operation of the first user, the first device starts to play a first video or first audio and sends a first notification message to the hub device; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or first audio. After acquiring that the first user has left the first room or first area and entered the second room or second area, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device automatically stops playing and sends a first feedback message to the hub device, the first feedback message including the identifier of the first video or first audio and playback progress information. In response to receiving the first feedback message, the hub device sends a second notification message to the second device with the identifier and the playback progress information.
• In this way, after the first user leaves the first room or first area where the first device is located and enters the second room or second area where the second device is located, playback automatically switches: the second device automatically continues playing, and the first device automatically stops playing.
• This method enables the user to switch audio and video playback seamlessly and automatically by leaving the first room or first area and entering the second room or second area, without carrying any equipment during the movement.
• After obtaining that the first user has left the first room or first area and entered the second room or second area, the hub device also obtains that the second device is the playback device closest to the first user.
• After obtaining that the first user has left the first room or first area and entered the second room or second area, the hub device also obtains that, among the devices not playing video or audio, the second device is the playback device closest to the first user.
• After the second device continues playing the first video or first audio according to the playback progress information, it sends a second feedback message to the hub device to indicate that the switch is complete.
• According to the second feedback message, the hub device learns that the first device has stopped playing and the second device has started playing, and updates the saved running information of the first device and the second device.
• The R second electronic devices include a first device and a second device, the first device and the second device support playing video or audio, the first device is located in a first room or first area, and the second device is located in a second room or second area. The users include a first user and a second user, both located in the first room or first area. In response to an operation of the first user, the first device starts to play a first video or first audio and sends a first notification message to the hub device; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or first audio. In response to an operation of the second user, the first device starts to play a second video or second audio and sends a second notification message to the hub device; in response to receiving the second notification message, the hub device establishes a binding relationship between the second user and the second video or second audio. After acquiring that the second user has left the first room or first area and entered the second room or second area, the hub device notifies the first device to stop playing and notifies the second device to continue playing the second video or second audio according to the playback progress information.
• The second user uses the first device to play the second video or second audio.
• After the second user leaves the first room or first area where the first device is located and enters the second room or second area where the second device is located, playback automatically switches: the second device automatically continues playing the second video or second audio, and the first device automatically stops playing.
• This method enables the user to switch audio and video playback seamlessly and automatically by leaving the first device and approaching the second device, without carrying any device during the movement.
• After obtaining that the second user has left the first room or first area and entered the second room or second area, the hub device also obtains that the second device is the playback device closest to the second user.
• After obtaining that the second user has left the first room or first area and entered the second room or second area, the hub device also obtains that, among the devices not playing video or audio, the second device is the playback device closest to the second user.
• After the second device continues playing the second video or second audio according to the playback progress information, it sends a second feedback message to the hub device to indicate that the switch is complete.
• According to the second feedback message, the hub device learns that the first device has stopped playing and the second device has started playing, and updates the saved running information of the first device and the second device.
• The R second electronic devices include a stereo system, the stereo system including a first device and a second device; the first device plays the stereo left channel and the second device plays the stereo right channel. The hub device obtains that the first device and the second device are located in a first room or first area, a first distance between the first device and the user, and a second distance between the second device and the user. According to the first distance and the second distance, the hub device determines first volume information for the first device and second volume information for the second device. The hub device sends a first play message to the first device and a second play message to the second device, the first play message including the first volume information and the second play message including the second volume information. In response to receiving the first play message, the first device plays according to the first volume information; in response to receiving the second play message, the second device plays according to the second volume information.
• The volumes played by the two audio playback devices are automatically compensated: the volume compensation value of the audio playback device farther from the user is larger, so that the user perceives the same playback volume from both devices, improving the user's experience of listening to stereo audio.
• The stereo system also includes a third device, which plays stereo auxiliary sound; the stereo sound includes the stereo left channel, the stereo right channel, and the stereo auxiliary sound. The hub device also obtains that the third device is located in the first room or first area, and a third distance between the third device and the user. According to the first distance, the second distance, and the third distance, the hub device determines the first volume information for the first device, the second volume information for the second device, and third volume information for the third device. The hub device sends the first play message, the second play message, and a third play message to the first device, the second device, and the third device respectively, the third play message including the third volume information; in response to receiving the third play message, the third device plays according to the third volume information.
• The R second electronic devices include a stereo system, the stereo system including a first device and a second device; the first device plays the stereo left channel and the second device plays the stereo right channel. The hub device obtains that the first device is located in a first area, that the second device is located in a second area, and that the first area is adjacent to the second area. The hub device also obtains a first distance between the first device and the user and a second distance between the second device and the user. According to the first distance and the second distance, the hub device determines first volume information for the first device and second volume information for the second device. The hub device sends a first play message to the first device and a second play message to the second device, the first play message including the first volume information and the second play message including the second volume information. In response to receiving the first play message, the first device plays according to the first volume information; in response to receiving the second play message, the second device plays according to the second volume information.
• The two audio playback devices playing the stereo left channel and the stereo right channel are located in adjacent areas. For example, where the living room and the dining room are connected, with little separation or obstruction between them and weak spatial independence, the devices of the same stereo system can be located in the living room and the dining room respectively.
• The volumes played by the two audio playback devices playing the stereo left channel and the stereo right channel are automatically compensated according to the distance between each device and the user.
• The volume compensation value of the audio playback device farther from the user is larger, so that the user perceives the same playback volume from both devices, improving the user's experience of listening to stereo audio.
• The stereo system also includes a third device, which plays stereo auxiliary sound; the stereo sound includes the stereo left channel, the stereo right channel, and the stereo auxiliary sound. The hub device also obtains that the third device is located in the first area, the second area, or a third area adjacent to the second area, and a third distance between the third device and the user. According to the first distance, the second distance, and the third distance, the hub device determines the first volume information for the first device, the second volume information for the second device, and third volume information for the third device. The hub device sends the first play message, the second play message, and a third play message to the first device, the second device, and the third device respectively, the third play message including the third volume information; in response to receiving the third play message, the third device plays according to the third volume information.
• According to the first distance and the second distance, the hub device determines a first volume attenuation value with which the playback volume of the first device reaches the user and a second volume attenuation value with which the playback volume of the second device reaches the user; according to the first volume attenuation value and the second volume attenuation value, the hub device determines the first volume information for the first device and the second volume information for the second device.
• Determining, according to the first distance and the second distance, the first volume attenuation value with which the playback volume of the first device reaches the user and the second volume attenuation value with which the playback volume of the second device reaches the user involves the following quantities:
• a_p is the volume attenuation value;
• K is the distance attenuation factor, indicating the volume attenuation caused by distance, which is related to the distance between the device and the user;
• DL_w is the directivity factor, indicating the volume attenuation caused by the direction of audio signal transmission;
• a_e is the air absorption attenuation, indicating the volume attenuation caused by the air absorbing the audio as it is transmitted through the air.
• The playback volume of the audio playback device is compensated according to the attenuation incurred as the sound is transmitted to the user.
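The bullets above name three attenuation terms (K, DL_w, a_e) without stating how they combine. The sketch below assumes they add in decibels, as in standard outdoor sound-propagation models, with K modeled as spherical spreading from a 1 m reference; the formula shape and every constant here are illustrative assumptions, not values from the patent.

```python
import math

def attenuation_db(distance_m, directivity_db=0.0, air_db_per_m=0.005):
    """Total volume attenuation a_p, assumed to be K + DL_w + a_e in dB.
    K: distance attenuation (spherical spreading from a 1 m reference).
    DL_w: directivity factor.  a_e: air absorption over the path."""
    k = 20.0 * math.log10(max(distance_m, 1.0))
    return k + directivity_db + air_db_per_m * distance_m

def compensated_volumes(base_db, d_left_m, d_right_m):
    """Raise each channel by its own attenuation so both channels reach the
    user at the same level: the farther device gets the larger compensation."""
    return (base_db + attenuation_db(d_left_m),
            base_db + attenuation_db(d_right_m))

left, right = compensated_volumes(40.0, 2.0, 4.0)  # right channel is farther
```

Doubling the distance adds roughly 6 dB of spreading loss under this model, so the farther speaker is driven about 6 dB louder, matching the "larger compensation for the farther device" behavior described above.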
• The R second electronic devices include the first device. The hub device acquires that the first device is located in a first room or first area, and that a first user, but no second user, is present in the first room or first area; the hub device also acquires that the first device has entered private mode and is playing a first video. When at least one second user is detected entering the first room or first area, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device automatically stops playing and automatically switches to a screen saver. When it is detected that no second user remains in the first room or first area, the hub device sends a second notification message to the first device; in response to receiving the second notification message, the first device automatically resumes playing the first video.
• With private mode enabled on the device, if the hub device detects that a certain type of user (such as a child) enters the room or area where the device is located, the device automatically pauses playback and switches to the screen saver; when the hub device detects that this type of user (such as a child) has left the room or area where the device is located, the device automatically switches back to the original video or audio and resumes playback. No manual operation is required throughout the process.
• A certain type of user is, for example, a child.
• After the first device automatically stops playing and automatically switches to the screen saver, the first device sends a first feedback message to the hub device; in response to receiving the first feedback message, the hub device learns that the first device has stopped playing and switched to the screen saver.
• After the first device automatically resumes playing the first video, the first device sends a second feedback message to the hub device; in response to receiving the second feedback message, the hub device learns that the first device has resumed playing.
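The private-mode rule in the bullets above is essentially a set-membership check on who is in the room. A minimal sketch; the user labels and the allowed set are invented for illustration, and the "blocked" viewers could equally be a category such as children rather than anyone other than the owner.

```python
def private_mode_action(users_in_room, allowed=("owner",)):
    """Pause and switch to the screen saver while any non-allowed viewer is
    present; resume once only allowed viewers remain."""
    if set(users_in_room) - set(allowed):
        return "pause_and_screensaver"  # triggers the first notification message
    return "resume"                     # triggers the second notification message

action = private_mode_action(["owner", "child"])  # -> "pause_and_screensaver"
```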
  • the R second electronic devices include the first device; the hub device acquires that the first device is located in the first room or the first area, and the first room or the second There is a first user in an area, but no second user; the hub device also acquires that the first device has entered the private mode and is playing the first video; after detecting that a user other than the first user enters the first room or the first area, The hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device automatically stops playing and automatically switches to a screen saver; when it is detected that there is no first user in the first room or the first area user, the hub device sends a second notification message to the first device; in response to receiving the second notification message, the first device automatically resumes playing the first video.
  • Under the condition that the first device has turned on private mode, if the central device detects that a user other than the first user enters the room or area where the device is located, the device automatically pauses playback and switches to the screen saver; if the central device detects that every user other than the first user has left the room or area where the device is located, the device automatically switches back to the original video or audio and resumes playback; no manual operation is required throughout the process.
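The private-mode pause/resume behaviour described in the embodiments above can be sketched as a small state machine driven by the hub's presence-detection events. This is an illustrative sketch only; the class, method names, and event interface are assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class PrivateModePlayer:
    # Sketch of one device's state as tracked by the hub; names are illustrative.
    private_mode: bool = True
    playing: bool = True
    on_screensaver: bool = False
    restricted_present: int = 0   # users other than the first user in the room

    def notify_user_entered(self, is_restricted: bool) -> None:
        # Hub detected a user entering the room or area where the device is located.
        if is_restricted:
            self.restricted_present += 1
        if self.private_mode and self.restricted_present > 0 and self.playing:
            # first notification message: stop playing, switch to screen saver
            self.playing = False
            self.on_screensaver = True

    def notify_user_left(self, is_restricted: bool) -> None:
        # Hub detected a user leaving the room or area where the device is located.
        if is_restricted:
            self.restricted_present = max(0, self.restricted_present - 1)
        if self.private_mode and self.restricted_present == 0 and self.on_screensaver:
            # second notification message: resume playing the first video
            self.on_screensaver = False
            self.playing = True
```

The counter handles the case where several restricted users enter and leave at different times: playback resumes only once none remain, matching the "no manual operation" claim above.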
  • After the first device automatically stops playing and automatically switches to the screen saver, the first device sends a first feedback message to the hub device; in response to receiving the first feedback message, the hub device learns that the first device has stopped playing and switched to the screen saver.
  • After the first device automatically resumes playing the first video, the first device sends a second feedback message to the hub device; in response to receiving the second feedback message, the hub device learns that the first device has resumed playing.
  • The R second electronic devices include the first device. The hub device acquires that the first device is playing the first video; the hub device also acquires the location of the first device, that the first device is located in the first room or the first area, the user's location in the first room or the first area, and the first viewing area corresponding to the first device. In response to detecting that there is no user in the first viewing area within a first preset time period, the hub device sends a first notification message to the first device; in response to receiving the first notification message, the first device stops playing the first video. In response to detecting that there is a user in the first viewing area within a second preset time period, the hub device sends a second notification message to the first device; in response to receiving the second notification message, the first device resumes playing the first video.
  • The hub device acquires the relative position between the user and the device. If the user leaves the viewing area of the first device, the first device automatically stops or pauses playback; if the user enters the viewing area of the first device, the first device automatically continues playing. The whole process requires no manual operation by the user.
  • In response to detecting that the distance between at least one user and the first device is less than a first preset distance within a third preset time period, the hub device sends a third notification message to the first device; in response to receiving the third notification message, the first device stops playing the first video and outputs reminder information. In response to detecting that, within a fourth preset time period, no user is closer to the first device than the first preset distance and there is a user in the first viewing area, the hub device sends a fourth notification message to the first device; in response to receiving the fourth notification message, the first device resumes playing the first video.
  • The central device acquires the relative position between the user and the device. If the user is too close to the first device, the first device automatically stops or pauses playback and outputs reminder information; if the user is in the viewing area of the first device and is not too close to the first device, the first device automatically resumes playing. The whole process requires no manual operation by the user.
  • In response to detecting that, within a fifth preset time period, no user is closer to the first device than the first preset distance and there is no user in the first viewing area, the hub device sends a fifth notification message to the first device; in response to receiving the fifth notification message, the first device automatically sleeps.
  • The central device obtains the relative position of the user and the device. If the user leaves the viewing area of the first device for a long time, the first device automatically sleeps, saving power and protecting privacy; no manual operation by the user is required throughout the process.
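The viewing-area and distance conditions above (the preset time periods and the first preset distance) amount to a decision rule over presence observations. A minimal sketch follows; the threshold values are made up, since the patent leaves the preset time periods and distances unspecified:

```python
def playback_decision(now, last_seen_in_area, min_user_distance,
                      pause_timeout=30.0, sleep_timeout=600.0,
                      too_close=0.5):
    """Decide a playback action from presence observations.

    now / last_seen_in_area: timestamps in seconds.
    min_user_distance: distance (m) of the nearest user, or None if no
    user is detected. Thresholds are illustrative assumptions.
    Returns one of: 'play', 'pause', 'remind', 'sleep'.
    """
    if min_user_distance is not None and min_user_distance < too_close:
        return 'remind'   # user too close: stop playing and output a reminder
    absent = now - last_seen_in_area
    if absent >= sleep_timeout:
        return 'sleep'    # viewing area empty for a long time: device sleeps
    if absent >= pause_timeout:
        return 'pause'    # viewing area empty past the preset period: stop
    return 'play'         # user present in the viewing area: (resume) playing
```

The hub would evaluate this rule periodically and send the corresponding notification message only when the action changes.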
  • The first electronic device includes an inertial measurement unit module; the R second electronic devices include the first device, and the first device does not include an ultra-wideband module. Based on the first electronic device's position measurement of the mobile device, the measurement and conversion of the human body position, and the communication between the central device and the first electronic device, the central device obtains the position information of the R second electronic devices in the whole-room coordinate system provided by the central device.
  • Obtaining the location information of the user includes the following. The first ultra-wideband module establishes a first coordinate system; based on the first electronic device's position measurement of the mobile device and the mobile device's marking of the first device, the first electronic device obtains the first coordinate of the first device in the first coordinate system. The millimeter-wave radar module establishes a second coordinate system; based on the first electronic device's perception of the human body, the first electronic device obtains the second coordinate of the user in the second coordinate system. Through the conversion between the first coordinate system and the second coordinate system, the first electronic device obtains the third coordinate, that is, the second coordinate expressed in the first coordinate system. The first electronic device sends the relevant coordinates in the first coordinate system to the central device, where the relevant coordinates include the first coordinate and the third coordinate. Based on the output of the inertial measurement unit module, combined with the parallelism between the whole-room coordinate system and the geographic coordinate system, the central device converts the relevant coordinates in the first coordinate system into coordinates in the whole-room coordinate system, and thereby obtains the location information of the R second electronic devices and the location information of the user.
  • The first device is marked at least once through the mobile device that includes the ultra-wideband module, so that its location information can be obtained.
  • the present application provides a central device.
  • The central device communicates with the first electronic device and any one of the R second electronic devices through wired communication or wireless communication;
  • the first electronic device includes a first ultra-wideband module and a millimeter-wave radar module;
  • the R second electronic devices include a mobile device, and the mobile device includes a second ultra-wideband module;
  • R is a positive integer greater than or equal to 1.
  • Based on the first electronic device's position measurement of the mobile device, the measurement and conversion of the human body position, and the communication between the central device and the first electronic device, the central device acquires the location information of the R second electronic devices and the location information of the user in the whole-room coordinate system provided by the central device; according to the location information of the user and the location information of the R second electronic devices, the central device controls or notifies at least one of the R second electronic devices to perform a preset operation.
  • the central device controls or notifies the second electronic device to perform a preset operation according to the location of the user and the location of the second electronic device.
  • The second electronic device (an IoT device) automatically performs a certain operation without requiring the user to perform any operation or carry any electronic device, which is convenient and quick.
  • Acquiring, in the whole-room coordinate system provided by the central device, the position information of the R second electronic devices and the position information of the user includes: based on the first electronic device's position measurement of the mobile device, the measurement and conversion of the human body position, and the communication between the central device and the first electronic device, periodically or in real time acquiring the position information of the R second electronic devices and the position information of the user in the whole-room coordinate system provided by the central device. In this way, the central device can control the second electronic device in real time, improving the automatic control efficiency of the second electronic device.
  • When there is one user, the location information of the user is the location information of that user; when there are multiple users, the location information of the user is the location information of the multiple users.
  • In one embodiment, a user's position is represented by its coordinates in a coordinate system (such as the whole-room coordinate system); the center position of multiple users is the average value of the multiple users' coordinates.
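The center position of multiple users, defined above as the average of their coordinates, is straightforward to compute. The function name and 2-D coordinate tuples are illustrative:

```python
def center_position(coords):
    # Average the users' coordinates to obtain the center position
    # described above (coordinates assumed to be in the whole-room frame).
    n = len(coords)
    return (sum(x for x, _ in coords) / n,
            sum(y for _, y in coords) / n)
```

The hub could then use this single point, rather than each individual position, when choosing which device is "closest to the users".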
  • T second electronic devices among the R second electronic devices support voice wake-up;
  • In response to receiving a first message, the central device obtains the position information of the T second electronic devices and the user's position information in the whole-room coordinate system; the first message is used to indicate that a second electronic device has received the wake-up voice;
  • the central device sends a first indication message to the device closest to the user, and the first indication message is used to instruct the device closest to the user to wake up;
  • T is a positive integer greater than or equal to 1 and less than or equal to R.
  • In response to receiving the first message from one of the T second electronic devices, the central device acquires the position information of that second electronic device and the first room or the first area where it is located; the central device further obtains the location information of the user in the first room or the first area in the whole-room coordinate system.
  • When the central device receives a first message, it obtains the first room or the first area where the second electronic device corresponding to the first message is located, and obtains the location information of the users in the first room or the first area; it then selects the device closest to the user in the first room or the first area to respond to the wake-up.
  • In response to receiving a plurality of first messages within a preset time period, the central device acquires the location information of the T second electronic devices and the user's location information in the whole-room coordinate system. In this embodiment, the central device receives the first messages sent by all of the second electronic devices (or all of those capable of sending a first message), and the central device determines the device closest to the user from the T second electronic devices to respond to the wake-up.
  • Based on the communication between the central device and the T second electronic devices among the R second electronic devices, the central device obtains in advance that the T second electronic devices support voice wake-up; T is a positive integer greater than or equal to 1 and less than or equal to R; the user is one user or multiple users.
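The nearest-device selection used for voice wake-up above reduces to a minimum-distance search over the devices that reported a first message. A sketch with hypothetical data shapes (device positions and the user position as 2-D points in the whole-room frame):

```python
import math

def pick_wakeup_device(devices, user):
    # devices: {device_id: (x, y)} for the devices that sent a first
    # message; user: (x, y). Returns the id of the closest device,
    # which the hub would then target with the first indication message.
    return min(devices, key=lambda d: math.dist(devices[d], user))
```

Whether `devices` contains only the responders in the user's room (the single-first-message embodiment) or all T wake-up-capable devices (the multiple-first-messages embodiment) changes the inputs, not the selection rule.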
  • the R second electronic devices include the first device and the second device, and the first device and the second device support playing video or audio;
  • the users include the first user
  • The hub device receives a first notification message from the first device, where the first notification message is used to notify the hub device that the first device starts playing the first video or the first audio; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or the first audio. After acquiring that the second device is the playback device closest to the first user, the hub device sends a first notification message to the first device, where this first notification message is used to instruct the first device to stop playing. The hub device receives a first feedback message from the first device, where the first feedback message includes the identification of the first video or the first audio and playback progress information; in response to receiving the first feedback message, the hub device sends a second notification message to the second device, where the second notification message is used to instruct the second device to play the first video or the first audio, and the second notification message includes the identification of the first video or the first audio and the playback progress information.
  • The hub device determines that the user moves away from the first device and approaches the second device, and controls the switching so that the second device continues to play and the first device stops playing.
  • This method enables audio or video playback to follow the user seamlessly and automatically as the user leaves the first device and approaches the second device, without the user carrying any device during the above movement.
  • obtaining that the second device is the playback device closest to the first user includes: obtaining that the second device is the playback device closest to the first user among devices that do not play video or audio.
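The hand-off message flow above (a notification to stop, a feedback message carrying the media identification and playback progress, and a notification telling the second device to continue) can be sketched with plain dictionaries standing in for the messages; the field names and data shapes are illustrative, not from the patent:

```python
def handoff(first_device, second_device, binding):
    # first_device: {'id': ..., 'progress': ...} for the device to stop;
    # second_device: {'id': ...} for the closest playback device;
    # binding: {'media': ...}, the user/media binding held by the hub.
    stop_msg = {'to': first_device['id'], 'cmd': 'stop'}
    # feedback from the first device: media identification + progress
    feedback = {'media': binding['media'],
                'progress': first_device['progress']}
    # notification to the second device: continue from the same position
    play_msg = {'to': second_device['id'], 'cmd': 'play',
                'media': feedback['media'],
                'progress': feedback['progress']}
    return stop_msg, play_msg
```

Forwarding the progress value from the feedback message into the play message is what makes the switch "seamless": the second device resumes exactly where the first device stopped.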
  • the R second electronic devices include the first device and the second device, and the first device and the second device support playing video or audio;
  • the users include the first user and the second user;
  • The hub device receives a first notification message from the first device, where the first notification message is used to notify the hub device that the first device starts playing the first video or the first audio; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or the first audio;
  • The hub device receives a second notification message from the first device, where the second notification message is used to notify the hub device that the first device starts playing the second video or the second audio; in response to receiving the second notification message, the hub device establishes a binding relationship between the second user and the second video or the second audio. After acquiring that the second device is the playback device closest to the second user, the hub device sends a first notification message to the first device, where the first notification message is used to instruct the first device to stop playing; the hub device receives a first feedback message from the first device.
  • the second user uses the first device to play the second video or the second audio.
  • the central device determines that the second user leaves the first device and approaches the second device, and controls the switching so that the second device continues to play the second video or the second audio, and the first device stops playing.
  • obtaining that the second device is the playback device closest to the second user includes: obtaining that the second device is the playback device closest to the second user among devices that do not play video or audio.
  • The R second electronic devices include a first device and a second device; the first device and the second device support playing video or audio; the first device is located in the first room or the first area, and the second device is located in the second room or the second area. The users include a first user, and the first user is located in the first room or the first area. The hub device receives a first notification message from the first device, where the first notification message is used to notify the hub device that the first device starts playing the first video or the first audio; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or the first audio. After obtaining that the first user leaves the first room or the first area and enters the second room or the second area, the hub device sends a first notification message to the first device, where this first notification message is used to instruct the first device to stop playing. The hub device receives a first feedback message from the first device, where the first feedback message includes the identification of the first video or the first audio and playback progress information; in response to receiving the first feedback message, the hub device instructs the second device to continue playing the first video or the first audio.
  • According to the user leaving the first room or the first area where the first device is located and entering the second room or the second area, the hub device controls the switching so that the second device continues playing and the first device stops playing.
  • the hub device after obtaining that the first user leaves the first room or the first area and enters the second room or the second area, the hub device also obtains that the second device is the playback device closest to the first user.
  • After obtaining that the first user leaves the first room or the first area and enters the second room or the second area, the hub device also obtains that, among the devices that are not playing video or audio, the second device is the playback device closest to the first user.
  • the R second electronic devices include a first device and a second device, the first device and the second device support playing video or audio, and the first device is located at the first A room or a first area, the second device is located in the second room or the second area; users include the first user and the second user, the first user and the second user are located in the first room or the first area;
  • The hub device receives a first notification message from the first device, where the first notification message is used to notify the hub device that the first device starts playing the first video or the first audio; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or the first audio;
  • The hub device receives a second notification message from the first device, where the second notification message is used to notify the hub device that the first device starts playing the second video or the second audio; in response to receiving the second notification message, the hub device establishes a binding relationship between the second user and the second video or the second audio; after obtaining that the second user leaves the first room or the first area and enters the second room or the second area, the hub device instructs the first device to stop playing and instructs the second device to continue playing the second video or the second audio.
  • the second user uses the first device to play the second video or the second audio.
  • the central device controls switching so that the second device continues to play the second video or the second audio.
  • This method enables audio or video playback to follow the user seamlessly and automatically as the user leaves the first device and approaches the second device, without the user carrying any device during the above movement.
  • the hub device after obtaining that the second user leaves the first room or the first area and enters the second room or the second area, the hub device also obtains that the second device is the playback device closest to the second user.
  • After obtaining that the second user leaves the first room or the first area and enters the second room or the second area, the hub device also obtains that, among the devices that are not playing video or audio, the second device is the playback device closest to the second user.
  • The R second electronic devices include a stereo system; the stereo system includes a first device and a second device, where the first device plays the stereo left channel and the second device plays the stereo right channel. The hub device obtains that the first device and the second device are located in the first room or the first area, the first distance between the first device and the user, and the second distance between the second device and the user. The hub device determines first volume information for the first device and second volume information for the second device according to the first distance and the second distance; the hub device sends a first play message and a second play message to the first device and the second device respectively, where the first play message includes the first volume information and the second play message includes the second volume information. The first volume information is used to indicate the playback volume of the first device, and the second volume information is used to indicate the playback volume of the second device.
  • In this way, the volumes played by the two audio playback devices are automatically compensated according to the distance between each device and the user: the volume compensation value of the audio playback device farther from the user is larger, so that the user receives the same playback volume from the two audio playback devices, improving the user's experience of listening to stereo audio.
  • The stereo system also includes a third device, and the third device plays the stereo auxiliary sound; the stereo sound includes the stereo left channel, the stereo right channel, and the stereo auxiliary sound. The central device also obtains that the third device is located in the first room or the first area, and the third distance between the third device and the user. The hub device determines the first volume information for the first device, the second volume information for the second device, and third volume information for the third device according to the first distance, the second distance, and the third distance; the hub device sends the first play message, the second play message, and a third play message to the first device, the second device, and the third device respectively, where the third play message includes the third volume information; the third volume information is used to indicate the playback volume of the third device.
  • The R second electronic devices include a stereo system; the stereo system includes a first device and a second device, where the first device plays the stereo left channel and the second device plays the stereo right channel. The hub device obtains that the first device is located in the first area, the second device is located in the second area, and the first area is adjacent to the second area; the hub device also obtains the first distance between the first device and the user and the second distance between the second device and the user. The hub device determines the first volume information for the first device and the second volume information for the second device according to the first distance and the second distance, and sends a first play message and a second play message to the first device and the second device respectively, where the first play message includes the first volume information and the second play message includes the second volume information; the first volume information is used to indicate the playback volume of the first device, and the second volume information is used to indicate the playback volume of the second device.
  • In this embodiment, the two audio playback devices that play the stereo left channel and the stereo right channel are located in adjacent areas; for example, the living room and the dining room are connected, with little separation or obstruction between them and weak spatial independence, so the devices of the same stereo system can be located separately in the living room and the dining room.
  • The volumes played by the two audio playback devices that play the stereo left channel and the stereo right channel are automatically compensated according to the distance between each of the two audio playback devices and the user: the volume compensation value of the audio playback device farther from the user is larger, so that the user receives the same playback volume from the two audio playback devices, improving the user's experience of listening to stereo audio.
  • the stereo system also includes a third device, and the third device plays stereo auxiliary sound;
  • the stereo sound includes stereo left channel, stereo right channel and stereo auxiliary sound;
  • The central device also obtains that the third device is located in the first area, the second area, or a third area adjacent to the second area, and obtains the third distance between the third device and the user;
  • The hub device determines, according to the first distance, the second distance, and the third distance, the first volume information for the first device, the second volume information for the second device, and the third volume information for the third device;
  • The hub device sends the first play message, the second play message, and the third play message to the first device, the second device, and the third device respectively, where the third play message includes the third volume information; the third volume information is used to indicate the playback volume of the third device.
  • The hub device determines, according to the first distance and the second distance, the first volume attenuation value with which the playback volume of the first device reaches the user and the second volume attenuation value with which the playback volume of the second device reaches the user; the hub device then determines the first volume information for the first device and the second volume information for the second device according to the first volume attenuation value and the second volume attenuation value.
  • Determining, according to the first distance and the second distance, the first volume attenuation value with which the playback volume of the first device reaches the user and the second volume attenuation value with which the playback volume of the second device reaches the user involves the attenuation value a_p and its components:
  • a_p is the volume attenuation value;
  • K is the distance attenuation factor, indicating the volume attenuation caused by distance, which is related to the distance between the device and the user;
  • DL_w is the directivity factor, indicating the volume attenuation caused by the direction of audio signal transmission;
  • a_e is the air absorption attenuation, indicating the volume attenuation caused by the air absorbing the audio signal as it travels through the air.
  • the playback volume of the audio playback device is compensated according to the attenuation value in the volume transmission process.
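The compensation described above can be sketched by computing an attenuation value per device and raising the playback volume of the farther device accordingly. The additive combination of K, DL_w, and a_e, the point-source spreading law used for K, and the air-absorption constant are all assumptions for illustration; the patent does not give the exact formula:

```python
import math

def attenuation_db(distance_m, dlw_db=0.0, air_db_per_m=0.005):
    # a_p modeled as a sum of the three components named above (assumed):
    #   K: distance attenuation, here the point-source law 20*log10(d)
    #   DL_w: directivity factor (0 dB assumed for an omnidirectional speaker)
    #   a_e: air absorption, proportional to the path length
    k = 20.0 * math.log10(max(distance_m, 1e-6))
    a_e = air_db_per_m * distance_m
    return k + dlw_db + a_e

def channel_volumes(target_db, d_left_m, d_right_m):
    # Compensate each channel so both reach the user at target_db;
    # the device farther from the user gets the larger compensation value.
    return (target_db + attenuation_db(d_left_m),
            target_db + attenuation_db(d_right_m))
```

For a user 2 m from the left speaker and 4 m from the right, the right channel is driven roughly 6 dB louder, matching the statement above that the farther device receives the larger compensation value.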
  • The R second electronic devices include the first device. The hub device obtains that the first device is located in the first room or the first area, and that there is a first user but no second user in the first room or the first area; the hub device also acquires that the first device has entered private mode and is playing the first video. When at least one second user is detected entering the first room or the first area, the hub device sends a first notification message to the first device, where the first notification message is used to instruct the first device to stop playing and switch to a screen saver; when it is detected that there is no second user in the first room or the first area, the hub device sends a second notification message to the first device, where the second notification message is used to instruct the first device to resume playing the first video.
  • Under the condition that the first device has turned on private mode, if the central device detects that a certain type of user (such as a child) enters the room or area where the device is located, the device automatically pauses playback and switches to the screen saver; when that type of user (such as a child) leaves the room or area where the device is located, the device automatically switches back to the original video or audio and resumes playback; no manual operation is required throughout the process.
  • the hub device learns that the first device has stopped playing and switched to a screen saver.
  • the hub device learns that the first device has resumed playing.
  • The R second electronic devices include the first device. The hub device obtains that the first device is located in the first room or the first area, and that there is a first user but no second user in the first room or the first area; the hub device also acquires that the first device has entered private mode and is playing the first video. After detecting that a user other than the first user enters the first room or the first area, the hub device sends a first notification message to the first device, where the first notification message is used to instruct the first device to stop playing and switch to a screen saver; when it is detected that there is no user other than the first user in the first room or the first area, the hub device sends a second notification message to the first device, where the second notification message is used to instruct the first device to resume playing the first video.
  • Under the condition that the first device has turned on private mode, if the central device detects that a user other than the first user enters the room or area where the device is located, the device automatically pauses playback and switches to the screen saver; if the central device detects that every user other than the first user has left the room or area where the device is located, the device automatically switches back to the original video or audio and resumes playback; no manual operation is required throughout the process.
  • The R second electronic devices include the first device. The hub device acquires that the first device is playing the first video; the hub device also acquires the location of the first device, that the first device is located in the first room or the first area, the user's location in the first room or the first area, and the first viewing area corresponding to the first device. In response to detecting that there is no user in the first viewing area within a first preset time period, the hub device sends a first notification message to the first device, where the first notification message is used to instruct the first device to stop playing the first video. In response to detecting that there is a user in the first viewing area within a second preset time period, the hub device sends a second notification message to the first device, where the second notification message is used to instruct the first device to resume playing the first video.
  • The hub device acquires the relative position between the user and the device. If the user leaves the viewing area of the first device, the first device automatically stops or pauses playback; if the user enters the viewing area of the first device, the first device automatically continues playing. The whole process requires no manual operation by the user.
  • In response to detecting that the distance between at least one user and the first device is less than a first preset distance within a third preset time period, the hub device sends a third notification message to the first device, where the third notification message is used to instruct the first device to stop playing the first video and output reminder information. In response to detecting that, within a fourth preset time period, no user is closer to the first device than the first preset distance and there is a user in the first viewing area, the hub device sends a fourth notification message to the first device, where the fourth notification message is used to instruct the first device to resume playing the first video.
• the central device acquires the relative position between the user and the device; if it determines that the user is too close to the first device, it controls the first device to stop or pause playback and output reminder information; if the user is in the viewing area and not too close to the first device, the first device is controlled to automatically resume playback; no manual operation by the user is required throughout the process.
• the hub device, in response to detecting that within a fifth preset time period no user's distance to the first device is less than the first preset distance and there is no user in the first viewing area, sends a fifth notification message to the first device; the fifth notification message is used to instruct the first device to automatically sleep.
• the central device obtains the relative position of the user and the device; if it determines that the user has left the viewing area of the first device for a long time, it controls the first device to sleep, to save power and protect privacy; no manual operation by the user is required throughout the process.
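The viewing-area pause/resume/sleep behavior described in the clauses above can be sketched as a small decision function. This is only an illustrative sketch: the wedge-shaped viewing area, the thresholds, and all names are assumptions, not details from the application.

```python
import math

def in_viewing_area(user_xy, device_xy, max_dist=4.0, half_angle_deg=60.0, facing_deg=0.0):
    """Return True if a user position falls inside an assumed wedge-shaped
    viewing area in front of the device (distance and angle thresholds are
    illustrative assumptions)."""
    dx, dy = user_xy[0] - device_xy[0], user_xy[1] - device_xy[1]
    dist = math.hypot(dx, dy)
    if dist > max_dist:
        return False
    angle = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the user's bearing and the device's facing.
    return abs((angle - facing_deg + 180) % 360 - 180) <= half_angle_deg

def playback_decision(users, device_xy, empty_since, now, pause_after=10.0):
    """Decide 'pause' or 'resume' based on whether any user has been absent
    from the viewing area for at least `pause_after` seconds (a stand-in for
    the 'preset time period'). Returns (action, new empty-timer value)."""
    occupied = any(in_viewing_area(u, device_xy) for u in users)
    if occupied:
        return "resume", None          # corresponds to the second notification message
    if empty_since is None:
        return "playing", now          # start the empty timer
    if now - empty_since >= pause_after:
        return "pause", empty_since    # corresponds to the first notification message
    return "playing", empty_since
```

In a real hub device the timer state would be kept per playback device, and the viewing area would come from the device's registered position and orientation rather than fixed defaults.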
  • the present application provides an automatic control method based on human body perception, which is applied to a central device, and the central device is connected to the first electronic device and any one of the R second electronic devices.
  • the first electronic device includes a first ultra-wideband module and a millimeter-wave radar module
  • R second electronic devices include a mobile device, and the mobile device includes a second ultra-wideband module
  • R is a positive integer greater than or equal to 1.
• the method includes: based on the position measurement of the mobile device by the first electronic device, the measurement and conversion of the position of the human body, and the communication between the central device and the first electronic device, the central device obtains, in the whole-room coordinate system provided by the central device, the position information of the R second electronic devices and the position information of the user; according to the position information of the user and the position information of the R second electronic devices, the central device controls or notifies at least one second electronic device among the R second electronic devices to execute a preset operation.
  • the central device controls or notifies the second electronic device to perform a preset operation according to the location of the user and the location of the second electronic device.
• the second electronic device (an IoT device) automatically performs a certain operation without requiring the user to perform any operation or carry any electronic device, which is convenient and quick.
• the central device, based on the position measurement of the mobile device by the first electronic device, the measurement and conversion of the human body position, and the communication between the central device and the first electronic device, obtaining the position information of the R second electronic devices and the position information of the user in the whole-room coordinate system provided by the central device includes: based on the position measurement of the mobile device by the first electronic device, the measurement and conversion of the human body position, and the communication between the central device and the first electronic device, periodically or in real time acquiring the position information of the R second electronic devices and the position information of the user in the whole-room coordinate system provided by the central device. In this way, the central device can control the second electronic devices in real time, improving the automatic control efficiency of the second electronic devices.
• when there is one user, the location information of the user is the location information of that user; when there are multiple users, the location information of the user is the location information of the multiple users.
• In one embodiment, a user's position is represented by the user's coordinates in a coordinate system (such as the whole-room coordinate system); the center position of multiple users is the average value of the multiple users' coordinates.
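The center position of multiple users described above is simply the coordinate-wise mean. A minimal sketch (the function name and 2-D tuples are illustrative assumptions):

```python
def center_position(user_coords):
    """Center position of one or more users: the coordinate-wise average
    of their (x, y) coordinates in the whole-room coordinate system."""
    n = len(user_coords)
    x = sum(c[0] for c in user_coords) / n
    y = sum(c[1] for c in user_coords) / n
    return (x, y)
```

With a single user this degenerates to that user's own coordinates, matching the single-user case in the clause above.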
  • T second electronic devices among the R second electronic devices support voice wake-up;
• the central device obtains the position information of the T second electronic devices and the position information of the user in the whole-room coordinate system; the first message is used to indicate that a second electronic device has received the wake-up voice; the central device sends a first indication message to the device closest to the user, and the first indication message is used to instruct the device closest to the user to wake up; T is a positive integer greater than or equal to 1 and less than or equal to R.
• the central device, in response to receiving the first message from one of the T second electronic devices, acquires the position information of that second electronic device and the first room or first area where it is located; the central device further obtains the location information of the user in the first room or the first area in the whole-room coordinate system.
• when the central device receives a first message, it obtains the first room or the first area where the second electronic device corresponding to the first message is located, and obtains the location information of the users in the first room or the first area; it then selects the device closest to the user in the first room or area to respond to the wake-up.
• the central device, in response to receiving a plurality of first messages within a preset time period, acquires the location information of the T second electronic devices and the location information of the user in the whole-room coordinate system. In this embodiment, the central device receives the first messages sent by all of the second electronic devices (or all of those capable of sending a first message), and the central device determines, from the T second electronic devices, the device closest to the user to respond to the wake-up.
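The wake-up arbitration above reduces to picking, among the devices that reported the wake-up word, the one nearest the user. A minimal sketch under assumed data shapes (the dict keys and function name are illustrative, not from the application):

```python
import math

def pick_wakeup_device(devices, user_xy):
    """Among the voice-wake-capable devices that sent a first message,
    return the id of the one closest to the user in the whole-room
    coordinate system (device dict shape is an assumption)."""
    nearest = min(devices,
                  key=lambda d: math.hypot(d["x"] - user_xy[0],
                                           d["y"] - user_xy[1]))
    return nearest["id"]
```

The hub would then send the first indication message only to the returned device, so a single device answers even when several heard the wake-up voice.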
• the central device, based on the communication between the central device and T second electronic devices among the R second electronic devices, pre-obtains that the T second electronic devices support voice wake-up; T is a positive integer greater than or equal to 1 and less than or equal to R; the user is one user or multiple users.
  • the R second electronic devices include the first device and the second device, and the first device and the second device support playing video or audio;
  • the users include the first user
• the hub device receives a first notification message from the first device, and the first notification message is used to notify that the first device starts playing the first video or the first audio; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or first audio; after acquiring that the second device is the playback device closest to the first user, the hub device sends a first notification message to the first device, the first notification message being used to instruct the first device to stop playing; the hub device receives a first feedback message from the first device, and the first feedback message includes the identification of the first video or the first audio and playback progress information; in response to receiving the first feedback message, the hub device sends a second notification message to the second device, the second notification message being used to instruct the second device to play the first video or the first audio, and the second notification message includes the identification of the first video or the first audio and the playback progress information.
• the hub device determines that the user moves away from the first device and approaches the second device, and controls the switching so that the second device continues playing and the first device stops playing.
  • This method enables the user to seamlessly and automatically switch between audio and video playback by leaving the first device and approaching the second device without carrying any device during the above-mentioned movement.
• obtaining that the second device is the playback device closest to the first user includes: obtaining that, among devices that are not playing video or audio, the second device is the playback device closest to the first user.
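The clause above narrows the handover target to idle devices. A minimal sketch of that selection (the device dict shape, `playing` flag, and function name are illustrative assumptions):

```python
import math

def nearest_idle_player(devices, user_xy):
    """Return the playback device closest to the user among those not
    already playing video or audio; None if every device is busy
    (device dict shape is an assumption for illustration)."""
    idle = [d for d in devices if not d.get("playing")]
    if not idle:
        return None
    return min(idle,
               key=lambda d: math.hypot(d["x"] - user_xy[0],
                                        d["y"] - user_xy[1]))
```

Filtering out busy devices avoids interrupting another user's ongoing playback when choosing where to continue the first user's video or audio.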
  • the R second electronic devices include the first device and the second device, and the first device and the second device support playing video or audio;
  • the users include the first user and the second user;
• the hub device receives a first notification message from the first device, and the first notification message is used to notify that the first device starts playing the first video or the first audio; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or the first audio;
• the hub device receives a second notification message from the first device, and the second notification message is used to notify that the first device starts playing the second video or the second audio; in response to receiving the second notification message, the hub device establishes a binding relationship between the second user and the second video or second audio; after acquiring that the second device is the playback device closest to the second user, the hub device sends a first notification message to the first device, the first notification message being used to instruct the first device to stop playing; the hub device receives a first feedback message from the first device.
  • the second user uses the first device to play the second video or the second audio.
• the central device determines that the second user leaves the first device and approaches the second device, and controls the switching so that the second device continues to play the second video or the second audio, and the first device stops playing.
  • obtaining that the second device is the playback device closest to the second user includes: obtaining that the second device is the playback device closest to the second user among devices that do not play video or audio.
• the R second electronic devices include a first device and a second device, the first device and the second device support playing video or audio, the first device is located in the first room or the first area, and the second device is located in the second room or the second area; the user includes a first user, and the first user is located in the first room or the first area; the hub device receives a first notification message from the first device, the first notification message being used to notify that the first device starts playing the first video or first audio; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or first audio; after acquiring that the first user leaves the first room or the first area and enters the second room or the second area, the hub device sends a first notification message to the first device, the first notification message being used to instruct the first device to stop playing; the hub device receives a first feedback message from the first device, the first feedback message including the identification of the first video or the first audio and playback progress information; in response to receiving the first feedback message, the hub device sends a second notification message to the second device.
• according to the user leaving the first room or the first area where the first device is located and entering the second room or the second area, the hub device controls the switching so that the second device continues playing and the first device stops playing.
• after obtaining that the first user leaves the first room or the first area and enters the second room or the second area, the hub device also obtains that the second device is the playback device closest to the first user.
• after obtaining that the first user leaves the first room or the first area and enters the second room or the second area, the hub device also obtains that, among the devices that are not playing video or audio, the second device is the playback device closest to the first user.
• the R second electronic devices include a first device and a second device, the first device and the second device support playing video or audio, the first device is located in the first room or the first area, and the second device is located in the second room or the second area; the users include the first user and the second user, and the first user and the second user are located in the first room or the first area;
• the hub device receives a first notification message from the first device, and the first notification message is used to notify that the first device starts playing the first video or first audio; in response to receiving the first notification message, the hub device establishes a binding relationship between the first user and the first video or first audio;
• the hub device receives a second notification message from the first device, and the second notification message is used to notify that the first device starts playing the second video or second audio; in response to receiving the second notification message, the hub device establishes a binding relationship between the second user and the second video or second audio; after obtaining that the second user leaves the first room or the first area and enters the second room or the second area, the hub device sends a first notification message to the first device, the first notification message being used to instruct the first device to stop playing.
  • the second user uses the first device to play the second video or the second audio.
  • the central device controls switching so that the second device continues to play the second video or the second audio.
  • This method enables the user to seamlessly and automatically switch between audio and video playback by leaving the first device and approaching the second device without carrying any device during the above-mentioned movement.
• after obtaining that the second user leaves the first room or the first area and enters the second room or the second area, the hub device also obtains that the second device is the playback device closest to the second user.
• after obtaining that the second user leaves the first room or the first area and enters the second room or the second area, the hub device also obtains that, among the devices that are not playing video or audio, the second device is the playback device closest to the second user.
• the R second electronic devices include a stereo system, the stereo system includes a first device and a second device, the first device plays the stereo left channel, and the second device plays the stereo right channel; the hub device obtains that the first device and the second device are located in the first room or the first area, the first distance between the first device and the user, and the second distance between the second device and the user; the hub device determines first volume information for the first device and second volume information for the second device according to the first distance and the second distance; the hub device sends a first play message and a second play message to the first device and the second device respectively; the first play message includes the first volume information, and the second play message includes the second volume information; the first volume information is used to indicate the playback volume of the first device, and the second volume information is used to indicate the playback volume of the second device.
• the volumes played by the two audio playback devices are automatically adjusted to make compensation: the volume compensation value of the audio playback device farther from the user is larger, so that the user perceives the same playback volume from the two audio playback devices, improving the user's experience of listening to stereo audio.
• the stereo system also includes a third device, and the third device plays the stereo auxiliary sound; the stereo sound includes the stereo left channel, the stereo right channel and the stereo auxiliary sound; the central device also obtains that the third device is located in the first room or the first area, and the third distance between the third device and the user; the hub device determines, based on the first distance, the second distance and the third distance, the first volume information for the first device, the second volume information for the second device, and the third volume information for the third device; the hub device sends the first play message, the second play message and the third play message to the first device, the second device and the third device respectively, the third play message including the third volume information; the third volume information is used to indicate the playback volume of the third device.
• the R second electronic devices include a stereo system, the stereo system includes a first device and a second device, the first device plays the stereo left channel, and the second device plays the stereo right channel; the hub device obtains that the first device is located in the first area, the second device is located in the second area, and the first area is adjacent to the second area; the hub device also obtains the first distance between the first device and the user and the second distance between the second device and the user; the hub device determines the first volume information for the first device and the second volume information for the second device according to the first distance and the second distance.
  • the central device sends a first play message and a second play message to the first device and the second device respectively, the first play message includes the first volume information, and the second play message includes the second volume information; the first volume information is used for The playback volume of the first device is indicated, and the second volume information is used to indicate the playback volume of the second device.
• the two audio playback devices that play the stereo left channel and the stereo right channel are located in adjacent areas; for example, the living room and the dining room are connected together, there is little separation or obstruction between the two, and the spatial independence is weak; devices in the same stereo system can thus be located separately in the living room and the dining room.
• the volumes played by the two audio playback devices playing the stereo left channel and the stereo right channel are automatically compensated according to the distance between each of the two audio playback devices and the user.
• the volume compensation value of the audio playback device farther from the user is larger, so that the user perceives the same playback volume from the two audio playback devices, improving the user's experience of listening to stereo audio.
  • the stereo system also includes a third device, and the third device plays stereo auxiliary sound;
  • the stereo sound includes stereo left channel, stereo right channel and stereo auxiliary sound;
• the central device also obtains that the third device is located in the first area, the second area, or a third area adjacent to the second area, and the third distance between the third device and the user;
• the hub device determines, according to the first distance, the second distance and the third distance, the first volume information for the first device, the second volume information for the second device, and the third volume information for the third device;
• the hub device sends the first play message, the second play message and the third play message to the first device, the second device and the third device respectively, the third play message including the third volume information; the third volume information is used to indicate the playback volume of the third device.
• the hub device determines, according to the first distance and the second distance, the first volume attenuation value at which the playback volume of the first device reaches the user and the second volume attenuation value at which the playback volume of the second device reaches the user; the hub device determines the first volume information for the first device and the second volume information for the second device according to the first volume attenuation value and the second volume attenuation value.
• the hub device determining, according to the first distance and the second distance, the first volume attenuation value at which the playback volume of the first device reaches the user and the second volume attenuation value at which the playback volume of the second device reaches the user includes:
• a_p is the volume attenuation value;
• K is the distance attenuation factor, indicating the volume attenuation caused by distance, which is related to the distance between the device and the user;
• DL_w is the directivity factor, indicating the volume attenuation caused by the direction of audio signal transmission;
• a_e is the air absorption attenuation, indicating the volume attenuation caused by the air absorbing the audio as it is transmitted through the air.
  • the playback volume of the audio playback device is compensated according to the attenuation value in the volume transmission process.
• the R second electronic devices include the first device; the hub device obtains that the first device is located in the first room or the first area, and that there is a first user but no second user in the first room or the first area; the hub device also acquires that the first device has entered the private mode and is playing the first video; when detecting that at least one second user enters the first room or the first area, the hub device sends a first notification message to the first device; the first notification message is used to instruct the first device to stop playing and switch to a screen saver; when detecting that there is no second user in the first room or the first area, the hub device sends a second notification message to the first device; the second notification message is used to instruct the first device to resume playing the first video.
• under the condition that the device turns on the private mode, if the central device detects that a certain type of user (such as a child) enters the room or area where the device is located, it automatically pauses the playback and switches to the screen saver; if the central device detects that the certain type of user (such as a child) leaves the room or area where the device is located, it automatically switches back to the original video or audio and automatically resumes playback; no manual operation is required throughout the process.
  • the hub device learns that the first device has stopped playing and switched to a screen saver.
  • the hub device learns that the first device has resumed playing.
• the R second electronic devices include the first device; the hub device obtains that the first device is located in the first room or the first area, and that there is a first user but no second user in the first room or the first area; the hub device also acquires that the first device has entered the private mode and is playing the first video; after detecting that a user other than the first user enters the first room or the first area, the hub device sends a first notification message to the first device; the first notification message is used to instruct the first device to stop playing and switch to a screen saver; when detecting that there is no user other than the first user in the first room or the first area, the hub device sends a second notification message to the first device; the second notification message is used to instruct the first device to resume playing the first video.
• the central device: under the condition that the device turns on the private mode, if the central device detects that a user other than the first user enters the room or area where the device is located, it automatically pauses the playback and switches to the screen saver; if the central device detects that the user other than the first user leaves the room or area where the device is located, it automatically switches back to the original video or audio and automatically resumes playback; no manual operation is required throughout the process.
• the R second electronic devices include the first device; the hub device acquires that the first device is playing the first video, and the hub device also acquires that the first device is located in the first room or the first area, the position of the user in the first room or the first area, and the first viewing area corresponding to the first device; in response to detecting that there is no user in the first viewing area within a first preset time period, the hub device sends a first notification message to the first device; the first notification message is used to instruct the first device to stop playing the first video; in response to detecting that there is a user in the first viewing area within a second preset time period, the hub device sends a second notification message to the first device; the second notification message is used to instruct the first device to resume playing the first video.
• the hub device acquires the relative position between the user and the first device; if the user leaves the viewing area of the first device, the first device automatically stops or pauses playback; if the user enters the viewing area of the first device, the first device automatically continues playback; the whole process requires no manual operation by the user.
• the hub device, in response to detecting that within a third preset time period the distance between at least one user and the first device is less than a first preset distance, sends a third notification message to the first device; the third notification message is used to instruct the first device to stop playing the first video and output reminder information; in response to detecting that, within a fourth preset time period, no user's distance to the first device is less than the first preset distance and there is a user in the first viewing area, the hub device sends a fourth notification message to the first device; the fourth notification message is used to instruct the first device to resume playing the first video.
• the central device acquires the relative position between the user and the device; if it determines that the user is too close to the first device, it controls the first device to stop or pause playback and output reminder information; if the user is in the viewing area and not too close to the first device, the first device is controlled to automatically resume playback; no manual operation by the user is required throughout the process.
• the hub device, in response to detecting that within a fifth preset time period no user's distance to the first device is less than the first preset distance and there is no user in the first viewing area, sends a fifth notification message to the first device; the fifth notification message is used to instruct the first device to automatically sleep.
• the central device obtains the relative position of the user and the device; if it determines that the user has left the viewing area of the first device for a long time, it controls the first device to sleep, to save power and protect privacy; no manual operation by the user is required throughout the process.
  • the present application provides a computer-readable storage medium.
• the computer-readable storage medium includes a computer program, and when the computer program runs on an electronic device, the electronic device is caused to execute the method according to the fourth aspect or any one of the implementation manners of the fourth aspect.
  • the present application provides a computer program product.
• when the computer program product runs on an electronic device, the electronic device is caused to execute the method according to the fourth aspect or any one of the implementation manners of the fourth aspect.
  • the fifth aspect and any implementation manner of the fifth aspect correspond to the fourth aspect and any implementation manner of the fourth aspect respectively.
• for the technical effect corresponding to the fifth aspect and any one of the implementation manners of the fifth aspect, refer to the technical effect corresponding to the above fourth aspect and any one of the implementation manners of the fourth aspect, which will not be repeated here.
  • the sixth aspect and any implementation manner of the sixth aspect correspond to the fourth aspect and any implementation manner of the fourth aspect respectively.
• for the technical effect corresponding to the sixth aspect and any one of the implementation manners of the sixth aspect, refer to the technical effect corresponding to the above fourth aspect and any one of the implementation manners of the fourth aspect, which will not be repeated here.
  • FIG. 1 is a schematic diagram of a scene of an automatic control method based on human perception provided in an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of the first electronic device in the automatic control method provided by the embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a central device in an automatic control method provided by an embodiment of the present application
  • FIG. 4 is a schematic structural diagram of the second electronic device in the automatic control method provided by the embodiment of the present application.
  • FIG. 5A is a schematic structural diagram of an ultra-wideband (Ultra-Wide Band, UWB) module in the first electronic device provided by the present application;
  • FIG. 5B is a schematic structural diagram of the millimeter-wave radar module in the first electronic device provided by the embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a UWB module in a second electronic device provided in an embodiment of the present application.
  • Fig. 7 is a schematic diagram of distribution of several antennas of the UWB module in the first electronic device provided by the embodiment of the present application;
  • FIG. 8 is a schematic diagram of the distribution of several antennas of the millimeter-wave radar module in the first electronic device provided by the embodiment of the present application;
  • FIG. 9 is a schematic diagram of several establishment methods of the first coordinate system in the case that the UWB module of the first electronic device includes three antennas according to the embodiment of the present application;
  • FIG. 10 is a schematic diagram of a method of establishing a second coordinate system provided by the embodiment of the present application.
  • FIG. 11 is a schematic diagram of a method of establishing the third coordinate system provided by the embodiment of the present application.
  • FIG. 12 is a schematic diagram of the principle of coordinate calculation of the second electronic device under the first coordinate system provided by the embodiment of the present application.
  • FIG. 13 is a schematic diagram of several marking methods for the second electronic device provided by the embodiment of the present application.
  • FIG. 14 is a schematic diagram of a marking method for a spatial region provided by an embodiment of the present application.
  • Figure 15 is a schematic diagram of the pitch angle, azimuth angle and roll angle of the second coordinate system relative to the first coordinate system provided by the embodiment of the present application;
  • Figure 16 is a schematic diagram of the pitch angle, azimuth angle and roll angle of the third coordinate system relative to the first coordinate system provided by the embodiment of the present application;
  • FIG. 17 is a schematic diagram of a way to establish the fourth coordinate system provided by the embodiment of the present application.
  • Fig. 18 is a schematic diagram of the principle of determining the distance and radial velocity of the reflection point by the millimeter-wave radar provided by the embodiment of the present application;
  • FIG. 19 is a schematic diagram of the principle of determining the signal direction of the reflected signal at the reflection point by the millimeter-wave radar provided in the embodiment of the present application;
  • FIG. 20 is a schematic diagram of the principle of determining the user's coordinates by the first electronic device under the fourth coordinate system provided by the embodiment of the present application;
  • Fig. 21 is a schematic diagram of a method for the first electronic device to determine the user's coordinates under the fourth coordinate system provided by the embodiment of the present application;
  • Fig. 22 is a schematic diagram of a method for obtaining the user's breathing frequency and heartbeat frequency by a millimeter-wave radar provided in an embodiment of the present application;
  • Fig. 23 is a schematic diagram of a method for determining a user's body posture by a millimeter-wave radar provided in an embodiment of the present application;
  • Fig. 24 is a schematic diagram of conversion between the first coordinate system and the fourth coordinate system in the first electronic device provided by the embodiment of the present application;
  • Fig. 25 is a schematic diagram of an example of establishing the whole house coordinate system (fifth coordinate system), the first coordinate system and the sixth coordinate system (geographical coordinate system) provided by the embodiment of the present application;
  • Fig. 26 is a schematic diagram of the process and principle of the automatic control method based on human perception in the whole house scenario provided by the embodiment of the present application;
  • Fig. 27 is a schematic flow chart of correcting the installation error of the first electronic device provided by the embodiment of the present application.
  • Fig. 28 is a schematic diagram of the principle of an ICP algorithm provided by the embodiment of the present application.
  • FIG. 29 is a schematic diagram of the area division and user interface provided by the embodiment of the present application.
  • FIG. 30A is a schematic diagram of a scene of a method for waking up an audio playback device provided by an embodiment of the present application;
  • FIG. 30B is a schematic diagram of a scene of a method for automatically waking up a device based on human perception in a whole-house scene provided by an embodiment of the present application;
  • FIG. 30C is a schematic flowchart of a method for automatically waking up a device based on human perception in a whole-house scenario provided by an embodiment of the present application;
  • FIG. 30D is a schematic diagram of another scene of the method of automatically waking up the device based on human perception in the whole house scene provided by the embodiment of the present application;
  • FIG. 31 is a schematic diagram of a scene of a method for automatically switching devices based on human perception in a whole-house scene provided by an embodiment of the present application;
  • FIG. 32A-FIG. 32E are schematic flowcharts of the method for automatically switching devices based on human perception in the whole house scenario provided by the embodiment of the present application;
  • FIG. 33 is a schematic diagram of the human-computer interaction interface involved in the method of automatically switching devices based on human perception in the whole-house scenario provided by the embodiment of the present application;
  • FIG. 34A is a schematic diagram of a scene and principle of a stereo automatic adjustment method based on human perception in the whole house scene provided by the embodiment of the present application;
  • Fig. 34B is a schematic diagram of another scene and principle of the stereo automatic adjustment method based on human perception in the whole house scene provided by the embodiment of the present application;
  • FIG. 35 is a schematic flowchart of a stereo automatic adjustment method based on human perception in a whole-house scenario provided by an embodiment of the present application;
  • Fig. 36 is a schematic diagram of the scene and principle of the device automatic playback method based on human perception in the whole house scene provided by the embodiment of the present application;
  • Fig. 37A-Fig. 37B are schematic flow diagrams of the device automatic playback method based on human perception in the whole house scenario provided by the embodiment of the present application;
  • Fig. 38 is a schematic diagram of the human-computer interaction interface involved in the device automatic playback method based on human perception in the whole house scenario provided by the embodiment of the present application;
  • Fig. 39A is a schematic diagram of a device automatically pausing playback, automatically stopping playback, or automatically shutting down based on human perception in a whole-house scenario provided by the embodiment of the present application;
  • Fig. 39B is a schematic diagram of a device based on human perception that automatically starts playing, resumes playing, or turns on automatically in the whole-house scenario provided by the embodiment of the present application;
  • Fig. 40A-Fig. 40C are schematic flow diagrams of the device automatic playback method based on human perception in the whole house scenario provided by the embodiment of the present application;
  • Fig. 41 is a schematic diagram of a method of determining a viewing area in a device automatic playback method based on human perception in a whole-house scenario provided by an embodiment of the present application;
  • Fig. 42 is a schematic diagram of a way to determine the distance between the user and the video playback device in the device automatic playback method based on human perception in the whole-house scene provided by the embodiment of the present application;
  • Fig. 43 is a schematic structural diagram of the first electronic device provided by the embodiment of the present application.
  • FIG. 44 is a schematic structural diagram of a second electronic device provided by an embodiment of the present application.
  • A and/or B may indicate three cases: A exists alone, both A and B exist, or B exists alone, where A and B can be singular or plural.
  • the character "/" generally indicates that the contextual objects are an "or" relationship.
  • references to "one embodiment" or "some embodiments" or the like in this specification mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise.
  • the terms "including", "comprising", "having" and variations thereof mean "including but not limited to", unless specifically stated otherwise.
  • the term "connected" includes both direct and indirect connections, unless otherwise stated. "First" and "second" are used for descriptive purposes only, and should not be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features.
  • FIG. 1 is a schematic diagram of a scenario of an automatic control method based on human perception provided in an embodiment of the present application.
  • the whole house includes the entrance passage, kitchen, dining room, living room, balcony, master bedroom, second bedroom, bathroom, etc.
  • the whole house is provided with at least one first electronic device.
  • each room or area includes at least one first electronic device.
  • the whole house is also provided with a second electronic device (such as an IoT device).
  • the kitchen is equipped with a rice cooker or electric pressure cooker, gas equipment, etc.
  • the living room is equipped with speakers (such as smart speakers), TVs (such as smart TVs, also known as smart screens, large screens, etc.), routing equipment, etc.
  • the restaurant is equipped with sweeping robots, etc.;
  • the master bedroom is equipped with TVs (for example, smart TVs), speakers (for example, smart speakers), floor lamps (for example, smart floor lamps), routing equipment etc.;
  • the second bedroom is equipped with desk lamps (eg, smart desk lamps), speakers (eg, smart speakers), etc.;
  • the bathroom is equipped with body fat scales, etc.
  • the automatic control method based on human perception provided in the embodiment of the present application includes the automatic control method based on human perception in the whole house scene, and also includes the automatic control method based on human perception in a single room or area.
  • the automatic control method based on human perception provided in the embodiment of the present application is applied to a communication system.
  • the communication system includes a communication system based on human body perception in a whole-house scene (also called a whole-house intelligence system), and a communication system based on human body perception in a single room or area (also called a room intelligence system or a regional intelligence system).
  • the communication system includes at least one first electronic device 100 and at least one second electronic device 300 .
  • the communication system may further include a central device 200 .
  • the first electronic device 100 is used to locate the second electronic device 300 and/or the user.
  • the first electronic device 100 may include a sensor.
  • the first electronic device 100 includes an ultra-wideband (Ultra-Wide Band, UWB) module and a millimeter-wave radar module.
  • the UWB module is used to locate the second electronic device 300
  • the millimeter wave radar module is used to locate the user.
  • UWB technology is a radio communication technology that does not use carrier-modulated signals, but uses energy pulse sequences below the nanosecond or microsecond level, and spreads the pulses over a wide frequency range through orthogonal frequency-division modulation or direct sequencing.
  • UWB has the characteristics of wide spectrum, high precision, low power consumption, strong multipath resistance, high security, and low system complexity. It is mostly used for short-distance, high-speed wireless communication, especially in the field of indoor positioning. Generally speaking, the positioning accuracy of the UWB system can reach centimeter level.
  • the UWB system includes UWB base stations and UWB tags.
  • the UWB base station determines the position (coordinates) of the UWB tag by detecting the distance between the UWB tag and the UWB base station, and the signal direction of the UWB tag; that is, positioning the UWB tag.
  • the UWB positioning is based on the UWB coordinate system (also called the first coordinate system).
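As an illustration of the relationship above, a range measurement plus the signal's azimuth and elevation angles can be converted into the tag's Cartesian coordinates in the base station's (first) coordinate system. The following is a minimal sketch; the function and parameter names are illustrative and not taken from the application:

```python
import math

def uwb_tag_position(distance_m, azimuth_deg, elevation_deg):
    """Convert a UWB range and signal-direction measurement into
    Cartesian coordinates in the base station's (first) coordinate system."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)
    y = distance_m * math.cos(el) * math.sin(az)
    z = distance_m * math.sin(el)
    return (x, y, z)
```

For example, a tag 2 m away along the base station's reference direction (azimuth 0°, elevation 0°) maps to (2, 0, 0).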
  • the second electronic device includes a UWB module.
  • the UWB module of the first electronic device 100 realizes the function of UWB base station, and the UWB module of the second electronic device realizes the function of UWB tag.
  • by using the UWB module of the first electronic device 100 to locate the UWB module of the second electronic device, the positioning of the second electronic device by the first electronic device 100 can be realized.
  • some second electronic devices do not include a UWB module.
  • the second electronic device including the UWB module is a mobile device (such as a smart phone, a remote controller, etc.).
  • the second electronic device including the UWB module can be used to mark the second electronic device not including the UWB module, so that the first electronic device 100 can locate the second electronic device not including the UWB module.
  • the specific labeling method will be introduced in detail later.
  • Millimeter-wave radar works in the millimeter-wave band and is mainly used to detect moving objects. Its working frequency band is distributed in the frequency domain of 30-300 GHz (wavelength 1-10 mm).
  • the millimeter-wave radar continuously transmits (radiates) a specific form of wireless electromagnetic signals when it is working, and receives the electromagnetic echo signals reflected by the object, and determines the spatial information of the object by comparing the difference between the transmitted signal and the received signal.
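To make "the difference between the transmitted signal and the received signal" concrete: in a typical FMCW (frequency-modulated continuous-wave) radar, the target's range follows from the beat frequency between the transmitted and received chirps, and the radial velocity from the phase change between consecutive chirps. A minimal sketch under that common assumption — the application does not fix a specific waveform, and the names are illustrative:

```python
import math

C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_freq_hz, bandwidth_hz, chirp_time_s):
    """Range from the beat frequency of one FMCW chirp:
    R = c * f_b * T_c / (2 * B)."""
    return C * beat_freq_hz * chirp_time_s / (2.0 * bandwidth_hz)

def radial_velocity(phase_diff_rad, wavelength_m, chirp_interval_s):
    """Radial velocity from the phase change between consecutive chirps:
    v = lambda * delta_phi / (4 * pi * T)."""
    return wavelength_m * phase_diff_rad / (4.0 * math.pi * chirp_interval_s)
```

For example, with a 1 GHz chirp bandwidth swept over 100 µs, a measured beat frequency of 100 kHz corresponds to a reflection point 1.5 m away.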
  • Millimeter-wave radar has the characteristics of small size and high spatial resolution; deployed indoors, it can be used to detect information such as the position of the human body (user) in the whole house, physiological characteristics (such as breathing rate, heartbeat frequency, etc.), identity category (such as adult, child, etc.) and human body posture (such as standing, sitting, lying down, etc.).
  • the specific method will be introduced in detail later.
  • the positioning by the millimeter-wave radar is based on the millimeter-wave radar coordinate system (also referred to as the second coordinate system).
  • the coordinates of the second coordinate system and the first coordinate system need to be converted or unified into the same coordinate system.
  • the specific coordinate system conversion method will be introduced in detail later.
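The conversion itself is the usual rigid transform: rotate a point measured in the second (radar) coordinate system into the orientation of the first (UWB) coordinate system, then translate by the offset between the two origins. A minimal sketch for a rotation about the vertical axis — a full conversion would also use the pitch and roll angles discussed later, and the names are illustrative:

```python
import math

def rotate_z(point, yaw_rad):
    """Rotate a 3-D point about the z (vertical) axis by yaw_rad."""
    x, y, z = point
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x - s * y, s * x + c * y, z)

def to_first_coordinate_system(point_radar, yaw_rad, origin_offset):
    """Map a point from the radar (second) coordinate system into the
    UWB (first) coordinate system: rotate, then translate by the offset
    of the radar origin expressed in the first coordinate system."""
    xr, yr, zr = rotate_z(point_radar, yaw_rad)
    ox, oy, oz = origin_offset
    return (xr + ox, yr + oy, zr + oz)
```

For example, a point at (1, 0, 0) in a radar frame rotated 90° about the vertical axis and offset by (2, 3, 0) lands at (2, 4, 0) in the first coordinate system.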
  • the embodiment of the present application is introduced by taking a first electronic device 100 including a UWB module and a millimeter wave radar as an example.
  • the first electronic device 100 may only include a UWB module or a millimeter-wave radar; the first electronic device 100 including the UWB module is used to locate the second electronic device 300, and the first electronic device 100 including the millimeter-wave radar is used to locate the user.
  • the aforementioned two types of first electronic devices 100 cooperate with each other to implement the automatic control method based on human body perception provided by the embodiment of the present application.
  • the embodiment of the present application does not limit this.
  • the first electronic device 100 may acquire the location of the second electronic device 300 (ie, locate the second electronic device), and may also acquire the location of the user in the room or area (ie, locate the user).
  • the whole house includes at least one room or area. If only one first electronic device is installed in the whole house, the signal may be attenuated by walls and other obstructions, and such a first electronic device cannot cover all areas of the whole house. Therefore, a plurality of first electronic devices are generally installed in the whole house.
  • each relatively independent space in the whole house (for example, living room, bedroom, study room, balcony, bathroom, kitchen, corridor, etc.) is respectively provided with a first electronic device, which is used to locate the second electronic device and the user in that independent space; in this way, the second electronic device or user at any position in the whole house can be detected by a first electronic device.
  • the signal sending and receiving range of the first electronic device installed in the entrance hallway may cover the entrance hallway.
  • the transmitting and receiving signal range of the first electronic device set in the kitchen can cover the kitchen.
  • the transmitting and receiving signal range of the first electronic device set in the living room can cover the living room.
  • the transmitting and receiving signal range of the first electronic device set in the restaurant can cover the restaurant.
  • the transmitting and receiving signal range of the first electronic device installed on the balcony can cover the balcony.
  • the sending and receiving signal range of the first electronic device set in the master bedroom can cover the master bedroom.
  • the transmitting and receiving signal range of the first electronic device set in the bathroom can cover the bathroom.
  • the sending and receiving signal range of the first electronic device set in the second bedroom can cover the second bedroom.
  • the first electronic device may be set on a wall of a room or area.
  • the first electronic device may be installed on a ceiling of a room or an area, or the like. In this way, the shielding of the signal by objects such as furniture in the whole house can be reduced, preventing blocked signals from lowering the detection accuracy of the first electronic device.
  • the first electronic device may be set on the floor of a room or area, or the like.
  • the first electronic device 100 may exist independently, or may be integrated with the second electronic device. This application does not limit this.
  • the first electronic device 100 and the smart air conditioner are integrated into one device.
  • some rooms or areas do not need to be provided with the first electronic device; that is, not all rooms or areas are provided with at least one first electronic device.
  • a restaurant may not be provided with the first electronic device.
  • the entrance aisle and the dining room can share a first electronic device, or the dining room and living room can share a first electronic device.
  • the second electronic device 300 includes but is not limited to a smart TV, lamps (such as ceiling lamps, smart table lamps, aroma lamps, etc.), sweeping robots, body fat scales, smart clothes hangers, smart rice cookers, air purifiers, humidifiers, desktop computers, routing devices, smart sockets, water dispensers, smart refrigerators, smart air conditioners, smart switches, smart door locks, etc.
  • the second electronic device 300 may not be a smart home device, but a portable device, such as a personal computer (personal computer, PC), a tablet computer, a mobile phone, or a smart remote control. The embodiment of the present application does not limit the specific form of the second electronic device 300.
  • the second electronic device 300 and the first electronic device 100 may be connected with the central device 200 in a wired manner (for example, power line communication (power line communication, PLC)) and/or a wireless manner (for example, wireless fidelity (wireless fidelity, Wi-Fi), Bluetooth, etc.).
  • the connection manners of the second electronic device 300 and the first electronic device 100 to the hub device 200 may be the same or different.
  • both the second electronic device 300 and the first electronic device 100 are connected to the central device 200 in a wireless manner.
  • the second electronic device 300 is connected to the central device 200 in a wireless manner
  • the first electronic device 100 is connected to the central device 200 in a wired manner.
  • devices such as smart speakers, smart TVs, body fat scales, and sweeping robots in the second electronic device 300 are connected to the central device 200 wirelessly (such as Wi-Fi).
  • devices such as smart door locks are connected to the central device 200 through a wired method (such as PLC).
  • the first electronic device 100 communicates with the second electronic device 300 in a wireless manner.
  • the first electronic device 100 may upload at least one item of the location information of the second electronic device 300 acquired through detection, and the user's location, physiological characteristics, identity category, and human body posture, to the central device 200 in a wired or wireless manner.
  • the hub device 200, also referred to as a hub, a central control system, or a host, is configured to receive information sent by the first electronic device 100.
  • the central device 200 is also used to build a whole-house map, establish a whole-house coordinate system, and unify the location information obtained by each first electronic device 100 into the whole-house coordinate system for unified measurement.
  • the position information of the second electronic device 300 or the user detected and acquired by each first electronic device 100 can be unified into the whole room coordinate system, and the specific position of the second electronic device 300 or the user in the whole room can be determined.
  • the hub device 200 also notifies or controls the second electronic device 300 according to the received information (including but not limited to location information).
  • the conversion of each coordinate system is also involved, which will be described in detail later.
  • the central device 200 receives the information sent by the first electronic device 100, including the location information of the second electronic device 300 and at least one of the user's location, physiological characteristics, identity category, and human body posture.
  • the central device 200 notifies or controls the second electronic device 300 to execute preset instructions according to the location information of the second electronic device 300 and at least one of the user's location, physiological characteristics, identity category, and human body posture. For example, when a user wakes up a smart speaker by voice, the central device 200 notifies or controls one or more smart speakers closest to the user to be woken up according to the positions of multiple smart speakers in the whole house.
  • the hub device 200 controls the smart speaker in the room where the user leaves to stop playing audio, and controls the smart speaker in the room where the user enters to start playing (for example, continue playing) audio.
  • the central device 200 controls the playback volume of the smart speakers according to the distance between each of the two smart speakers (which respectively play the left-channel audio and the right-channel audio of the same audio) and the user, so that the user receives the left-channel audio and the right-channel audio at the same volume.
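As an illustration, under a simple free-field attenuation model (sound level falling off roughly as 1/d), the gain applied to each channel can be scaled to the user's distance from the corresponding speaker so that both channels arrive at the same level. A minimal sketch — the attenuation model and the names are assumptions, not taken from the application:

```python
import math

def equal_loudness_gains_db(d_left_m, d_right_m):
    """Relative per-channel gains (dB) so that, under a free-field 1/d
    attenuation model, the left and right channels reach the user at the
    same level; the nearer speaker serves as the 0 dB reference."""
    d_ref = min(d_left_m, d_right_m)
    gain_left = 20.0 * math.log10(d_left_m / d_ref)
    gain_right = 20.0 * math.log10(d_right_m / d_ref)
    return gain_left, gain_right
```

For example, if the left speaker is twice as far from the user as the right one, the left channel would be boosted by about 6 dB relative to the right.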
  • when the user watches a video (for example, a video including violent content, etc.) through a smart TV in a room and it is detected that a child user enters the room, the central device 200 controls the smart TV to stop playing the video.
  • the hub device 200 notifies or controls the smart TV to start or stop playing the video according to the user's position relative to the smart TV (such as distance, orientation, etc.).
  • At least one central device 200 is provided in the whole house.
  • the first electronic device in each room or area may send the detected location information of the user and the location information of one or more second electronic devices in the room or area to the central device 200 .
  • the central device 200 obtains the detection data (including but not limited to location information) of each room or area of the whole house, so that it can notify or control the corresponding second electronic device in the corresponding room or area when the preset condition is met.
  • a central device may also be provided in each room or area of the whole house.
  • the first electronic device in each room or area can send the detected location information of the user and the location information of one or more second electronic devices in the room or area to the central device in the room or area.
  • the central device in this room or area then sends the information to the central device 200 of the whole house.
  • the central device 200 of the whole house acquires the detection data of each room or area of the whole house, so that when the preset conditions are met, the central device in the corresponding room or area can be notified or controlled.
  • the central device in the corresponding room or the corresponding area then notifies or controls the corresponding second electronic device.
  • the central equipment of each room or area, and the central equipment of the whole house, can exist independently, or can be integrated with the first electronic equipment and/or the second electronic equipment into one equipment. This application does not limit this.
  • some rooms or areas do not need to be provided with a central device; that is, not all rooms or areas are provided with at least one central device.
  • for example, a restaurant does not need to be provided with a central device.
  • the dining room and the entrance aisle share a central device, or the dining room and the living room share a central device.
  • the central device 200 of the whole house may also assume the function of the central device of a certain room or area.
  • the central device 200 of the whole house may also assume the function of the central device in the living room.
  • a central device is provided for each room or area other than a certain room or area (for example, a living room).
  • when the central device 200 of the whole house communicates with the second electronic device in each room or area other than the above-mentioned certain room or area (such as the living room), it still passes through the central device in that room or area;
  • when the central device 200 of the whole house communicates with the second electronic device in the above-mentioned certain room or area (for example, the living room), it no longer communicates through a central device in that room or area.
  • the communication system further includes a routing device (such as a router).
  • Routing devices are used to connect to local area networks or the Internet, using specific protocols to select and set the path for sending signals.
  • one or more routers are deployed in the whole house to form a local area network, or access the local area network or the Internet.
  • the second electronic device 300 or the first electronic device 100 is connected to the router, and performs data transmission with devices in the local area network or devices in the Internet through the Wi-Fi channel established by the router.
  • the hub device 200 and the routing device may be integrated into one device.
  • the hub device 200 and the routing device are integrated into a routing device, that is, the routing device has the functions of the hub device 200 .
  • the routing device may be one or more routing devices in the parent-child routing device, or may be an independent routing device.
  • the communication system further includes a gateway (Gateway).
  • Gateway is also called internet connector and protocol converter.
  • the gateway is used to forward the information of the first electronic device 100 to the routing device or the hub device 200 .
  • the functions of the hub device 200 may be implemented by a gateway.
  • the communication system further includes a server (eg, a cloud server).
  • the hub device 200, the routing device or the gateway may send the received information from the first electronic device 100 to the server. Further, the central device 200, the routing device or the gateway may also send, to the server, the control information sent by the central device 200 to the second electronic device 300. Furthermore, the hub device 200, the routing device or the gateway may also upload various information generated during the operation of the second electronic device 300 to the server for the user to view.
  • the communication system further includes one or more input devices (for example, the input device is a control panel).
  • the control panel displays a human-computer interaction interface of the communication system.
  • the user can view the information of the communication system (for example, the connection information of each device in the communication system), the operation information of the second electronic device 300 and/or the control information of the second electronic device 300 by the central device 200 on the human-computer interaction interface. Users can also input control commands on the human-computer interaction interface by clicking on the screen or voice to control the devices in the communication system.
  • FIG. 2 shows a schematic structural diagram of a first electronic device 100 .
  • the first electronic device 100 may include a processor 110 , a memory 120 , a power management module 130 , a power supply 131 , a wireless communication module 140 , a UWB module 150 , a millimeter wave radar module 160 and so on.
  • the structure shown in FIG. 2 does not constitute a specific limitation on the first electronic device 100 .
  • the first electronic device 100 may include more or fewer components than shown in the illustration, or combine some components, or separate some components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units, and different processing units may be independent devices or integrated into one or more processors.
  • the processor 110 may be a central processing unit (central processing unit, CPU), an application-specific integrated circuit (application specific integrated circuit, ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, for example: one or more microprocessors (digital signal processor, DSP), or one or more field-programmable gate arrays (field programmable gate array, FPGA).
  • the memory 120 may be used to store computer-executable program code, which includes instructions.
  • the memory 120 may also store data processed by the processor 110 .
  • the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the processor 110 executes various functional applications and data processing of the first electronic device 100 by executing instructions stored in the memory 120 and/or instructions stored in the memory provided in the processor.
  • the power management module 130 is configured to receive an input from a power source 131 .
  • the power source 131 may be a battery or a commercial power.
  • the power management module 130 receives power from the battery and/or commercial power, and supplies power to various components of the first electronic device 100, such as the processor 110, the memory 120, the wireless communication module 140, the UWB module 150, and the millimeter wave radar module 160.
  • the wireless communication module 140 can provide solutions for wireless communication applied on the first electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and ZigBee.
  • the wireless communication module 140 can also receive the signal to be sent from the processor 110, frequency-modulate and amplify it, and convert it into an electromagnetic wave radiated through the antenna. It should be noted that the numbers of antennas of the wireless communication module 140, the UWB module 150, and the millimeter wave radar module 160 in FIG. 2 are only illustrative. It can be understood that the wireless communication module 140, the UWB module 150, and the millimeter wave radar module 160 may include more or fewer antennas, which is not limited in this embodiment of the present application.
  • the UWB module 150 may provide a wireless communication solution based on UWB technology applied on the first electronic device 100 .
  • the UWB module 150 is used to implement the above-mentioned functions of the UWB base station.
  • the UWB base station can locate the UWB tag.
  • the UWB base station can detect the UWB signal and, combined with positioning algorithms, calculate the duration for which the UWB signal flies in the air; multiplying this duration by the propagation speed of the UWB signal in the air (such as the speed of light) yields the distance between the UWB tag and the UWB base station.
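The ranging step above can be sketched as follows. The function name and timestamps are illustrative assumptions; a real UWB base station derives the transmit and receive instants from its own timing-start indication and pulse-arrival detection:

```python
# Sketch of the time-of-flight (ToF) ranging described above.
SPEED_OF_LIGHT = 299_792_458.0  # propagation speed of the UWB signal in air, m/s

def tof_distance(t_transmit_s: float, t_receive_s: float) -> float:
    """Distance = flight duration x propagation speed."""
    tof = t_receive_s - t_transmit_s  # time the signal spent in the air
    return tof * SPEED_OF_LIGHT

# Example: a flight time of 10 nanoseconds corresponds to roughly 3 m.
d = tof_distance(0.0, 10e-9)
```

In practice, two-way ranging divides the measured round-trip time by two before applying this relation, to cancel clock offset between tag and base station.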
  • the UWB base station can also determine the direction of the UWB tag relative to the UWB base station (that is, the direction of the signal of the UWB tag) according to the phase difference of the UWB signal sent by the UWB tag arriving at different antennas of the UWB base station.
  • the signal direction includes horizontal direction and vertical direction.
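The direction finding just described follows from the standard phase-difference relation for a two-antenna pair. The sketch below assumes the λ/2 spacing used elsewhere in this application and treats the phase difference as already measured; the function name is illustrative:

```python
import math

def arrival_angle(phase_diff_rad: float, spacing_over_lambda: float = 0.5) -> float:
    """Angle of arrival (radians from broadside) from the phase difference of the
    same UWB signal at two antennas spaced d = spacing_over_lambda * wavelength.
    delta_phi = 2*pi*d*sin(theta)/lambda  =>  theta = asin(delta_phi*lambda/(2*pi*d))."""
    sin_theta = phase_diff_rad / (2 * math.pi * spacing_over_lambda)
    # clamp against small measurement noise before asin
    return math.asin(max(-1.0, min(1.0, sin_theta)))

# A signal 30 degrees off broadside at lambda/2 spacing produces delta_phi = pi/2.
theta_deg = math.degrees(arrival_angle(math.pi / 2))
```

A horizontal antenna pair measures the horizontal component of the direction this way, and a vertical pair measures the vertical component.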
  • FIG. 3 shows a schematic structural diagram of a central device 200 .
  • the central device 200 may include a processor 210 , a memory 220 , a power management module 230 , a power supply 231 , a wireless communication module 240 and the like.
  • the structure shown in FIG. 3 does not constitute a specific limitation on the central device 200 .
  • the hub device 200 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 210 may include one or more processing units, and different processing units may be independent devices or integrated into one or more processors.
  • the processor 210 may be a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, for example: one or more digital signal processors (DSP), or one or more field programmable gate arrays (FPGA).
  • Memory 220 may be used to store computer-executable program code, which includes instructions.
  • the memory 220 may also store data processed by the processor 210 .
  • the memory 220 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
  • the processor 210 executes various functional applications and data processing of the hub device 200 by executing instructions stored in the memory 220 and/or instructions stored in the memory provided in the processor.
  • the power management module 230 is configured to receive an input from a power source 231 .
  • the power source 231 may be a battery or a commercial power.
  • the power management module 230 receives power from the battery and/or commercial power, and supplies power to various components of the central device 200, such as the processor 210, the memory 220, the wireless communication module 240, and the like.
  • the wireless communication module 240 can provide solutions for wireless communication applied on the central device 200, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and ZigBee.
  • the wireless communication module 240 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 240 receives electromagnetic waves through the antenna, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 210 .
  • the wireless communication module 240 can also receive the signal to be sent from the processor 210, frequency-modulate it, amplify it, and convert it into electromagnetic wave to radiate through the antenna.
  • FIG. 4 shows a schematic structural diagram of a second electronic device 300 .
  • the second electronic device 300 may include a processor 310, a memory 320, a universal serial bus (universal serial bus, USB) interface 330, a power module 340, a UWB module 350, a wireless communication module 360, and the like.
  • the second electronic device 300 may further include an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, a display screen 380, a sensor module 390, and the like.
  • the structure shown in FIG. 4 does not constitute a specific limitation on the second electronic device 300 .
  • the second electronic device 300 may include more or fewer components than shown in the illustration, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the interface connection relationship among the modules shown in FIG. 4 is only for schematic illustration, and does not constitute a structural limitation of the second electronic device 300 .
  • the second electronic device 300 may also adopt an interface connection manner different from that in FIG. 4 , or a combination of multiple interface connection manners.
  • the processor 310 may include one or more processing units, for example: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video codec, a digital signal processor (DSP), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • Memory 320 may be used to store computer-executable program code, which includes instructions.
  • the memory 320 may also store data processed by the processor 310 .
  • the memory 320 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
  • the processor 310 executes various functional applications and data processing of the second electronic device 300 by executing instructions stored in the memory 320 and/or instructions stored in the memory provided in the processor.
  • the USB interface 330 is an interface conforming to the USB standard specification, specifically, it may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 330 can be used to connect a charger to charge the second electronic device 300 , and can also be used to transmit data between the second electronic device 300 and peripheral devices.
  • the power supply module 340 is used for supplying power to various components of the second electronic device 300, such as the processor 310, the memory 320, and the like.
  • the UWB module 350 can provide a wireless communication solution based on UWB technology applied on the second electronic device 300 .
  • the UWB module 350 is used to implement the above-mentioned functions of the UWB tag.
  • the wireless communication module 360 can provide solutions for wireless communication applied on the second electronic device 300, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and ZigBee.
  • the wireless communication module 360 can also receive the signal to be transmitted from the processor 310, frequency-modulate and amplify it, and convert it into an electromagnetic wave radiated through the antenna.
  • the wireless communication module 360 may be integrated with the UWB module 350 or set separately, which is not limited in this application.
  • the second electronic device 300 can realize audio functions, such as audio playback and recording, through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the earphone interface 370D, and the application processor.
  • the audio module 370 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 370 may also be used to encode and decode audio signals.
  • the audio module 370 can be set in the processor 310 , or some functional modules of the audio module 370 can be set in the processor 310 .
  • the speaker 370A, also called a "horn", is used to convert audio electrical signals into sound signals.
  • the second electronic device 300 may play audio through the speaker 370A.
  • the receiver 370B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the microphone 370C, also called a "mic", is used to convert sound signals into electrical signals. The user can input a sound signal into the microphone 370C by speaking close to it.
  • the earphone interface 370D is used to connect wired earphones.
  • the earphone interface 370D may be a USB interface 330, or a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the display screen 380 is used to display images, videos and the like.
  • the display screen 380 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the sensor module 390 includes an inertial measurement unit (inertial measurement unit, IMU) module and the like.
  • IMU modules can include gyroscopes, accelerometers, etc.
  • the gyroscope and the accelerometer can be used to determine the motion posture of the second electronic device 300 .
  • the angular velocity of the second electronic device 300 around three axes can be determined by a gyroscope.
  • the accelerometer can be used to detect the acceleration of the second electronic device 300 in various directions (generally three axes). When the second electronic device 300 is stationary, the magnitude and direction of gravity can be detected.
  • the device attitude of the second electronic device 300 may be acquired according to the angular velocity and acceleration measured by the IMU module.
  • some second electronic devices may include an IMU module, and some second electronic devices do not include an IMU module.
  • the second electronic device 300 further includes a filter (for example, a Kalman filter).
  • the output of the IMU module and the output of the UWB module 350 can be superimposed, and the superimposed signal can be input to a Kalman filter for filtering, thereby reducing errors.
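As a rough illustration of that fusion, the following scalar Kalman-filter sketch combines an IMU-integrated displacement with a UWB position measurement along one axis. The class name and all noise values are illustrative assumptions, not parameters from this application:

```python
# Minimal scalar Kalman filter: predict with IMU motion, correct with UWB.
class ScalarKalman:
    def __init__(self, x0: float, p0: float, q: float, r: float):
        self.x = x0   # state estimate (position along one axis, m)
        self.p = p0   # estimate variance
        self.q = q    # process noise variance (IMU drift)
        self.r = r    # measurement noise variance (UWB ranging error)

    def predict(self, dx_imu: float) -> None:
        """Propagate the state with the displacement integrated from the IMU."""
        self.x += dx_imu
        self.p += self.q

    def update(self, z_uwb: float) -> float:
        """Correct the prediction with a UWB position measurement."""
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z_uwb - self.x)
        self.p *= (1 - k)
        return self.x

kf = ScalarKalman(x0=0.0, p0=1.0, q=0.01, r=0.04)
kf.predict(dx_imu=0.10)       # IMU says we moved ~10 cm
est = kf.update(z_uwb=0.12)   # UWB measures 12 cm; estimate lands in between
```

The corrected estimate falls between the two inputs, weighted by their variances, which is how the filter reduces the error of either source alone.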
  • FIG. 5A shows the structure of the UWB module in the first electronic device provided by the embodiment of the present application.
  • the UWB module 150 includes a transmitter 1501 and a receiver 1502 .
  • Transmitter 1501 and receiver 1502 can operate independently.
  • the transmitter 1501 includes a data signal generator, a pulse generator, a modulator, a digital-to-analog converter, a power amplifier, and a transmitting antenna.
  • the data signal generator is used to generate data signals, and is also used to send timing start instruction information to the receiver 1502 when the data signal generation starts.
  • the pulse generator is used to generate periodic pulse signals.
  • Digital-to-analog converters are used to convert digital signals into analog signals.
  • the data signal to be sent is modulated by the modulator to the pulse signal generated by the pulse generator, and after being amplified by the power amplifier, the UWB signal is transmitted through the transmitting antenna.
  • the receiver 1502 includes a receiving antenna, a mixer, a filter, a sampling module, a first processing module, and so on. Any receiving antenna receives a UWB signal (for example, in the form of a pulse sequence); the received UWB signal is mixed by the mixer, filtered and amplified by the filter, and converted from analog to digital by the sampling module to obtain a baseband digital signal.
  • the first processing module is used to process the baseband digital signal to realize the detection of the UWB signal.
  • the first processing module calculates the time of flight (ToF) of the UWB signal according to the timing start indication information and the moment when the pulse sequence is received, and calculates the distance between the first electronic device 100 and a second electronic device 300 including the UWB module 350 according to the ToF and the propagation speed of the UWB signal in the air (such as the speed of light).
  • the first processing module calculates the signal direction of the second electronic device 300 including the UWB module 350 according to the phase difference of the pulse sequences received by the multiple receiving antennas.
  • the pulse sequence received by each receiving antenna in FIG. 5A passes through a set of power amplifiers, mixers, filters and sampling modules, which represents the processing flow of the pulse sequence.
  • the receiver 1502 may only include a set of power amplifiers, mixers, filters and sampling modules.
  • one antenna can realize the functions of one transmitting antenna in the transmitter 1501 and one receiving antenna in the receiver 1502, that is, the transmitting antenna and the receiving antenna can be integrated into the same antenna.
  • FIG. 5B shows the structure of the millimeter wave radar module in the first electronic device provided by the embodiment of the present application.
  • the millimeter wave radar module 160 includes a transmitting antenna, a relay switch, a receiving antenna, a waveform generator, a mixer, a filter, a second processing module and the like.
  • the waveform generator is used to generate a transmission signal, for example, the transmission signal is a linear frequency-modulation continuous wave (LFMCW).
  • n and m are positive integers greater than or equal to 1.
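For illustration, the LFMCW transmit signal produced by the waveform generator can be modeled as a linear chirp whose instantaneous frequency sweeps from a start frequency across a bandwidth. The function name and the sample parameters below are scaled-down assumptions (real millimeter wave chirps sweep gigahertz bandwidths in microseconds):

```python
import math

def lfmcw_chirp(f_start_hz: float, bandwidth_hz: float,
                duration_s: float, sample_rate_hz: float) -> list:
    """Sample one period of a linear frequency-modulated continuous wave."""
    n = int(duration_s * sample_rate_hz)
    slope = bandwidth_hz / duration_s        # chirp slope, Hz per second
    samples = []
    for i in range(n):
        t = i / sample_rate_hz
        # instantaneous phase = 2*pi * (f_start*t + 0.5*slope*t^2)
        phase = 2 * math.pi * (f_start_hz * t + 0.5 * slope * t * t)
        samples.append(math.cos(phase))
    return samples

# Scaled-down example: 1 kHz start, 4 kHz sweep over 10 ms at 100 kHz sampling.
chirp = lfmcw_chirp(f_start_hz=1_000, bandwidth_hz=4_000,
                    duration_s=0.01, sample_rate_hz=100_000)
```

Mixing the received echo of such a chirp with the transmitted copy yields the difference-frequency (beat) signal the radar processes.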
  • Fig. 6 shows the structure of the UWB module in the second electronic device.
  • the UWB module 350 includes a transmitter 3501 and a receiver 3502 .
  • the transmitter 3501 and receiver 3502 can operate independently.
  • the transmitter 3501 includes a data signal generator, a pulse generator, a modulator, a digital-to-analog converter, a power amplifier, and a transmitting antenna.
  • the data signal generator is used for generating data signals.
  • the pulse generator is used to generate periodic pulse signals.
  • Digital-to-analog converters are used to convert digital signals into analog signals.
  • the data signal to be sent is modulated by the modulator to the pulse signal generated by the pulse generator, and after being amplified by the power amplifier, the UWB signal is transmitted through the transmitting antenna.
  • Receiver 3502 includes receiving antenna, mixer, filter, sampling module, processing module and so on.
  • the receiving antenna receives the UWB signal (for example, in the form of a pulse sequence), and after the received UWB signal is mixed by a mixer, filtered and amplified by a filter, the analog-to-digital conversion is performed by the sampling module to obtain a baseband digital signal.
  • the processing module is used to process the baseband digital signal to realize the detection of the UWB signal.
  • the transmitting antenna in the transmitter 3501 and the receiving antenna in the receiver 3502 may be integrated into the same antenna.
  • FIG. 7 shows several antenna distributions of the UWB module in the first electronic device provided by the embodiment of the present application.
  • (a) of FIG. 7 exemplarily shows two two-antenna structures.
  • one is a lateral (for example, horizontal) antenna structure;
  • the other is a longitudinal (for example, vertical) antenna structure.
  • the distance between antenna 0 and antenna 1 is ⁇ /2, where ⁇ is the wavelength of the UWB signal.
  • the lateral antenna structure can be used to measure the lateral direction of UWB signals (for example, the horizontal direction);
  • the longitudinal antenna structure can be used to measure the longitudinal direction of UWB signals (eg, vertical direction).
  • the first electronic device on the left and the first electronic device on the right as shown in (a) of FIG. 7 can cooperate to detect the incoming signal of the second electronic device including the UWB module.
  • (b) of FIG. 7 and (c) of FIG. 7 exemplarily show three-antenna structures.
  • the three antennas present an L-shaped (or called a right-angled triangle) structural relationship.
  • in one case, antenna 0 and antenna 1 are aligned in the lateral direction (for example, the horizontal direction), and antenna 0 and antenna 2 are aligned in the longitudinal direction (for example, the vertical direction); that is, the plane where antenna 0, antenna 1, and antenna 2 are located is a longitudinal plane (for example, a vertical plane), and the three antennas present an L-shaped distribution on the longitudinal plane.
  • in another case, the plane where antenna 0, antenna 1, and antenna 2 are located is a lateral plane (for example, a horizontal plane), and the connection line between antenna 0 and antenna 1 (assuming that there is a connection line between antenna 0 and antenna 1) is perpendicular to the connection line between antenna 0 and antenna 2 (assuming that there is a connection line between antenna 0 and antenna 2); that is, antenna 0, antenna 1, and antenna 2 present an L-shaped distribution on the lateral plane.
  • the distance between Antenna 0 and Antenna 1, and between Antenna 0 and Antenna 2 may be less than or equal to ⁇ /2; where ⁇ is a UWB signal wavelength.
  • the distance between antenna 0 and antenna 1, and the distance between antenna 0 and antenna 2 may be the same or different.
  • FIG. 7 exemplarily shows some other three-antenna structures.
  • the three antennas are in a triangular (eg, equilateral triangle, isosceles triangle) structural relationship.
  • the plane where the antenna 0, the antenna 1, and the antenna 2 are located is a longitudinal plane (for example, a vertical plane), and a triangular distribution appears on the longitudinal plane.
  • antenna 0 , antenna 1 , and antenna 2 are distributed in a triangle on a lateral plane (for example, a horizontal plane).
  • the distance between any two antennas among Antenna 0, Antenna 1, and Antenna 2 can be less than or equal to ⁇ /2; where ⁇ is the UWB signal wavelength.
  • the distance between any two antennas among antenna 0, antenna 1 and antenna 2 may be the same or different. For example, the distance between antenna 0 and antenna 1 is ⁇ /2; the distance between antenna 0 and antenna 2 is
  • other antenna distributions are also within the scope of the present application.
  • four antennas, antenna 0 , antenna 1 , antenna 2 and antenna 3 are distributed in a rectangular shape.
  • any three antennas among the four antennas are distributed in an L-shape or a triangle as described above.
  • the first electronic device 100 obtains the lateral direction of the UWB signal according to the phase difference with which the UWB signal from the second electronic device 300 arrives at the two laterally distributed antennas of the UWB module 150, and obtains the longitudinal direction of the UWB signal according to the phase difference with which it arrives at the two longitudinally distributed antennas of the UWB module 150. Furthermore, the first electronic device 100 acquires the direction of the UWB signal according to the lateral direction and the longitudinal direction.
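Combining the two phase differences into a full direction, as described above, might look like the following sketch. It assumes λ/2 antenna spacing and measures both angles from broadside; the function name is illustrative:

```python
import math

def signal_direction(dphi_lateral: float, dphi_longitudinal: float,
                     spacing_over_lambda: float = 0.5):
    """Turn the phase differences at the lateral pair and the longitudinal pair
    into a (lateral angle, longitudinal angle) pair, in degrees from broadside."""
    def angle(dphi: float) -> float:
        s = dphi / (2 * math.pi * spacing_over_lambda)
        return math.degrees(math.asin(max(-1.0, min(1.0, s))))
    return angle(dphi_lateral), angle(dphi_longitudinal)

# A source 30 degrees off to the side and level with the antenna plane:
az, el = signal_direction(math.pi / 2, 0.0)
```

Together with the ToF distance, these two angles fix the tag's position relative to the base station in spherical coordinates.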
  • the UWB module 150 of the first electronic device 100 may only include one antenna. At this time, more than three first electronic devices 100 need to be used, and more than three first electronic devices 100 are distributed in an L shape or a triangle, and cooperate together to obtain the direction of the UWB signal. The specific principle is similar to the above, and will not be repeated here.
  • the embodiment of the present application does not limit the number and distribution of antennas in the UWB module of the first electronic device 100, as long as the direction of the UWB signal can be obtained.
  • FIG. 8 shows several antenna distributions of the millimeter wave radar module in the first electronic device provided by the embodiment of the present application.
  • the transmitting antennas include transmitting antenna 0, transmitting antenna 1, and transmitting antenna 2.
  • the receiving antennas include receiving antenna 0 , receiving antenna 1 , receiving antenna 2 and receiving antenna 3 .
  • the distribution of transmitting antenna 0, transmitting antenna 1, and transmitting antenna 2, and receiving antenna 0, receiving antenna 1, receiving antenna 2, and receiving antenna 3 may be as shown in (a) or (b) of FIG. 8 .
  • the transmitting antenna is used to transmit the electromagnetic signal working in the millimeter wave band (such as LFMCW), and the receiving antenna is used to receive the electromagnetic signal working in the millimeter wave band reflected by a reflector (object or human body).
  • the millimeter wave radar module 160 obtains a difference frequency signal according to the transmitted signal and the received signal, and determines the position of an object or a human body according to the difference frequency signal.
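The range computation from the difference (beat) frequency can be sketched as follows; the function name and the bandwidth, chirp duration, and beat frequency are illustrative values, not figures from this application:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def fmcw_range(beat_freq_hz: float, bandwidth_hz: float, chirp_duration_s: float) -> float:
    """Target range from the beat (difference) frequency of an LFMCW radar.
    The echo lags the transmitted chirp by the round-trip time, so
    f_beat = 2 * R * S / c with chirp slope S = B / T, hence R = f_beat * c / (2 * S)."""
    slope = bandwidth_hz / chirp_duration_s  # Hz per second
    return beat_freq_hz * SPEED_OF_LIGHT / (2 * slope)

# Illustrative numbers: a 4 GHz sweep over 40 microseconds gives slope 1e14 Hz/s;
# a 1 MHz beat frequency then corresponds to a target at about 1.5 m.
r = fmcw_range(beat_freq_hz=1e6, bandwidth_hz=4e9, chirp_duration_s=40e-6)
```

The phase of the same beat signal across the antenna array then gives the angles described in the following paragraphs.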
  • the three transmitting antennas and the four receiving antennas are located on the same longitudinal plane (for example, a vertical plane), and the three transmitting antennas are distributed in a triangle on the longitudinal plane.
  • the transmitting antenna 0 and the transmitting antenna 2 are located on the same lateral line (for example, a horizontal line), and the four receiving antennas are located on the same lateral line (for example, a horizontal line).
  • the distances between any two adjacent receiving antennas are equal (for example, all are λL/2); the distance between the transmitting antenna 0 and the transmitting antenna 2 is, for example, 2λL; and the longitudinal distances between the transmitting antenna 1 and the transmitting antenna 0, and between the transmitting antenna 1 and the transmitting antenna 2, are equal (for example, both are λL/2).
  • λL is the wavelength corresponding to the highest frequency of the linear frequency-modulated continuous wave.
  • alternatively, the transmitting antenna 0 and the transmitting antenna 2 are located on the same longitudinal line (for example, a vertical line), and the four receiving antennas are located on the same longitudinal line (for example, a vertical line).
  • the distances between any two adjacent receiving antennas are equal (for example, all are λL/2); the distance between the transmitting antenna 0 and the transmitting antenna 2 is, for example, 2λL; and the lateral distances between the transmitting antenna 1 and the transmitting antenna 0, and between the transmitting antenna 1 and the transmitting antenna 2, are equal (for example, both are λL/2). It can be understood that the number and distribution of transmitting antennas and/or receiving antennas may take other forms. The embodiment of the present application does not limit this.
  • the millimeter wave radar module 160 can calculate the lateral direction (for example, the horizontal direction) of the target according to the phase differences with which the reflected signal arrives at the multiple laterally distributed receiving antennas, and calculate the longitudinal direction of the target according to the phase differences with which it arrives at the longitudinally distributed receiving antennas.
  • the number of transmitting antennas may be more or less than three.
  • the number of receiving antennas may be more than four or less than four. This application does not limit this. In an implementation manner, there is at least one transmitting antenna, and at least three receiving antennas.
  • the number of transmitting antennas is one, and the number of receiving antennas is three.
  • three receiving antennas—receiving antenna 0, receiving antenna 1 and receiving antenna 2 are distributed in a triangle.
  • the connection line between receiving antenna 0 and receiving antenna 1 is located in the lateral direction (for example, the horizontal direction),
  • and the connection line between receiving antenna 0 and receiving antenna 2 is located in the longitudinal direction (for example, the vertical direction). In this way, after the transmitting signal of the transmitting antenna is reflected by the reflector (object or human body), the three receiving antennas respectively receive the reflected signal.
  • the millimeter wave radar module 160 can obtain the lateral direction (for example, the horizontal direction) of the reflected signal according to the phase difference between the reflected signals received by receiving antenna 0 and receiving antenna 1, and obtain the longitudinal direction (for example, the vertical direction) of the reflected signal according to the phase difference between the reflected signals received by receiving antenna 0 and receiving antenna 2. Furthermore, the direction of the reflected signal can be determined according to the lateral direction and the longitudinal direction.
  • in another implementation manner, the number of transmitting antennas is at least two, and the number of receiving antennas is at least two.
  • take two transmitting antennas (transmitting antenna 0 and transmitting antenna 1) and two receiving antennas (receiving antenna 0 and receiving antenna 1) as an example.
  • the connecting line (actually no connecting line) between the transmitting antenna 0 and the transmitting antenna 1 is located in the horizontal direction
  • the connecting line (actually no connecting line) between the receiving antenna 0 and the receiving antenna 1 is located in the longitudinal direction.
  • the transmitting signals of the transmitting antenna 0 and the transmitting antenna 1 are respectively reflected by the reflector (object or human body), at least one receiving antenna receives the reflected signal.
  • the millimeter wave radar module 160 can calculate the lateral direction (for example, the horizontal direction) of the reflected signal (also called an echo signal here) according to the phase difference between the signals transmitted by transmitting antenna 0 and transmitting antenna 1 arriving at the same receiving antenna. After the transmitting signal is reflected by the reflector (object or human body), the two receiving antennas respectively receive the reflected signal; according to the phase difference between the reflected signals received by receiving antenna 0 and receiving antenna 1, the longitudinal direction (for example, the vertical direction) of the reflected signal is obtained. Furthermore, the direction of the reflected signal can be determined according to the lateral direction and the longitudinal direction.
  • in yet another implementation manner, the number of transmitting antennas is at least two, and the number of receiving antennas is at least two.
  • take two transmitting antennas (transmitting antenna 0 and transmitting antenna 1) and two receiving antennas (receiving antenna 0 and receiving antenna 1) as an example.
  • the connecting line (actually no connecting line) between transmitting antenna 0 and transmitting antenna 1 is located in the longitudinal direction
  • the connecting line (actually no connecting line) between receiving antenna 0 and receiving antenna 1 is located in the lateral direction.
  • after the transmitting signals of the two transmitting antennas are respectively reflected by the reflector (an object or a human body), at least one receiving antenna receives the reflected signals.
  • the millimeter-wave radar module 160 can calculate the longitudinal arrival direction (for example, the vertical arrival direction) of the reflected signal (which in this case may also be called the incoming signal) according to the phase difference between the signals transmitted by transmitting antenna 0 and transmitting antenna 1 that arrive at the same receiving antenna; according to the phase difference between the reflected signals received by receiving antenna 0 and receiving antenna 1, the lateral arrival direction (for example, the horizontal arrival direction) of the reflected signal is acquired. Furthermore, the direction of the reflected signal can be determined according to the horizontal arrival direction and the vertical arrival direction.
  • the number of transmitting antennas is at least three, and the number of receiving antennas is at least one.
  • the number of receiving antennas is at least one.
  • transmitting antenna 0, transmitting antenna 1 and transmitting antenna 2 are distributed in a triangle. Assume that the connection line between transmitting antenna 0 and transmitting antenna 1 (actually there is no connection line) is located in the lateral direction (for example, the horizontal direction), and the connection line between transmitting antenna 0 and transmitting antenna 2 (actually there is no connection line) is located in the longitudinal direction (for example, the vertical direction).
  • the receiving antenna 0 receives the reflected signal.
  • the millimeter-wave radar module 160 can calculate the lateral arrival direction (for example, the horizontal arrival direction) of the reflected signal (which in this case may also be called the incoming signal) according to the phase difference between the signals transmitted by transmitting antenna 0 and transmitting antenna 1 that arrive at the same receiving antenna; according to the phase difference between the signals transmitted by transmitting antenna 0 and transmitting antenna 2 that arrive at the same receiving antenna, the longitudinal arrival direction (for example, the vertical arrival direction) of the reflected signal is calculated.
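The phase-difference geometry used above can be sketched in a few lines. This is only an illustrative sketch, not the patent's implementation: the function name `arrival_angle`, the 77 GHz wavelength, and the antenna spacing are assumptions. It uses the standard relation that a phase difference Δφ between two antennas spaced `spacing` apart corresponds to a path difference d = Δφ·λ/(2π), and the angle θ between the incoming signal and the antenna baseline satisfies cos θ = d/spacing.

```python
import math

def arrival_angle(phase_diff_rad, spacing, wavelength):
    """Estimate the angle between the incoming signal and the antenna
    baseline from the phase difference measured at two antennas.

    Path difference d = phase_diff / (2*pi) * wavelength, and
    cos(theta) = d / spacing, so theta = acos(d / spacing).
    """
    d = phase_diff_rad / (2 * math.pi) * wavelength
    ratio = max(-1.0, min(1.0, d / spacing))  # clamp measurement noise
    return math.acos(ratio)

# With spacing = wavelength / 2, a phase difference of 0 means the
# signal arrives broadside (90 degrees to the baseline).
wavelength = 3.9e-3  # roughly a 77 GHz mmWave signal, assumed for illustration
theta = arrival_angle(0.0, wavelength / 2, wavelength)
print(round(math.degrees(theta), 6))  # 90.0
```

Running one such estimate per antenna pair (the lateral pair and the longitudinal pair) yields the horizontal and vertical arrival directions described in the text.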
  • the following takes the first electronic device 100 including the UWB module 150 and the millimeter wave radar module 160 as an example to introduce specific positioning principles in detail.
  • positioning means acquiring a location.
  • the position is represented by coordinates in a coordinate system.
  • the coordinates of the first electronic device represent the position of the first electronic device
  • the coordinates of the second electronic device represent the position of the second electronic device
  • the coordinates of the user represent the position of the user.
  • the location may be represented in other ways. This embodiment of the present application does not limit it.
  • the projection of the antenna 2 on the Z e axis is located on the positive direction of the Z e axis.
  • the direction of the Y e axis is determined based on the rule of the right-handed rectangular coordinate system.
  • the right-handed rectangular coordinate system can be referred to simply as the right-handed coordinate system, which is one of the methods for specifying the rectangular coordinate system in space.
  • the positive directions of the X e axis, the Y e axis and the Z e axis in the right-handed rectangular coordinate system are specified as follows: put the right hand at the position of the origin, and make the thumb, index finger and middle finger at right angles to each other, with the thumb and index finger in the same plane; when the thumb points to the positive direction of the X e axis and the middle finger points to the positive direction of the Z e axis, the direction pointed by the index finger is the positive direction of the Y e axis.
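The hand rule above has a simple numerical counterpart: in a right-handed system, the cross product of the X and Y unit vectors gives the Z unit vector. A minimal sketch (the function name is mine, not from the source):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

x_axis = (1.0, 0.0, 0.0)
y_axis = (0.0, 1.0, 0.0)
z_axis = (0.0, 0.0, 1.0)

# In a right-handed system, X cross Y yields Z; in a left-handed
# system the same product would point along the negative Z axis.
print(cross(x_axis, y_axis) == z_axis)  # True
```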
  • the Z e axis can be set on a vertical plane, and the positive direction of the Z e axis is opposite to the direction of gravity.
  • the outer surface of the first electronic device 100 may be marked with prompt information, which is used to prompt the correct installation or placement method, so that the Z e axis of the first coordinate system is located on the vertical plane and the positive direction of the Z e axis is opposite to the direction of gravity. Exemplarily, as shown in FIG. 9(a) or FIG. 9(b),
  • an arrow is marked on the outer surface of the UWB base station, which is used to prompt to install or place the first electronic device 100 in the direction indicated by the arrow (the direction of the arrow is upward).
  • the Z e axis of the first coordinate system is located on the vertical plane, and the positive direction of the Z e axis is opposite to the direction of gravity.
  • the arrow on the outer surface of the first electronic device is parallel to the wall, and the direction of the arrow is upward, so that the Z e axis of the first coordinate system is on the vertical plane, and the Z e The positive direction of the axis is opposite to the direction of gravity.
  • when installing the first electronic device, the user can use an instrument such as a plumb meter to make the arrow on the outer surface of the first electronic device parallel to the plumb line determined by the plumb meter, with the direction of the arrow upward, so that the Z e axis of the first coordinate system is located in the vertical plane and the positive direction of the Z e axis is opposite to the direction of gravity.
  • the first electronic device 100 may include only one UWB module, and the UWB module 150 may include only one antenna.
  • three first electronic devices 100 need to cooperate with each other to establish the first coordinate system.
  • for details, please refer to the Chinese invention patent application with application number 202110872916.6; they are not repeated here. It should be pointed out that the entire content of the Chinese invention patent application with application number 202110872916.6 is incorporated into this application and is within the scope of this application.
  • the positive direction of the Y e axis is determined based on the rule of the left-handed Cartesian coordinate system.
  • the left-handed rectangular coordinate system can be referred to simply as the left-handed system, which is one of the methods for specifying the rectangular coordinate system in space.
  • the positive directions of the X e axis, Y e axis and Z e axis in the left-handed rectangular coordinate system are specified as follows: put the left hand at the origin, and make the thumb, index finger and middle finger at right angles to each other, with the thumb and index finger in the same plane; when the thumb points to the positive direction of the X e axis and the middle finger points to the positive direction of the Z e axis, the direction the index finger points to is the positive direction of the Y e axis.
  • the Y e axis, Y b axis, Y t axis and their positive directions are determined according to the rule of the right-handed rectangular coordinate system. It should be understood by those skilled in the art that determining the Y e axis, Y b axis, Y t axis and their positive directions by the rule of the left-handed rectangular coordinate system or in other ways also falls within the scope of this application.
  • the first coordinate system may be established automatically after receiving a specific input, or may be pre-established.
  • the first electronic device 100 when the first electronic device 100 receives a specific input, the first electronic device 100 automatically establishes the first coordinate system.
  • the specific input may be a user input, and may also be a non-user input (for example, receiving an instruction message from another device such as a remote controller).
  • after the first electronic device 100 is installed according to the requirements of the prompt information, when the first electronic device 100 receives a specific input, the first electronic device 100 automatically retrieves relevant information from local storage or from the server, thereby invoking a pre-established first coordinate system.
  • the server in this application may be a home central device or a cloud server.
  • a point on antenna 0 is taken as the origin of the first coordinate system. This is only exemplary, and a point on other antennas (for example, antenna 1) may also be the origin of the first coordinate system.
  • FIG. 10 shows the establishment process of the second coordinate system provided by the embodiment of the present application.
  • the edge profile of the second electronic device includes four sides: two vertical sides and two horizontal sides.
  • O b is the center of gravity or the center of the second electronic device. The axis that includes the point O b and is parallel to the lateral side of the second electronic device is the X b axis, and the positive direction of the X b axis points to the right side of the second electronic device. The axis that includes the point O b and is parallel to the vertical side of the second electronic device is the Y b axis, and the positive direction of the Y b axis points to the upper side of the second electronic device. The Z b axis is perpendicular to the plane where the X b axis and the Y b axis are located, and the positive direction of the Z b axis is determined according to the rule of the right-handed rectangular coordinate system.
  • O b may be the center of the second electronic device, or O b may be the center of the IMU module of the second electronic device (on the premise that the second electronic device includes an IMU module).
  • (b) of FIG. 10 is a perspective view of the second electronic device in (a) of FIG. 10 .
  • FIG. 10 only schematically introduces the second coordinate system.
  • the second coordinate system can also be defined according to other rules.
  • the coordinate origin O b may also be any point on the second electronic device, or any point outside the second electronic device.
  • the three-axis directions of the second coordinate system are not limited to the positive directions of the X b axis, the Y b axis, and the Z b axis shown in (a) or (b) of FIG. 10 .
  • the second coordinate system can be established in advance. For example, when the second electronic device leaves the factory, it has already been established, and the relevant information of the second coordinate system is saved locally or on the server; when the second electronic device is started, or when the first electronic device receives a specific input, The second electronic device calls the relevant information of the second coordinate system locally or from the server.
  • FIG. 11 shows the establishment process of the third coordinate system provided by the embodiment of the present application.
  • the edge profile of the second electronic device includes four sides: a first side A 0 A 1 , a second side A 1 A 2 , a third side A 2 A 3 and a fourth side A 3 A 0 .
  • the first side A 0 A 1 and the third side A 2 A 3 are vertical sides
  • the second side A 1 A 2 and the fourth side A 3 A 0 are horizontal sides.
  • the origin of coordinates O t is an intersection point where the leftmost edge of the display area of the second electronic device meets the bottom edge of the display area of the second electronic device (ie, the lower left corner of the display area of the second electronic device).
  • take the axis including the O t point and parallel to A 3 A 0 as the X t axis, with the positive direction of the X t axis being the direction from point A 0 to point A 3 ; take the axis including the O t point and parallel to A 0 A 1 as the Y t axis, with the positive direction of the Y t axis being the direction from A 0 to A 1 ; the Z t axis is perpendicular to the plane where the X t axis and the Y t axis are located, and the positive direction of the Z t axis is determined according to the rule of the right-handed Cartesian coordinate system.
  • FIG. 11 only schematically introduces the third coordinate system.
  • the third coordinate system can also be defined according to other rules.
  • Ot may be the center of the display area of the second electronic device, or any point of the display area of the second electronic device.
  • the positive directions of the three axes of the third coordinate system are not limited to the positive directions indicated by the X t axis, the Y t axis, and the Z t axis shown in FIG. 11 .
  • when the edge profile of the display area of the second electronic device is the edge profile of the second electronic device, the A 0 point of the second electronic device coincides with the O t point;
  • when the edge profile of the display area is not the edge profile of the second electronic device, for example, when there is a frame outside the display area of the second electronic device, the A 0 point of the second electronic device does not coincide with the O t point.
  • the third coordinate system can be established in advance. For example, when the second electronic device leaves the factory, the third coordinate system has already been established, and the relevant information of the third coordinate system is stored locally or on the server; when the second electronic device is started, or when the second electronic device receives a trigger, the second electronic device invokes the third coordinate system locally or from the server.
  • the second electronic device 300 includes a UWB module, and the second electronic device 300 is a mobile device (such as a smart phone or a remote controller) as an example.
  • the first electronic device 100 and the second electronic device 300 are located in the same room or area. Wherein, the first electronic device 100 has established the first coordinate system, and the second electronic device 300 has established the second coordinate system.
  • the first electronic device 100 and the second electronic device 300 can perform UWB communication, and accordingly the distance between the second electronic device 300 and the first electronic device 100 and the direction of the second electronic device 300 relative to the first electronic device 100 can be acquired, so that the coordinates of the second electronic device 300 in the first coordinate system can be determined.
  • the distance L between the first electronic device and the second electronic device can be acquired in the following manner:
  • the first electronic device 100 may measure the distance L between the first electronic device 100 and the second electronic device 300 by using a two-way ranging method.
  • the two-way ranging method includes Single-sided Two-way Ranging (SS-TWR) and Double-sided Two-way Ranging (DS-TWR).
  • DS-TWR is taken as an example to briefly describe the ranging method.
  • in DS-TWR, time stamps of the messages exchanged between the first electronic device 100 and the second electronic device 300 are recorded, and finally the time of flight is obtained.
  • although the DS-TWR method increases the response time, it reduces the ranging error.
  • double-sided two-way ranging is divided into two methods according to the number of messages sent: the 4-message method and the 3-message method.
  • the second electronic device 300 sends the ranging request message (that is, the first message), and records the sending time T s1 .
  • the first electronic device 100 After receiving the request message, the first electronic device 100 records the receiving time T r1 .
  • the time difference t between T r1 and T s1 is the transmission time of the message between the two devices. It takes time T re1 for the first electronic device 100 to process the request message. Then, the first electronic device 100 sends a response message (that is, the second message), and records the sending time T s2 , and the second electronic device 300 records the receiving time T r2 after receiving the response message.
  • the time difference between T r2 and T s2 is t.
  • the time difference from the generation of the first message to the reception of the second message by the second electronic device 300 is T ro1 . It takes time T re2 for the second electronic device 300 to process the response message.
  • the second electronic device 300 sends the last message (that is, the third message), and records the sending time T s3 .
  • the first electronic device 100 receives the third packet, and records the receiving time T r3 .
  • the time difference between T r3 and T s3 is t.
  • the time difference between the first electronic device 100 sending the second message and receiving the third message is T ro2 . Therefore, the transmission time t of the message between the two devices can be calculated using formula (1), and the distance L between the first electronic device 100 and the second electronic device 300 can be calculated according to formula (2):

    t = (T ro1 × T ro2 − T re1 × T re2 ) / (T ro1 + T ro2 + T re1 + T re2 )    (1)

    L = c × t    (2)
  • c is the transmission rate of UWB signal in the medium.
  • c is generally chosen as the speed of light.
  • the ranging request message can also be sent by the first electronic device 100; correspondingly, the response message can also be sent by the second electronic device 300; correspondingly, the last message can also be sent by the first electronic device 100.
  • the first electronic device 100 or the second electronic device 300 may calculate L according to formula (1) and formula (2).
  • the distance L between the first electronic device 100 and the second electronic device 300 may also be calculated in other ways, and is not limited to the ways listed above.
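The 3-message DS-TWR arithmetic above can be sketched as follows. This is an illustrative sketch using the standard DS-TWR combination of the two round-trip intervals (T ro1, T ro2) and the two reply intervals (T re1, T re2); the function name and the example timestamps are assumptions, not values from the source.

```python
def ds_twr_tof(t_ro1, t_re1, t_ro2, t_re2):
    """One-way flight time t from the four DS-TWR intervals:
    t = (Tro1*Tro2 - Tre1*Tre2) / (Tro1 + Tro2 + Tre1 + Tre2)
    """
    return ((t_ro1 * t_ro2 - t_re1 * t_re2)
            / (t_ro1 + t_ro2 + t_re1 + t_re2))

C = 299_792_458.0  # propagation speed c (speed of light), in m/s

# Hypothetical timestamps with a true flight time of 20 ns (about 6 m).
t_true = 20e-9
reply1, reply2 = 300e-6, 280e-6        # processing delays Tre1, Tre2
round1 = reply1 + 2 * t_true           # Tro1 = Tre1 + 2t
round2 = reply2 + 2 * t_true           # Tro2 = Tre2 + 2t

tof = ds_twr_tof(round1, reply1, round2, reply2)
distance = C * tof
print(round(distance, 3))  # 5.996
```

Note that the two reply delays need not be equal; the combination cancels their first-order contribution, which is why DS-TWR tolerates unsynchronized clocks better than single-sided ranging.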
  • the direction of the second electronic device 300 can be represented by the arrival direction, measured by the first electronic device 100, of the signal emitted by the second electronic device 300.
  • two angles α and β may be used to indicate the direction from which the second electronic device 300 transmits the signal.
  • α is the angle between the component of the UWB signal emitted by the second electronic device on the X e O e Y e plane of the first coordinate system and the negative axis of the X e axis, usually α ∈ [0, π].
  • α can also be understood as follows: assuming that the UWB module of the second electronic device forms a vector to the UWB module of the first electronic device, α is the angle between the component of that vector on the X e O e Y e plane and the negative axis of the X e axis.
  • β is the angle between the UWB signal emitted by the second electronic device and the positive axis of the Z e axis of the first coordinate system, usually β ∈ [0, π].
  • β can also be understood as follows: assuming that the UWB module of the second electronic device forms a vector to the UWB module of the first electronic device, β is the angle between that vector and the positive axis of the Z e axis.
  • the following takes three antennas as an example and combines the two types of antenna distribution in the UWB module of the first electronic device (L-shaped and triangle-shaped) to specifically describe the process of solving α and β.
  • the UWB module of the first electronic device adopts an L-shaped three-antenna structure; the first electronic device can determine the angle α according to the UWB signal received by antenna 1, and determine the angle β according to the UWB signal received by antenna 2.
  • L1 and L2 are far smaller than the distance L between the second electronic device and the first electronic device, so the UWB signal can be regarded as parallel when it reaches the UWB module of the first electronic device;
  • the components on the X e O e Y e plane can also be regarded as parallel.
  • the UWB signal is represented by two parallel solid lines, and the components of the UWB signal on the X e O e Y e plane are represented by two parallel dotted lines;
  • the distance between antenna 0 and antenna 1 is L1, and the distance between antenna 0 and antenna 2 is L2;
  • O e M1 passes through the point O e and is perpendicular to the straight line where M1N1 is located;
  • O e M2 passes through the point O e and is perpendicular to the straight line where M2N2 is located; among them,
  • N1 is the intersection point of the X e axis and the straight line where the component of the UWB signal on the X e O e Y e plane is located;
  • N2 is the intersection point of the straight line where the UWB signal is located and the Z e axis.
  • the angle between the UWB signal and the positive axis of the Z e axis is β;
  • the angle between the component of the UWB signal on the X e O e Y e plane and the positive axis of the X e axis is α.
  • both L1 and L2 are λ/2, where λ is the wavelength of the UWB signal.
  • the phases of the same UWB signal measured by antenna 0, antenna 1 and antenna 2 are φ0, φ1 and φ2 respectively. The phase difference between antenna 1 and antenna 0 is Δφ1 = φ1 − φ0, and the phase difference between antenna 2 and antenna 0 is Δφ2 = φ2 − φ0. Because φ0, φ1 and φ2 have been measured, Δφ1 and Δφ2 can be calculated. Since the path difference λ/2 corresponds to the phase difference π, α and β can be calculated according to formula (3) and formula (4) by combining the cosine formula, d1, d2, L1 and L2.
  • the UWB module of the first electronic device adopts a triangular three-antenna structure. Similar to the introduction to (e) in Figure 12, the UWB signal can be regarded as parallel when it reaches the UWB module of the first electronic device; similarly, the components of the UWB signal on the X e O e Y e plane can also be regarded as parallel.
  • the UWB signal is represented by two parallel solid lines, and the components of the UWB signal on the X e O e Y e plane are represented by two parallel dotted lines; the distance between antenna 0 and antenna 1 is L1, and the distance between the projections of antenna 0 and antenna 2 on the Z e axis is L2; point N0 is the center point between antenna 1 and antenna 0 on the X e axis.
  • N1 is the intersection point of the X e axis and the straight line where the component of the UWB signal on the X e O e Y e plane is located;
  • N2 is the intersection of the straight line where the UWB signal is located and the Z e axis.
  • the angle between the UWB signal and the positive axis of the Z e axis is β;
  • the angle between the component of the UWB signal on the X e O e Y e plane and the negative axis of the X e axis is α.
  • both L1 and L2 are λ/2, where λ is the wavelength of the UWB signal.
  • the calculation formula of α is the same as formula (3), and will not be repeated here.
  • first calculate the difference between the phase at which the UWB signal emitted by the second electronic device reaches N0 and the phase at which it reaches antenna 2; β can then be calculated from this phase difference.
  • the first electronic device can use formula (7) to calculate the coordinates (x e , y e , z e ) of the second electronic device in the first coordinate system established by the first electronic device. Based on the definitions of L, α and β above, one consistent form is:

    x e = −L × sinβ × cosα, y e = L × sinβ × sinα, z e = L × cosβ    (7)
  • the distance L between the first electronic device 100 and the second electronic device 300 can be obtained through the communication interaction shown in (c) of FIG. 12.
  • the first electronic device 100 can determine the arrival direction of the UWB signal according to the received UWB signal. Therefore, the first electronic device 100 can obtain the direction and distance of the second electronic device 300 relative to the first electronic device 100, so as to obtain the coordinates of the second electronic device 300 in the first coordinate system.
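The distance-plus-two-angles description above amounts to a spherical-to-Cartesian conversion. The sketch below is mine, not the patent's formula (7): it assumes β is measured from the positive Z e axis and α from the negative X e axis within the X e O e Y e plane, which is one plausible reading of the angle definitions; the function name is invented.

```python
import math

def uwb_to_cartesian(L, alpha, beta):
    """Convert distance L and angles (alpha, beta) to coordinates in the
    first coordinate system.  beta is taken from the positive Z_e axis;
    alpha from the negative X_e axis within the X_eO_eY_e plane (the
    sign conventions here are assumptions)."""
    horizontal = L * math.sin(beta)       # component in the X_eO_eY_e plane
    x = -horizontal * math.cos(alpha)     # alpha measured from the negative X_e axis
    y = horizontal * math.sin(alpha)
    z = L * math.cos(beta)
    return x, y, z

# A device 4 m away, on the negative X_e axis (alpha = 0, beta = 90 degrees):
x, y, z = uwb_to_cartesian(4.0, 0.0, math.pi / 2)
print(x, y, z)
```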
  • the coordinates of the second electronic device in the first coordinate system can be obtained in real time or periodically.
  • some second electronic devices do not contain a UWB module, for example, a smart speaker not containing a UWB module or a smart air conditioner not containing a UWB module; in this case, a device containing a UWB module (such as a smart phone) can be used to mark them, and there may be two marking methods.
  • Marking method 1: as shown in (a) of Figure 13, the smart phone moves to the smart speaker, the coordinates of the smart phone in the first coordinate system are obtained through the communication of the UWB signal between the smart phone and the first electronic device, and those coordinates are marked as the coordinates of the smart speaker in the first coordinate system.
  • Marking method 2: as shown in (b) of Figure 13, first use the smart phone to point to the smart air conditioner at position 1, so that the Y b axis of the second coordinate system faces the first point on the smart air conditioner (such as the switch button).
  • the coordinate 1 of the position 1 in the first coordinate system is acquired through the communication of the UWB signal between the smart phone and the first electronic device.
  • the smart phone also includes an IMU module, and the attitude angle 1 of the smart phone at position 1 is determined through the IMU module. According to the coordinate 1 and the attitude angle 1, the straight line 1 established by the smart phone pointing at the first point of the smart air conditioner at the position 1 can be determined.
  • the coordinate 2 of the position 2 in the first coordinate system is obtained through the communication of the UWB signal between the smart phone and the first electronic device.
  • the straight line 2 established by the smart phone pointing at the second point of the smart air conditioner at the position 2 can be determined. Calculate the coordinates of the intersection of straight line 1 and straight line 2, that is, the coordinates of the smart air conditioner in the first coordinate system.
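The intersection of line 1 and line 2 can be computed as follows. In practice the two pointing lines rarely intersect exactly because of measurement noise, so this sketch (an assumption of mine, not stated in the source) returns the midpoint of the shortest segment between the two lines, which equals the intersection when one exists; all names are invented.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def closest_point_between_lines(p1, d1, p2, d2):
    """Each line is given by a point p and a direction d.  Returns the
    midpoint of the shortest segment between the two lines; for lines
    that truly intersect this is the intersection point."""
    w0 = tuple(x - y for x, y in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero when the lines are parallel
    s = (b * e - c * d) / denom
    u = (a * e - b * d) / denom
    q1 = tuple(p + s * v for p, v in zip(p1, d1))
    q2 = tuple(p + u * v for p, v in zip(p2, d2))
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Two pointing lines that meet at (1, 0, 0):
point = closest_point_between_lines((0, 0, 0), (1, 0, 0),
                                    (1, -1, 0), (0, 1, 0))
print(point)  # (1.0, 0.0, 0.0)
```

Here the line directions would come from the attitude angles measured by the IMU at positions 1 and 2, and the line base points from the UWB coordinates of those positions.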
  • a smart speaker or a smart air conditioner is only a schematic example.
  • a smart speaker is used to represent a second electronic device that is easily touched by a mobile device such as a smart phone held by the user;
  • a smart air conditioner is used to represent a second electronic device that is not easily touched by a mobile device such as a smart phone held by the user.
  • the smart phone that includes the UWB module can be used to mark the smart TV multiple times.
  • the smart phone moves to point A0 of the smart TV to mark the position, and the first electronic device obtains the coordinates of the lower left corner contour point of the smart TV.
  • the first electronic device can obtain the coordinates of multiple contour points (for example, the three contour points of the lower left corner, the upper left corner, and the lower right corner, etc.) of the smart TV.
  • if A 0 , A 1 , A 2 and A 3 are marked, the first electronic device can obtain the coordinates of the four corner contour points. If A 0 , A 1 and A 2 are marked, the first electronic device can obtain the coordinates of the three corner contour points. If A 0 and A 2 are marked, the first electronic device can obtain the coordinates of the two corner contour points.
  • the present application does not limit which corner contour points are selected among the four corner contour points, as long as the contour range of the smart TV can be finally acquired.
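When only three corners are marked, the fourth follows from completing the parallelogram, assuming the display area is a planar rectangle and the corners are taken in order (A0 lower-left, A1 upper-left, A2 upper-right); that assumption and the function name are mine, not the source's.

```python
def fourth_corner(a0, a1, a2):
    """Given three marked corners of a rectangular display taken in
    order (A0 lower-left, A1 upper-left, A2 upper-right), the missing
    corner is A3 = A0 + (A2 - A1), component-wise."""
    return tuple(p0 + p2 - p1 for p0, p1, p2 in zip(a0, a1, a2))

# Hypothetical 1.2 m x 0.7 m display standing upright in the X-Z plane:
a0, a1, a2 = (0.0, 0.0, 0.0), (0.0, 0.0, 0.7), (1.2, 0.0, 0.7)
print(fourth_corner(a0, a1, a2))  # (1.2, 0.0, 0.0)
```

With all four corners known, the contour range of the display area in the first coordinate system follows directly.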
  • the above outline range may refer to the outline range of the display area of the smart TV.
  • the above display area may include or not include the frame of the display screen of the smart TV.
  • the smart phone can move to three or more different positions in the display area of the smart TV; when it moves to a position in the display area, based on the user's input, the coordinates of the smart phone at this time are marked as the coordinates of that position in the display area of the smart TV. In this way, the coordinates of three or more different positions in the display area of the smart TV can be marked.
  • in the process of marking three or more different positions of the display area of the smart TV, the pointing, orientation, etc. of the smart phone are not limited.
  • the three or more positions of the display area of the smart TV may be three or more positions on the edge of the display area (for example, the 1/2 position, the 1/3 position, etc. of an edge), or three or more positions in the center of the display area of the smart TV.
  • the second electronic device containing the UWB module can not only mark a second electronic device not containing a UWB module, but can also mark a space area, for example, mark the extent of a three-dimensional space region. The following takes a smart phone containing a UWB module as an example.
  • the smart phone is respectively placed at four positions A, B, C, and D.
  • the first electronic device 100 respectively acquires the coordinates of the four positions A, B, C, and D in the first coordinate system. The plumb line passing through position A, the plumb line passing through position B, the plumb line passing through position C and the plumb line passing through position D enclose a three-dimensional area.
  • z e may be a preset value, or the height of the room or the area.
  • the smart phone is respectively placed at the eight vertex positions of the three-dimensional space area.
  • the first electronic device 100 respectively obtains the coordinates of the eight vertex positions in the first coordinate system, so that the coordinate range of the three-dimensional space area can be obtained.
  • the three-dimensional space area is a room, that is, the coordinate range of the room in the first coordinate system is acquired.
  • the above is only an example of the position of the vertex, and the actual area can be determined according to the position where the smart phone is placed.
  • the smart phone may not be placed at the vertex, so that the determined area is an area smaller than the entire area of the room.
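The marked positions can be turned into a coordinate range for the region, for example as an axis-aligned box; the axis-aligned simplification, the optional room-height parameter, and the function names are all assumptions for illustration.

```python
def region_from_marks(marks, height=None):
    """Axis-aligned region spanned by marked positions.  If `height` is
    given (e.g. the room height, matching the plumb-line construction),
    the vertical extent runs from the lowest mark up by `height`;
    otherwise it spans the z-range of the marks themselves."""
    xs = [m[0] for m in marks]
    ys = [m[1] for m in marks]
    zs = [m[2] for m in marks]
    z_min = min(zs)
    z_max = z_min + height if height is not None else max(zs)
    return (min(xs), max(xs)), (min(ys), max(ys)), (z_min, z_max)

def contains(region, point):
    """True if the point lies inside the region on every axis."""
    return all(lo <= p <= hi for (lo, hi), p in zip(region, point))

# Four floor marks plus an assumed 2.6 m room height:
marks = [(0, 0, 0), (4, 0, 0), (4, 3, 0), (0, 3, 0)]
region = region_from_marks(marks, height=2.6)
print(contains(region, (2, 1, 1.5)))  # True
```

Such a `contains` test is what later lets the system decide whether a detected user or device falls inside the marked room or area.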
  • the conversion of coordinates in different coordinate systems can be performed in the form of vectors.
  • the distance between two points is the same in different coordinate systems, but the direction representation of the vector formed by the two points may be different in different coordinate systems.
  • for example, consider the vector from the point O e to the point O b . The length of this vector (L in both cases) is the same in the first coordinate system and the second coordinate system, but the direction of the vector represented in the first coordinate system and the direction represented in the second coordinate system are different.
  • by obtaining the relative direction change between the first coordinate system and the second coordinate system, and knowing the direction of the vector represented in the first coordinate system, the direction of the vector represented in the second coordinate system can be obtained; combined with the coordinates of the O e point and the O b point in the first coordinate system, and the coordinates of the O b point in the second coordinate system, the coordinates of the O e point in the second coordinate system can be obtained.
  • the relative direction change between different coordinate systems can be expressed by the pitch angle (pitch) θ, the azimuth angle (yaw) ψ and the roll angle (roll) φ between the coordinate systems.
  • the azimuth angle may also be called a yaw angle or a heading angle.
  • the coordinate origin O e of the UWB base station is moved in parallel to the coordinate origin O b of the second coordinate system.
  • the first coordinate system also moves accordingly. Definitions of the pitch angle, azimuth angle and roll angle are well known to those skilled in the art, and will not be repeated here.
  • Fig. 15 shows the pitch angle θ, the azimuth angle ψ and the roll angle γ of the second coordinate system relative to the first coordinate system.
  • the coordinate origin O b of the second coordinate system coincides with the coordinate origin O e of the first coordinate system after the parallel movement
  • the three axes of the second coordinate system are the X b axis, the Y b axis and the Z b axis
  • the three axes of the first coordinate system are the X e axis, the Y e axis and the Z e axis.
  • O e Y b ' (that is, O b Y b ') is the projection of the Y b axis on the X e O e Y e plane of the first coordinate system.
  • O e Z b ' (that is, O b Z b ') is the projection of the Z b axis on the Y b O e Z e plane.
  • the pitch angle θ of the second coordinate system relative to the first coordinate system is the angle between the Y b axis of the second coordinate system and the X e O e Y e plane of the first coordinate system, that is, the angle between O b Y b ' and the Y b axis.
  • when the component of O b Y b on the Z e axis is located on the positive half of the Z e axis, θ is positive; when the component of O b Y b on the Z e axis is located on the negative half of the Z e axis, θ is negative.
  • the azimuth angle ψ of the second coordinate system relative to the first coordinate system is the angle between the projection of the Y b axis of the second coordinate system on the X e O e Y e plane of the first coordinate system and the Y e axis of the first coordinate system, that is, the angle between O b Y b ' and the Y e axis.
  • when the component of O b Y b ' on the X e axis is located on the positive half of the X e axis, ψ is positive; when the component of O b Y b ' on the X e axis is located on the negative half of the X e axis, ψ is negative.
  • the roll angle γ of the second coordinate system relative to the first coordinate system is the angle between the Z b axis of the second coordinate system and the Y b O e Z e plane, that is, the angle between O b Z b ' and the Z b axis.
  • when the component on the X b axis of the projection of the positive half of the Z b axis on the Y b O e Z e plane is located on the positive half of the X b axis, γ is positive; when that component is located on the negative half of the X b axis, γ is negative.
  • equivalently, when the projection of O b Z b ' on the X b O b Y b plane has its component on the X b axis on the positive half of the X b axis, γ is positive; when that component is on the negative half of the X b axis, γ is negative.
  • Fig. 16 shows the pitch angle θ, the azimuth angle ψ and the roll angle γ of the third coordinate system relative to the first coordinate system.
  • the coordinate origin O t of the third coordinate system coincides with the coordinate origin O e of the first coordinate system after the parallel movement
  • the three axes of the third coordinate system are the X t axis, the Y t axis and the Z t axis
  • the three axes of the first coordinate system are X e axis, Y e axis and Z e axis.
  • O e Y t ' (that is, O t Y t ') is the projection of the Y t axis on the X e O e Y e plane of the first coordinate system.
  • O e Z t ' (that is, O t Z t ') is the projection of the Z t axis on the Y t O e Z e plane.
  • the pitch angle θ of the third coordinate system relative to the first coordinate system is the angle between the Y t axis of the third coordinate system and the X e O e Y e plane of the first coordinate system, that is, the angle between O e Y t ' (that is, O t Y t ') and the Y t axis.
  • the azimuth angle ψ of the third coordinate system relative to the first coordinate system is the angle between the projection of the Y t axis of the third coordinate system on the X e O e Y e plane of the first coordinate system and the Y e axis of the first coordinate system, that is, the angle between O e Y t ' (that is, O t Y t ') and the Y e axis.
  • the roll angle γ of the third coordinate system relative to the first coordinate system is the angle between the Z t axis of the third coordinate system and the Y t O e Z e plane, that is, the angle between O t Z t ' and the Z t axis.
  • when the component on the X t axis of the projection of the positive half of the Z t axis on the Y t O e Z e plane is located on the positive half of the X t axis, γ is positive; when that component is located on the negative half of the X t axis, γ is negative.
  • equivalently, when the projection of O t Z t ' on the X t O t Y t plane has its component on the X t axis on the positive half of the X t axis, γ is positive; when that component is on the negative half of the X t axis, γ is negative.
  • the direction change of the third coordinate system relative to the first coordinate system can be expressed by an attitude matrix.
  • the attitude matrix given in formula (8) above is prior art, and those skilled in the art can obtain it from the prior art; see, for example, the pose matrix in section 1.2.1 of the book "Inertial Navigation" (edited by Qin Yongyuan, Beijing: Science Press, first edition May 2006, first printing May 2006, ISBN 7-03-016428-8).
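Formula (8) itself is not reproduced in this text. As an illustrative sketch only, the following builds a direction-cosine (attitude) matrix from pitch, azimuth and roll using one common rotation sequence (azimuth about Z, then pitch about X, then roll about Y) found in strapdown inertial navigation texts; the actual convention of formula (8) may differ:

```python
import numpy as np

def attitude_matrix(pitch, yaw, roll):
    """Attitude (direction-cosine) matrix for a Z-X-Y rotation sequence.

    Illustrative convention only: rotate by azimuth (yaw) about Z, then
    pitch about X, then roll about Y. The exact sequence used by
    formula (8) in the embodiment may differ.
    """
    st, ct = np.sin(pitch), np.cos(pitch)
    sp, cp = np.sin(yaw), np.cos(yaw)
    sg, cg = np.sin(roll), np.cos(roll)
    Rz = np.array([[cp, sp, 0], [-sp, cp, 0], [0, 0, 1]])  # azimuth about Z
    Rx = np.array([[1, 0, 0], [0, ct, st], [0, -st, ct]])  # pitch about X
    Ry = np.array([[cg, 0, -sg], [0, 1, 0], [sg, 0, cg]])  # roll about Y
    return Ry @ Rx @ Rz

C = attitude_matrix(0.1, 0.2, 0.3)
```

Because C is a rotation matrix, it preserves vector lengths, consistent with the earlier observation that a vector's length L is the same in both coordinate systems while its direction representation changes.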
  • the second electronic device may include an IMU module.
  • the IMU module of the second electronic device is calibrated first; that is, the coordinate system on which the pitch angle, azimuth angle and roll angle output by the IMU module of the second electronic device are based is calibrated to the first coordinate system. In this way, as the second electronic device moves, the pitch angle, azimuth angle and roll angle output by its IMU module are the pitch angle, azimuth angle and roll angle of the second coordinate system relative to the first coordinate system; or, after transposition, the output of the IMU module of the second electronic device reflects the direction change of the second coordinate system relative to the first coordinate system.
  • during calibration, the second coordinate system of the second electronic device may be made parallel to the first coordinate system (for example, the X b axis parallel to the X e axis, the Y b axis parallel to the Y e axis, and the Z b axis parallel to the Z e axis), with the corresponding coordinate axes of the two coordinate systems having the same positive directions (for example, the positive direction of the X b axis the same as that of the X e axis, the positive direction of the Y b axis the same as that of the Y e axis, and the positive direction of the Z b axis the same as that of the Z e axis).
  • the pitch angle, azimuth angle and roll angle output by the IMU module of the second electronic device at this time are all set to 0.
  • the second coordinate system of the second electronic device can be parallel to the first coordinate system, and the positive directions of each axis of the two coordinate systems are the same.
  • the millimeter wave radar module 160 of the first electronic device 100 is used to implement the millimeter wave radar function.
  • the multiple antennas in the millimeter-wave radar have distance differences in the lateral direction (for example, horizontal direction) and/or longitudinal direction (for example, vertical direction), and the distance difference between the antennas can be used to establish the coordinate system of the millimeter-wave radar ( fourth coordinate system).
  • the millimeter wave radar module 160 includes three transmitting antennas and four receiving antennas. Exemplarily, as shown in FIG. 17, the three transmitting antennas and the four receiving antennas are located on the same longitudinal plane (for example, a vertical plane). The three transmitting antennas present a triangular distribution on the longitudinal plane, with the transmitting antenna 0 and the transmitting antenna 2 located on the same horizontal plane; the four receiving antennas are located on the same lateral line (for example, a horizontal line).
  • a point on the receiving antenna 0 (for example, an end point on one side) is taken as the origin O m of the fourth coordinate system; the line connecting the receiving antenna 0 and the receiving antenna 1 is taken as the X m axis of the fourth coordinate system, with the direction from the receiving antenna 1 to the receiving antenna 0 as the positive direction of the X m axis; the line passing through the origin O m and perpendicular to the X m axis is the Z m axis of the fourth coordinate system, with the direction pointing to the zenith as the positive direction of the Z m axis; combined with the rule of the right-hand rectangular coordinate system, the Y m axis and the positive direction of the Y m axis are determined.
  • prompt information may be marked on the outer surface of the first electronic device 100 to indicate the correct installation or placement method, so that the three transmitting antennas and the four receiving antennas of the millimeter-wave radar module 160 in the first electronic device 100 are located on the same longitudinal plane.
  • the names of the three axes in the fourth coordinate system and the positive direction of the three axes can also adopt other definitions, which will not be repeated here.
  • the embodiment of the present application is introduced by taking the X m axis, the Y m axis, and the Z m axis in the fourth coordinate system shown in FIG. 17 as an example.
  • the point on the receiving antenna 0 is used as the origin of the fourth coordinate system above, which is only exemplary.
  • a point on other antennas may also be the origin of the fourth coordinate system.
  • the fourth coordinate system may be established in advance; the installer only needs to install the first electronic device 100 as required. For example, the fourth coordinate system may be established before the first electronic device 100 leaves the factory, and the related information of the fourth coordinate system is stored locally or on a server. When the first electronic device 100 is started, or when the first electronic device 100 receives a specific trigger, the first electronic device 100 retrieves the related information of the fourth coordinate system from local storage or from the server.
  • the server in this application may be the home central device 200 or a cloud server.
  • the outer surface of the first electronic device 100 may have only one piece of marked prompt information, which indicates how to install the first electronic device so that the transmitting antennas and receiving antennas of the millimeter wave radar module and the antenna of the UWB module all meet the preset requirements.
  • the transmitting antenna of the millimeter wave radar module 160 transmits a signal, and the signal is received by the receiving antenna of the millimeter wave radar module 160 after being reflected by a reflection point.
  • the frequency of the LFMCW millimeter-wave radar transmission signal increases linearly with time, and this type of signal is called a chirp signal.
  • the millimeter-wave radar module 160 receives the Chirp signal through the receiving antenna, and the received signal and the local oscillator signal are mixed by a mixer to output a difference frequency signal; the difference frequency signal is then converted into a digital difference frequency signal through analog-to-digital conversion.
  • FIG. 18 shows a schematic diagram of the principle of determining the distance and radial velocity of the reflection point by the millimeter-wave radar provided in the embodiment of the present application.
  • the solid line is the transmission signal of the millimeter wave radar module 160
  • the dotted line is the reception signal of the millimeter wave radar module 160 .
  • a frequency sweep period Tc of the Chirp signal is usually on the order of microseconds (us), and the modulation slope S 0 (that is, the frequency change rate) reaches an order of magnitude of 10^12 Hz/s.
  • the signal within one frequency sweep period Tc is referred to as one Chirp signal. It is generally considered that the spatial position of the target does not change within one frequency sweep period Tc.
  • the transmitting antenna transmits a Chirp signal.
  • the receiving antenna receives the signal reflected from the reflection point. The received signal lags the transmitted signal by a time delay τ = 2d/c, so the frequency difference between the received signal and the transmitted signal is τ*S 0, where d is the distance between the reflection point and the millimeter-wave radar module (which can also be regarded as the first electronic device), and c is the transmission rate of the Chirp signal in the air, generally taken as the speed of light. Therefore, the relationship between the distance d of the reflection point and the frequency f 0 of the beat frequency signal is shown in formula (9).
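Formula (9) is not reproduced in this text, but from the relations above (beat frequency f 0 = τ·S 0 with τ = 2d/c), the distance follows as d = c·f 0/(2·S 0). A numeric sketch with illustrative values:

```python
C = 3e8    # propagation speed c, taken as the speed of light (m/s)
S0 = 1e12  # modulation slope, Hz/s (order of magnitude from the text)

def beat_frequency(d):
    """f0 = tau * S0, with round-trip delay tau = 2d/c."""
    return S0 * (2 * d / C)

def distance(f0):
    """Invert the relation of formula (9): d = c * f0 / (2 * S0)."""
    return C * f0 / (2 * S0)

f0 = beat_frequency(4.0)  # a reflection point 4 m away
```

With these values the round trip is consistent: a 4 m target produces a beat frequency whose inversion returns 4 m.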
  • the time domain signal can be converted into a frequency domain signal, and the sine wave in the time domain correspondingly generates a peak value in the frequency domain, and the peak value corresponds to the frequency f 0 of the difference frequency signal.
  • the transmission signal of the millimeter-wave radar module is reflected back as three signals by three reflection points.
  • the millimeter-wave radar module receives 3 received signals, and obtains 3 corresponding difference frequency signals respectively.
  • performing a fast Fourier transform (FFT) on the three difference frequency signals yields a range curve (referred to as a range FFT).
  • each peak indicates that there is a reflection point at the corresponding position.
  • the frequency of the beat frequency signal can be obtained by calculating the frequency corresponding to the peak value.
  • the distance of the reflection point can be obtained by detecting the frequency of the beat frequency signal.
  • the radial velocity of the reflection point is obtained through a further FFT across sweep periods, referred to as a Doppler FFT.
  • the millimeter-wave radar module receives the Chirp signal, mixes the transmitted signal and the received signal to obtain a difference frequency signal after frequency mixing, power amplification and filtering, and the difference frequency signal is converted into a digital difference frequency signal through analog-to-digital conversion.
  • the distance and radial velocity of the reflection point can be obtained by detecting the digital difference frequency signal.
  • one frame of data of the millimeter-wave radar module is the data within one radar scanning period; one radar scanning period includes M frequency sweep periods Tc, and there are N sampling points of the beat frequency signal within each frequency sweep period Tc.
  • the frequency of the beat frequency signal can be obtained by performing a one-dimensional range FFT on the digital beat frequency signal within a frequency sweep period Tc. This allows the distance to the reflection point to be calculated based on the beat signal frequency.
  • the number of points of the range FFT is the number of sampling points N of the difference frequency signal corresponding to the Chirp signal.
  • the phase difference of multiple digital difference frequency signals can be obtained by performing one-dimensional doppler FFT on the digital difference frequency signals of the same reflection point in multiple adjacent frequency sweep periods Tc. In this way, the radial velocity of the reflection point can be calculated according to the phase difference of multiple beat frequency signals.
  • the number of doppler FFT points is the number of sweep cycles included in one frame of data.
  • the joint operation of range FFT and doppler FFT can be considered as a two-dimensional FFT of one frame of data.
  • one frame of data processed by the two-dimensional FFT is referred to as one frame of two-dimensional FFT data.
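The joint range FFT / Doppler FFT described above can be sketched on a synthetic frame (a toy illustration; M, N and the target's bins are invented for the example). The peak of the resulting range-Doppler map lands at the target's Doppler bin and range bin:

```python
import numpy as np

M, N = 64, 256               # sweep periods per frame, samples per sweep
range_bin, dopp_bin = 30, 7  # synthetic target location, in FFT bins

n = np.arange(N)             # fast-time sample index within one sweep
m = np.arange(M)[:, None]    # slow-time sweep index within the frame
# digital difference-frequency signal: a fast-time tone (distance) whose
# phase advances from sweep to sweep (radial velocity)
frame = np.exp(2j * np.pi * (range_bin * n / N + dopp_bin * m / M))

rd_map = np.fft.fft2(frame)  # 2D FFT = Doppler FFT over sweeps + range FFT over samples
peak = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
```

Here `peak` recovers `(dopp_bin, range_bin)`, mirroring how each peak in a frame of two-dimensional FFT data corresponds to one reflection point's distance and radial velocity.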
  • (e) of FIG. 18 is a schematic diagram of a frame of two-dimensional FFT data acquired by the millimeter wave radar module. As shown in (e) of FIG. 18, there are multiple peaks in a frame of two-dimensional FFT data, and each peak represents a reflection point at the corresponding location. The value of a reflection point in the distance dimension or velocity dimension is the distance or the radial velocity of that reflection point.
  • the signal direction of the reflected signal includes a lateral direction (for example, the horizontal direction) and a longitudinal direction (for example, the vertical direction).
  • the azimuth angle can be used to indicate the horizontal direction of the signal
  • the elevation angle can be used to indicate the vertical direction of the signal.
  • the azimuth angle and the elevation angle can be calculated through the phase difference between the received signals of the multiple receiving antennas of the millimeter wave radar module.
  • FIG. 19 shows a schematic diagram of the principle of the millimeter wave radar provided in the embodiment of the present application for determining the signal direction of the reflected signal at the reflection point.
  • the millimeter wave radar module includes four receiving antennas. After the signal transmitted by the same transmitting antenna is reflected by the reflection point, the phase difference between the reflected signals arriving at any two different receiving antennas can be used by the millimeter-wave radar module to measure the azimuth of the reflected signal.
  • the millimeter-wave radar module determines the lateral direction of the reflected signal according to the phase difference between the signals arriving at two adjacent receiving antennas; reference may be made to the calculation method of the angle in (e) of FIG. 12, which will not be repeated here.
  • the accuracy of measuring the incoming direction of signals may be improved by increasing the number of antennas.
  • the antenna of the millimeter wave radar module adopts the distribution structure shown in (a) of FIG. 8 .
  • when the millimeter-wave radar module transmits signals, it can switch the transmitting antenna through a relay switch, so that the receiving antennas can separate the signals from different transmitting antennas.
  • the distance between the transmitting antenna 0 and the transmitting antenna 2 is 2λ L, where λ L is the wavelength of the millimeter wave.
  • the signal transmitted by the transmitting antenna 2 and reaching the receiving antenna 0 can be equivalent to the receiving signal of the receiving antenna 4; the signal transmitted by the transmitting antenna 2 reaching the receiving antenna 1 can be equivalent to the receiving signal of the receiving antenna 5;
  • the signal transmitted by the transmitting antenna 2 reaching the receiving antenna 2 can be equivalent to the receiving signal of the receiving antenna 6;
  • the signal transmitted by the transmitting antenna 2 reaching the receiving antenna 3 can be equivalent to the receiving signal of the receiving antenna 7 .
  • the receiving antenna 4 , the receiving antenna 5 , the receiving antenna 6 and the receiving antenna 7 in the one-transmit-eight-receive schematic diagram in (b) of FIG. 19 are equivalent virtual receiving antennas.
  • the antenna of the millimeter wave radar module adopts the structure shown in (a) of FIG. 8 .
  • the signals transmitted by the transmitting antenna 0 and received by the receiving antenna 2 and the receiving antenna 3, the signals transmitted by the transmitting antenna 2 and received by the receiving antenna 0 and the receiving antenna 1, and the signals transmitted by the transmitting antenna 1 and received by the receiving antenna 0, the receiving antenna 1, the receiving antenna 2 and the receiving antenna 3 can be combined; by comparing signals having a phase difference in the longitudinal dimension, the longitudinal direction of the reflected signal can be calculated (for example, expressed through the pitch angle).
  • for example, the signal transmitted by the transmitting antenna 0 and received by the receiving antenna 2 can be compared with the signal transmitted by the transmitting antenna 1 and received by the receiving antenna 0 to obtain the phase difference between the two, and the pitch angle is calculated based on that phase difference.
  • for the specific steps of calculating the pitch angle according to the phase difference of the received signals, refer to the calculation method of the angle in (f) of FIG. 12; no more details are given here.
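The geometry of FIG. 12 is not reproduced here, but the generic phase-difference relation such angle calculations rely on can be sketched: for two antennas spaced d apart, a plane wave arriving at angle θ travels an extra path d·sin θ to one antenna, giving a phase difference Δφ = 2π·d·sin θ/λ (antenna spacing and wavelength below are illustrative assumptions):

```python
import numpy as np

LAMBDA = 3.9e-3  # wavelength of a ~77 GHz millimeter wave, meters (assumed)
D = LAMBDA / 2   # assumed spacing between adjacent receiving antennas

def phase_diff_from_angle(theta):
    """delta_phi = 2*pi*d*sin(theta)/lambda for arrival angle theta (rad)."""
    return 2 * np.pi * D * np.sin(theta) / LAMBDA

def angle_from_phase_diff(delta_phi):
    """Invert: theta = arcsin(lambda * delta_phi / (2*pi*d))."""
    return np.arcsin(LAMBDA * delta_phi / (2 * np.pi * D))

theta = np.deg2rad(20.0)
recovered = angle_from_phase_diff(phase_diff_from_angle(theta))
```

The half-wavelength spacing keeps Δφ within (−π, π) for all arrival angles, so the inversion is unambiguous.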
  • the first electronic device may use formula (7) to calculate the coordinates of the reflection point in the fourth coordinate system established by the first electronic device, according to the distance between the reflection point and the millimeter-wave radar and the signal direction (azimuth angle and elevation angle) of the reflected signal.
  • the millimeter-wave radar will detect a human body as multiple reflection points within the detectable range.
  • the point cloud data of the reflection points can be clustered, that is, multiple detected reflection points are aggregated into one class, and the cluster is determined as an object or a human body.
  • FIG. 20 is a schematic diagram of the effect of clustering processing on point cloud data.
  • each point in (a) of Figure 20 represents a reflection point detected by the millimeter-wave radar module, the three closed curves represent the clustered classes, and the points outside the three closed curves represent reflection points that have not been clustered into any class.
  • the millimeter-wave radar module uses a clustering algorithm to cluster multiple reflection points into an object or human body (user), and the coordinates of the object or human body (user) in the fourth coordinate system can be calculated from the coordinates of the clustered reflection points.
  • the coordinates of the object or human body (user) in the fourth coordinate system may be the coordinates of the center of gravity of the object or human body in the fourth coordinate system.
  • the smaller points in (b) of Figure 20 represent the reflection points detected by the millimeter-wave radar, and the largest point is the coordinate point of the human body (user) in the fourth coordinate system.
  • the coordinates of the human body (user) in the fourth coordinate system are marked as
  • the height of the object or the height of the human body (user) may also be calculated according to the height H of the first electronic device from the ground and the coordinates of the object or human body (user) in the fourth coordinate system.
  • formula (10) can be used to calculate the height h m of a human body (user).
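Formula (10) is not reproduced in this text. As a heavily hedged sketch of how such a height calculation could work: if the radar is mounted at height H above the ground and the Z m axis points toward the zenith, a reflection point's vertical coordinate relative to the radar directly gives its height above the ground. The actual formula (10) in the embodiment may differ:

```python
def user_height(H, z_top):
    """Hedged sketch of a height calculation in the spirit of formula (10).

    H: mounting height of the first electronic device above the ground (m).
    z_top: Z_m coordinate of the highest clustered reflection point
           relative to the radar (negative when below the radar).
    Assumption only; not necessarily the embodiment's formula (10).
    """
    return H + z_top

# radar mounted at 2.5 m; head reflection point 0.75 m below the radar
h_m = user_height(2.5, -0.75)  # -> 1.75 m
```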
  • FIG. 21 shows a flow chart of a method for the first electronic device to determine the user's coordinates in the fourth coordinate system provided by the embodiment of the present application.
  • the method may include:
  • the millimeter wave radar module receives reflected signals.
  • the millimeter wave radar module performs two-dimensional fast Fourier transform on the digital difference frequency signal.
  • the receiving antenna of the millimeter-wave radar module receives the reflected signal, obtains the digital difference frequency signal according to the reflected signal, performs two-dimensional fast Fourier transform on the digital difference frequency signal, and obtains two-dimensional FFT data.
  • the millimeter-wave radar module uses a target detection algorithm to obtain the distance between the reflection point and the millimeter-wave radar and the radial velocity.
  • the millimeter-wave radar can use the target detection algorithm to detect a frame of two-dimensional FFT data to obtain the distance and radial velocity of the target.
  • the signal received by the millimeter-wave radar includes target reflection signals, background noise, and clutter interference.
  • in the test environment for the frame of two-dimensional FFT data shown in (e) of Figure 18, there are moving human bodies at distances of 1 m, 2 m and 4 m from the millimeter-wave radar. As can be seen from (e) of Figure 18, in addition to the three peaks at 1 m, 2 m and 4 m from the millimeter-wave radar, there are other large peaks caused by reflected signals (background noise, clutter interference, etc.). If a reflection signal caused by background noise or clutter interference is detected as a reflection point, a false alarm is generated.
  • a constant false-alarm rate (CFAR) target detection algorithm can be used to obtain the distance and radial velocity of the reflection point, so as to maintain a constant false alarm rate and improve the target detection accuracy.
  • the millimeter-wave radar module can use a target detection algorithm in the prior art as needed to obtain the distance and radial velocity of the reflection point according to the two-dimensional FFT data; this is not limited in the embodiment of the present application. The specific implementation of the target detection algorithm can be obtained from the prior art, and will not be repeated here.
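A minimal cell-averaging CFAR over a one-dimensional power profile can be sketched as follows (the `guard`, `train` and `scale` parameters are illustrative; the embodiment does not fix a particular CFAR variant):

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=5.0):
    """Cell-averaging CFAR: a cell is declared a detection if its power
    exceeds scale * (mean of the training cells around it), where a few
    guard cells on each side are excluded from the noise estimate."""
    n = len(power)
    hits = []
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # training window = neighbors minus the guard cells and the cell itself
        window = np.r_[power[lo:max(0, i - guard)], power[min(n, i + guard + 1):hi]]
        if window.size and power[i] > scale * window.mean():
            hits.append(i)
    return hits

profile = np.ones(100)  # flat noise floor
profile[40] = 50.0      # one strong reflection point
detections = ca_cfar(profile)  # -> [40]
```

Because the threshold adapts to the local noise estimate, the false-alarm rate stays roughly constant even when the noise floor varies, which is the point of CFAR detection mentioned above.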
  • the millimeter wave radar module determines the signal direction of the reflected signal.
  • algorithms such as the phase difference method, the sum-difference beam method, and the MUSIC method may be used to estimate the azimuth angle and the elevation angle.
  • the phase difference method, the sum-difference beam method, the MUSIC method and similar algorithms can be obtained from the prior art, and will not be repeated here.
  • the millimeter wave radar module determines the coordinates of the reflection point in the fourth coordinate system.
  • the millimeter-wave radar determines the coordinates of the reflection point in the fourth coordinate system according to the distance between the reflection point and the millimeter-wave radar and the signal direction of the reflection point.
  • the millimeter wave radar module determines the coordinates of the smart device or the user in the fourth coordinate system.
  • a clustering algorithm is used to cluster the detected reflection points, and multiple reflection points are clustered into smart devices or users.
  • Clustering algorithms include: partition-based clustering methods, density-based partition methods, model-based partition methods, network-based partition methods, etc.
  • common clustering algorithms include density-based spatial clustering of applications with noise (DBSCAN), K-Means algorithm, Birch algorithm, etc. Any clustering algorithm may be used for clustering processing, which is not limited in this embodiment of the present application.
  • the coordinates of the smart device or the user in the fourth coordinate system can be calculated according to the average coordinates of multiple reflection points of the smart device or the user after clustering.
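The cluster-then-average step can be sketched with a simplified density-based clustering routine (a pure-Python stand-in for DBSCAN and the other algorithms named above; `eps` and `min_pts` are illustrative):

```python
import numpy as np

def cluster_points(points, eps=0.5, min_pts=3):
    """Simplified density-based clustering (DBSCAN-like): greedily grow
    clusters through eps-neighborhoods; keep clusters with at least
    min_pts reflection points. Returns lists of point indices."""
    points = np.asarray(points)
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        frontier, members = [seed], {seed}
        while frontier:
            i = frontier.pop()
            dist = np.linalg.norm(points - points[i], axis=1)
            for j in np.where(dist <= eps)[0]:
                if j in unassigned:
                    unassigned.remove(j)
                    members.add(j)
                    frontier.append(j)
        if len(members) >= min_pts:
            clusters.append(sorted(members))
    return clusters

def centroid(points, idx):
    """Target coordinates = average of the clustered reflection points."""
    return np.asarray(points, dtype=float)[idx].mean(axis=0)

pts = [[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],  # reflections from one body
       [5, 5, 1], [5.1, 5, 1], [5, 5.1, 1]]  # reflections from another target
clusters = cluster_points(pts)
```

Each cluster's centroid then serves as the smart device's or user's coordinates in the fourth coordinate system, as described above.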
  • the millimeter wave radar module tracks smart devices or users.
  • the millimeter wave radar module performs target detection on each frame of received data. After the object (smart device) or human body in each frame is detected using the target detection algorithm and the clustering algorithm, the detection result in the current frame can be matched one-to-one with the detection result in the previous frame through an association algorithm (a tracking algorithm, which may also be called a front-and-rear frame association algorithm), so as to realize tracking of the object or human body (that is, to obtain the change of the coordinates of the object or human body over time).
  • the millimeter wave radar module can determine whether the target is stationary or moving according to the target tracking result.
  • the millimeter wave radar module can also be used to detect the physiological characteristics (such as breathing rate, heartbeat rate) of the target in a stationary state. If it is determined that the physiological characteristics of the target meet the set conditions (for example, the breathing rate is within a preset range, and the heart rate is within a preset range), then it is determined that the target or the clustered target is a human body (user); and the user is tracked .
  • the millimeter wave radar module detects the user's physiological characteristics, identity category and human body posture, etc.
  • the millimeter-wave radar module detects information such as the user's physiological characteristics, identity category, and human body posture in conjunction with the accompanying drawings.
  • the millimeter wave radar module detects the user's physiological characteristics
  • the user's physiological characteristics include the user's breathing rate, heart rate, and the like.
  • the slight displacement of the user's body caused by breathing and heartbeat can cause the phase change of the reflected signal of the millimeter-wave radar module.
  • the breathing frequency and heartbeat frequency of the user can be obtained by detecting the phase change of the reflected signal of the millimeter-wave radar module when the user is stationary.
  • the method for the millimeter-wave radar module to obtain the user's breathing frequency and heartbeat frequency may include:
  • a range FFT is performed on each frame of data of the millimeter-wave radar; according to the result of the range FFT, the frequency of the difference frequency signal, and hence its phase, can be obtained.
  • the millimeter-wave radar tracks the user's target, and can obtain the change of the user's position over time, that is, the user's position at a certain moment can be obtained.
  • the phase extraction is performed on the RangeFFT result at the user's current position, that is, the phase information of the difference frequency signal is extracted.
  • the radar scanning period is 100ms, that is to say, the period of one frame of data is 100ms.
  • the phase information of the beat frequency signal is extracted once for each frame of data.
  • if the phase value calculated in S2201 is greater than π, phase unwrapping is performed by subtracting 2π from the phase value; if the phase value calculated in S2201 is less than −π, phase unwrapping is performed by adding 2π to the phase value.
  • phase difference operation is performed on the unwrapped phase by subtracting successive phase values, resulting in a phase difference ⁇ v; this enhances the heartbeat signal and removes any phase drift.
  • Δv(k) = v(k) − v(k−1).
  • the phase values are filtered with a band-pass filter for discrimination according to the heart rate and respiratory rate, respectively. For example, set the passband range of the bandpass filter to 0.8Hz-4Hz, filter the phase value, and detect heartbeat; set the passband range of the bandpass filter to 0.1Hz-0.6Hz, and filter the phase value Values are filtered to detect respiration.
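The phase-processing chain above (unwrap, successive difference, band-pass) can be sketched on a synthetic phase signal. The sampling rate follows the 100 ms frame period mentioned earlier; the FFT-mask filter is a crude stand-in for the band-pass filters in the text, with the same 0.1–0.6 Hz respiration band:

```python
import numpy as np

FRAME_PERIOD = 0.1  # one frame every 100 ms -> 10 Hz phase sampling rate

def phase_diff(v):
    """delta_v(k) = v(k) - v(k-1): enhances heartbeat, removes phase drift."""
    return np.diff(v)

def bandpass_fft(x, f_lo, f_hi, fs=1.0 / FRAME_PERIOD):
    """Crude FFT-mask band-pass filter (stand-in for the band-pass
    filters in the text: 0.1-0.6 Hz respiration, 0.8-4 Hz heartbeat)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0
    return np.fft.irfft(X, n=len(x))

# synthetic phase: 0.3 Hz respiration + 1.2 Hz heartbeat + slow drift
t = np.arange(0, 60, FRAME_PERIOD)
phase = (0.5 * np.sin(2 * np.pi * 0.3 * t)
         + 0.05 * np.sin(2 * np.pi * 1.2 * t)
         + 0.01 * t)
resp = bandpass_fft(phase_diff(np.unwrap(phase)), 0.1, 0.6)
```

`np.unwrap` applies exactly the ±2π correction described in the unwrapping step; after the difference and band-pass, the dominant component of `resp` is the 0.3 Hz respiration tone.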
  • the millimeter wave radar module detects the identity category of the user
  • the millimeter wave radar module can determine the identity category of the user according to the calculated height h m (of the user).
  • User identity categories include adult, child, etc.
  • the millimeter-wave radar module calculates the height of the user detected in each frame of data, denoted as h m (t), where h m (t) represents the height value at time t.
  • the average value H m of the user's height can also be calculated according to h m (t), and the identity category of the user can be determined according to H m .
  • the corresponding relationship between the user's height and the user's identity category is shown in Table 1.
  • the millimeter wave radar module detects the user's human body posture
  • the millimeter wave radar module can determine the user's body posture according to changes in the calculated height h m of the user.
  • Human posture includes standing, sitting, lying down, etc.
  • the millimeter-wave radar module performs target tracking on the user. If it is determined that the height of the user has changed, and the value of the height change is greater than the preset height difference threshold, and the maintenance time after the height change is longer than the preset time length, then It is determined that the user's body posture changes. For example, as shown in (a) of FIG. 23 , the millimeter-wave radar module detects that the user's height changes from 175 cm to 80 cm, and keeps at 80 cm for a period of time, then determines that the user changes from standing to lying down. The millimeter-wave radar module detects that the height of the user changes from 175 cm to 120 cm, and keeps at 120 cm for a period of time, then it is determined that the user changes from standing to sitting.
  • the millimeter wave radar module determines the user's body posture according to the height difference ⁇ h between the user's current height and the standing height.
  • ⁇ h(t) can be calculated by formula (11).
  • ⁇ h(t) represents the height difference between the height of the user at time t and the height of the user standing, as follows:
  • when the millimeter-wave radar module determines that Δh satisfies the preset height difference threshold at multiple consecutive moments spanning more than a preset duration, it determines that the user's body posture has changed.
  • the corresponding relationship between height difference ⁇ h and human body posture is shown in Table 2.
  • the millimeter wave radar can also identify the user's fall behavior by monitoring the change of the user's height.
  • (b) of FIG. 23 shows the height change of the user when falling and when lying down normally. As shown in (b) of FIG. 23, compared with lying down normally, the user's height changes faster when falling (that is, the height difference within the same duration is larger), and the height after the fall is lower.
  • if the millimeter-wave radar module determines that the height difference Δh between the user's current height and the standing height meets the preset fall height threshold, and that the duration Δt for the user to change from the standing height to the current height meets the preset fall time threshold, it determines that the user has fallen.
  • the corresponding relationship between ⁇ h, ⁇ t and the user's fall is shown in Table 3.
  • the first electronic device 100 establishes the first coordinate system and the fourth coordinate system; to coordinate the two, it is necessary to convert between coordinate values in the first coordinate system and coordinate values in the fourth coordinate system. For example, the coordinates of the second electronic device 300 in the first coordinate system are transformed into its coordinates in the fourth coordinate system, or the coordinates of the user in the fourth coordinate system are transformed into the user's coordinates in the first coordinate system. Therefore, conversion between the first coordinate system and the fourth coordinate system is involved.
  • the antenna distribution of both the UWB module 150 and the millimeter wave radar module 160 of the first electronic device 100 can be set as shown in FIG. 24 .
  • the antenna 0, the antenna 1 and the antenna 2 present an L-shaped distribution on a longitudinal plane (for example, a vertical plane).
  • Transmitting antenna 0, transmitting antenna 1, and transmitting antenna 2 present a triangular distribution on a longitudinal plane (for example, a vertical plane); receiving antenna 0, receiving antenna 1, receiving antenna 2, and receiving antenna 3 are distributed on the same horizontal line in a longitudinal plane (for example, a vertical plane); and the three transmitting antennas and four receiving antennas are arranged on the same longitudinal plane.
  • the origin O e of the first coordinate system is set at the end point of antenna 0 (which can also be replaced by a center point, etc.); the connection line between antenna 0 and antenna 1 is used as the X e axis, and the direction that antenna 1 points to antenna 0 is the positive direction of X e axis.
  • a straight line perpendicular to the X e axis is the Z e axis of the first coordinate system, and the antenna 2 is located in the positive direction of the Z e axis.
  • the Y e axis and the positive direction of the Y e axis of the first coordinate system are determined.
  • the direction pointing to the receiving antenna 0 is the positive direction of the X m axis;
  • the longitudinal line passing through the origin O m (for example, a plumb line) is the Z m axis of the fourth coordinate system, and the direction pointing to the zenith is the positive direction of the Z m axis ;
  • the Y m axis of the fourth coordinate system and its positive direction are determined in combination with the right-hand rectangular coordinate system rule.
  • the X e axis is parallel to the X m axis
  • the Y e axis is parallel to the Y m axis
  • the Z e axis is parallel to the Z m axis.
  • the fourth coordinate system and the first coordinate system can be transformed into each other only by translation.
  • the first coordinate system, after moving along a direction parallel to the X e axis for a distance dx, then along a direction parallel to the Y e axis for a distance dy, and then along a direction parallel to the Z e axis for a distance dz, coincides with the fourth coordinate system.
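Since the two systems differ only by a translation, the conversion reduces to adding or subtracting the offset (dx, dy, dz); a minimal sketch, where the offset values are illustrative installation-specific numbers:

```python
import numpy as np

# offset of the fourth coordinate system's origin, expressed in the
# first coordinate system (dx, dy, dz); illustrative values in metres
offset = np.array([0.03, 0.0, -0.05])

def e_to_m(p_e):
    """First coordinate system -> fourth coordinate system."""
    return np.asarray(p_e, dtype=float) - offset

def m_to_e(p_m):
    """Fourth coordinate system -> first coordinate system."""
    return np.asarray(p_m, dtype=float) + offset

p_e = np.array([1.2, 3.4, 0.9])
print(e_to_m(p_e))                 # same point in the fourth system
print(m_to_e(e_to_m(p_e)))         # round trip returns the original point
```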
  • the relative positions of the first coordinate system and the fourth coordinate system of the first electronic device 100 may be set in other manners.
  • a similar method can be used to convert the fourth coordinate system and the first coordinate system, which will not be repeated here.
  • each room or each area is provided with one first electronic device.
  • the first electronic device acquires the position information of each device and each set area in the room or area by using the second electronic device that contains a UWB module to mark the second electronic device that does not contain a UWB module, and by communicating and interacting with the second electronic device that contains a UWB module.
  • the first electronic device obtains the location information of the user in the room or the area through the millimeter-wave radar module; and can further obtain information such as the user's physiological characteristics, identity category, and human body posture.
  • the first electronic device controls or notifies the second electronic device to perform a preset operation according to the received information. This example is for a single room or area.
  • a hub device may be set.
  • Devices such as the central device, the first electronic device, and the second electronic device form a whole-house system in a wired or wireless manner.
  • the first electronic device acquires the position information of each device and each set area in the room or area by using the second electronic device that contains a UWB module to mark the second electronic device that does not contain a UWB module, and by communicating and interacting with the second electronic device that contains a UWB module.
  • the first electronic device obtains the location information of the user in the room or the area through the millimeter-wave radar module, and obtains information such as the user's physiological characteristics, identity category, and human body posture.
  • the first electronic device sends the location information of each device and each set area, and at least one item of information such as the user's location, physiological characteristics, identity category, and human body posture, to the central device through wired or wireless means.
  • the central device controls or notifies the second electronic device to perform a preset operation according to the received information.
  • the hub device may be integrated with a specific first electronic device (for example, the first electronic device in the living room) into one device.
  • a fifth coordinate system (also known as the whole house coordinate system) needs to be established.
  • the user may input the floor plan of the whole house, the installation location of the central device, the location of the installation location of the central device in the floor plan of the whole house, the height information of the whole house, etc. into the central device.
  • the floor plan of the whole house is the planar space layout of the house, which is a picture describing the use function, relative position and size of each independent space in the whole house.
  • the central device establishes the fifth coordinate system according to the floor plan of the whole house.
  • the projection of the southernmost point of the whole house onto the horizontal plane is the first projection point, and a first straight line parallel to the east-west direction is drawn through the first projection point; the projection of the westernmost point of the whole house onto the horizontal plane is the second projection point, and a second straight line parallel to the north-south direction is drawn through the second projection point; the intersection point of the first straight line and the second straight line is taken as the origin O h of the fifth coordinate system.
  • the first straight line serves as the X h axis, and the due east direction is the positive direction of the X h axis.
  • the second straight line serves as the Y h axis, and the true north direction is the positive direction of the Y h axis.
  • the Z h axis is perpendicular to the horizontal plane, and the direction pointing to the sky is the positive direction of the Z h axis.
  • the names of the three axes in the fifth coordinate system and the positive directions of the three axes can also be determined in other ways, which will not be repeated here.
  • the first electronic device includes an IMU module.
  • a central device is installed in the living room, and the central device and the first electronic device in the living room are installed on the wall or the ceiling in parallel.
  • the included angle can be output by the IMU module of the first electronic device, or can be obtained through analysis based on the results output by the IMU module of the first electronic device, or can be calculated by measuring results of instruments such as a level gauge and/or a plumb gauge.
  • FIG. 25 shows the included angle θ between the positive direction of the Y g axis and the positive direction of the Y e axis.
  • the conversion between the sixth coordinate system and the fifth coordinate system is well known to those skilled in the art, and will not be repeated here.
  • the three axes of the sixth coordinate system are respectively parallel to the three axes of the fifth coordinate system. In this way, the conversion between the first coordinate system and the fifth coordinate system can be realized.
  • when the central device establishes the fifth coordinate system, the three axes of the fifth coordinate system are respectively parallel to the three axes of the sixth coordinate system, and the origin O h of the fifth coordinate system can also be determined according to the method shown in (a) of FIG. 25.
  • instruments such as a level and/or a plumb meter, or equipment including an IMU module, are used to assist, so that the three axes of the first coordinate system established by the first electronic device are respectively parallel to the three axes of the sixth coordinate system. In this way, the three axes of the first coordinate system are respectively parallel to the three axes of the fifth coordinate system, and conversion between the first coordinate system and the fifth coordinate system is not required.
  • the distance difference between the coordinate origins of the first coordinate system and the fifth coordinate system can be obtained through two coordinate values of the same central device in the first coordinate system and the fifth coordinate system.
  • the hub device can obtain coordinate information of the hub device in the fifth coordinate system.
  • the coordinate information of the central device in the first coordinate system can be obtained in two ways: (i) If the central device includes a UWB module, through the UWB communication between the central device and the first electronic device, the central device can be obtained at Coordinate information in the first coordinate system; (ii) If the central device does not include a UWB module, the second electronic device containing the UWB module can mark the central device to obtain the coordinate information of the central device in the first coordinate system.
  • the distance difference between the coordinate origins of the first coordinate system and the fifth coordinate system can thus be obtained.
  • all or some of the rooms are each equipped with a first electronic device, all or some of the areas are each equipped with a first electronic device, and a single room is equipped with one or more second electronic devices.
  • FIG. 26 shows the overall steps of the automatic control method based on human perception. As shown in (a) of Figure 26, the method may include:
  • the first electronic device establishes the first coordinate system and the fourth coordinate system
  • the second electronic device establishes the second coordinate system
  • the central device establishes the fifth coordinate system; through conversion of the first coordinate system, the second coordinate system, the third coordinate system, and the fourth coordinate system to the fifth coordinate system, position information of each device, each area, and each user in the fifth coordinate system is obtained.
  • The introduction of S1 is specifically divided into the following steps.
  • the first electronic device establishes the first coordinate system and the fourth coordinate system
  • the second electronic device establishes the second coordinate system
  • the central device establishes the fifth coordinate system.
  • the second electronic device including the UWB module establishes the second coordinate system
  • the second electronic device not including the UWB module establishes the third coordinate system
  • for the establishment of the first coordinate system and the fourth coordinate system by the first electronic device, the establishment of the second coordinate system by the second electronic device, and the establishment of the fifth coordinate system by the central device, please refer to the above principles, which will not be repeated here.
  • the first electronic device may be calibrated during the first use, so as to reduce or even avoid errors caused by installation.
  • the installation error of the first electronic device may reduce the measurement accuracy of the UWB system.
  • at least one of the first electronic device 1 located in the entrance hallway and the first electronic device 3 located in the living room may have an installation error.
  • the first electronic device 1 determines that the second electronic device is located at position 1
  • the first electronic device 3 determines that the second electronic device is located at position 2.
  • the first electronic device 1 and the first electronic device 3 determine that the electronic devices at positions 1 and 2 are actually the same second electronic device, which indicates that there is an installation error, which will reduce the measurement precision.
  • the first electronic device includes a UWB module and a millimeter-wave radar module, and can perform installation error correction on the UWB module and the millimeter-wave radar module respectively.
  • the antennas in the UWB module 150 and the millimeter wave radar module 160 of the first electronic device 100 are distributed as shown in FIG. 24. Since the hardware configuration ensures that the relative positions of the UWB module and the millimeter-wave radar module in the first electronic device are fixed, installation error correction may be performed on only the UWB module or only the millimeter-wave radar module. In another implementation manner, installation error correction can be performed on the UWB module and the millimeter-wave radar module of the first electronic device at the same time, and correction accuracy can be improved through multiple corrections.
  • the correction of the UWB module of the first electronic device is taken as an example for introduction. It can be understood that the process of calibrating the millimeter-wave radar module of the first electronic device is similar to the calibrating process of the UWB module of the first electronic device, which will not be repeated here. The following methods are merely exemplary, and do not limit the methods of correction. Other methods of correction are also within the scope of this application.
  • Step 11 Correct the installation error of the reference first electronic device through the central device, and obtain the first correction parameter.
  • the reference first electronic device is one first electronic device among the plurality of first electronic devices.
  • the central device is installed in a living room, the first electronic device in the living room may serve as the reference first electronic device.
  • the first coordinate system of the reference first electronic device is denoted as e1 system.
  • the central device can display a map of the whole house.
  • the hub device may instruct the user to hold a second electronic device including a UWB module and move from a known and easily identifiable location 1 to another known and easily identifiable location 2 according to the first track.
  • the reference first electronic device detects the movement trajectory of the second electronic device (through the UWB module) or the movement trajectory of the user (through the millimeter-wave radar module). Taking the movement trajectory of the second electronic device as an example, there may be a deviation between the detected movement trajectory of the second electronic device and the first trajectory.
  • the hub device may instruct the user to hold a second electronic device containing a UWB module to move from position one with known coordinates to position two with known coordinates along a straight line.
  • the first electronic device 1 can acquire the actual movement track of the second electronic device according to the detection of the UWB module.
  • the attitude error rotation matrix W and the position error vector G can be calculated by an algorithm.
  • the algorithm may adopt the ICP algorithm as shown in FIG. 28 .
  • the optimal matching attitude error rotation matrix W and position error vector G can be calculated to minimize the error function.
  • the first correction parameters include the optimal matching attitude error rotation matrix W and position error vector G at this time.
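With time-aligned samples of the instructed first track and the detected track, the best-fit [W, G] can be computed in closed form: with known correspondences, one ICP iteration reduces to the Kabsch algorithm. A sketch with synthetic data (the 10 degree tilt and offsets are invented for the check):

```python
import numpy as np

def fit_rotation_translation(p_src, p_dst):
    """Least-squares attitude error rotation matrix W and position error
    vector G such that p_dst ≈ W @ p_src + G (Kabsch algorithm)."""
    c_src = p_src.mean(axis=0)
    c_dst = p_dst.mean(axis=0)
    H = (p_src - c_src).T @ (p_dst - c_dst)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    W = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    G = c_dst - W @ c_src
    return W, G

# synthetic check: the "detected" track is the instructed track rotated
# 10 degrees about the vertical axis (installation tilt) and shifted
theta = np.deg2rad(10)
W_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
G_true = np.array([0.30, -0.10, 0.05])
instructed = np.random.default_rng(0).uniform(0.0, 5.0, (50, 3))
detected = instructed @ W_true.T + G_true

W, G = fit_rotation_translation(instructed, detected)
print(np.allclose(W, W_true), np.allclose(G, G_true))  # both recovered
```

When correspondences are unknown (trajectories sampled at different times), the full ICP loop alternates nearest-neighbour matching with this closed-form fit, which is the principle referenced in FIG. 28.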
  • the installation error of the first electronic device is corrected based on the reference first electronic device.
  • Step 12 Correct the installation error of the first electronic device by reference to the first electronic device, and obtain a second correction parameter.
  • the user holds the second electronic device including the UWB module and moves in each room of the whole house, and each first electronic device in the whole house locates the second electronic device.
  • the user moves in the overlapping area 2701 , as shown in (b) of FIG. 27 , there is a certain deviation in the movement trajectory of the second electronic device acquired by the first electronic device 1 and the first electronic device 3.
  • the first electronic device 1 is the reference first electronic device.
  • the installation errors of the two first electronic devices are corrected according to the moving trajectories of the second electronic devices respectively detected by the two first electronic devices.
  • the correction parameters include an attitude error rotation matrix W and a position error vector G.
  • the first electronic device 1 acquires the movement track of the second electronic device in the first coordinate system (e1 system) established by the first electronic device 1, denoted as q e1
  • the first electronic device 3 acquires the movement track of the second electronic device in the first coordinate system (e3 system) established by the first electronic device 3, denoted as q e3 , where q e1 (t n ) indicates the coordinates of the second electronic device in the e1 system detected by the first electronic device 1 at time t n , and q e3 (t n ) indicates the coordinates of the second electronic device in the e3 system detected by the first electronic device 3 at time t n .
  • the user movement trajectories q e1 and q e3 can be converted to the fifth coordinate system through the following formula (13), which are respectively recorded as q e1->h and q e3->h .
  • q e1->h and q e3->h are point clouds that record the user’s movement trajectory
  • the iterative closest point (ICP) algorithm can be used to calculate the difference between the two point clouds q e1->h and q e3->h
  • the attitude error rotation matrix W e3->e1 and the position error vector G e3->e1 are calculated such that the three-dimensional spatial error between the two cluster point clouds q e1->h and the corrected q e3->h is minimized.
  • the basic principle of the ICP algorithm is shown in Figure 28.
  • the second correction parameters include the optimal matching attitude error rotation matrix W and position error vector G at this time.
  • the above steps 11 and 12 can be exchanged in order.
  • the above step 11 and step 12 are only an example.
  • all the first electronic devices may be calibrated with the central device without going through the reference first electronic device.
  • each first electronic device can detect the user's movement track.
  • the relevant manner is similar to the processing manner of the movement track of the second electronic device, and will not be repeated here.
  • Step 13 Transform the coordinates of the second electronic device in the first coordinate system or the user's coordinates in the fourth coordinate system calculated by the first electronic device into coordinates in the fifth coordinate system.
  • an attitude error rotation matrix and a position error vector of the reference first electronic device relative to the hub device are acquired.
  • the attitude error rotation matrix and position error vector of the first electronic device 1 relative to the central device are denoted as [W e1->h , G e1->h ].
  • the attitude error rotation matrix and the position error vector of other first electronic devices relative to the reference first electronic device are corrected.
  • the attitude error rotation matrix and position error vector of other first electronic devices relative to the first electronic device 1 are recorded as [W ek->e1 , G ek->e1 ], k ⁇ 2,...,n.
  • the first electronic device 1 as a reference first electronic device as an example.
  • the coordinates of the origin of the first coordinate system of the k-th first electronic device (other than the reference first electronic device) in the fifth coordinate system, the coordinates of an arbitrarily selected point q in space in the first coordinate system established by the k-th first electronic device, and the coordinates of point q in the fifth coordinate system after installation error correction are each denoted by corresponding symbols. The coordinates of point q in the first coordinate system established by the k-th first electronic device can be converted to the fifth coordinate system after installation error correction through formula (12), as follows:
  • the coordinates of the origin of the first coordinate system established by the reference first electronic device in the fifth coordinate system, the coordinates of point q in the first coordinate system established by the reference first electronic device, and the coordinates of point q in the fifth coordinate system after installation error correction are each denoted by corresponding symbols. The coordinates of point q in the first coordinate system established by the reference first electronic device can be converted to the fifth coordinate system after installation error correction through formula (13), as follows:
  • the coordinates detected by the first electronic device are directly converted to the fifth coordinate system after installation error correction without passing through the reference first electronic device.
  • the attitude error rotation matrix between the user's movement track detected by the first electronic device and the actual user's movement track is denoted as W e->h
  • the position error vector is denoted as G e->h .
  • the coordinates of the origin of the first coordinate system established by the first electronic device in the fifth coordinate system, the coordinates of point q in the first coordinate system established by the first electronic device, and the coordinates of point q in the fifth coordinate system after installation error correction are each denoted by corresponding symbols; the coordinates of point q in the first coordinate system established by the first electronic device can be converted to the fifth coordinate system after installation error correction through formula (14), as follows:
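As a sketch of this direct conversion: the corrected coordinates combine the rotation W e->h, the vector G e->h, and the position of the first coordinate system's origin in the fifth coordinate system. Since the body of formula (14) is not reproduced above, the exact composition below (rotate, add the error vector, then shift by the origin) is an assumption, as are all numeric values:

```python
import numpy as np

def e_to_h(q_e, W_eh, G_eh, origin_e_in_h):
    """Convert a point from a first coordinate system to the fifth coordinate
    system with installation error correction. Assumed composition:
    rotate by W, add the error vector G, then shift by the origin's position."""
    return W_eh @ np.asarray(q_e, dtype=float) + G_eh + origin_e_in_h

# illustrative values: a 5 degree yaw installation error and small offsets
a = np.deg2rad(5)
W_eh = np.array([[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]])
G_eh = np.array([0.02, -0.01, 0.0])
origin = np.array([4.0, 2.5, 1.2])   # origin of the e system in the h system

q_h = e_to_h([1.0, 0.0, 0.0], W_eh, G_eh, origin)
print(q_h)
```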
  • the method of converting the coordinates of the user in the fourth coordinate system detected by the first electronic device to the fifth coordinate system after installation error correction is similar to the method of converting the coordinates detected in the first coordinate system to the fifth coordinate system after installation error correction, and will not be repeated here.
  • the UWB module of the first electronic device can be used to locate the second electronic device in the whole house.
  • the first electronic device in each room or area may determine the coordinates of the second electronic device in the room or area in the first coordinate system of the first electronic device.
  • the second electronic device in an overlapping area among the multiple first electronic devices may be positioned.
  • each first electronic device in the whole house transforms the acquired coordinates of one or more second electronic devices in the first coordinate system into the fifth coordinate system, and converts the coordinates of one or more second electronic devices The coordinates of the second electronic device in the fifth coordinate system are sent to the central device.
  • each first electronic device in the whole house sends the acquired coordinates of one or more second electronic devices in the first coordinate system to the central device, and the central device sends the coordinates received from each first electronic device The coordinates of the second electronic device in the first coordinate system are converted to the fifth coordinate system.
  • converting the coordinates of the second electronic device in the first coordinate system to the fifth coordinate system includes: converting the coordinates of the second electronic device in the first coordinate system to the fifth coordinate system after installation error correction.
  • the hub device may save the acquired coordinates of the second electronic device in the fifth coordinate system.
  • the second electronic device may be added or removed in the whole house, or the location of the second electronic device may change.
  • the central device periodically locates the second electronic device through the first electronic device in each room or area, and updates the coordinates of the second electronic device saved by the central device.
  • the central device detects a new second electronic device in the whole house, it triggers the first electronic device to locate the second electronic device, and updates the coordinates of the second electronic device saved by the central device.
  • the central device stores configuration information of all first electronic devices, second electronic devices, and other devices in the whole house.
  • the second electronic device connects to the central device and adds corresponding configuration information; the central device determines from the configuration information that a second electronic device has been added, and then triggers the first electronic device to locate the second electronic device.
  • the user can manually trigger the first electronic device to locate the second electronic device and update the coordinates of the second electronic device saved by the central device.
  • the user activates the first electronic device to locate the second electronic device through the human-computer interaction interface displayed on the control panel.
  • the control panel displays an interface 2901 for positioning the central device, and the interface 2901 for positioning the central device includes room options such as living room, dining room, and kitchen.
  • the user can select one or more of the room options, and click the "OK" button 2902 to start the first electronic device in the corresponding room to locate the central device in the room.
  • the interface 2901 of the positioning hub device further includes a "Cancel” button 2903, which is used to cancel the positioning of the IoT device.
  • the locating IoT device interface 2901 also includes a "select all" button 2904; the user can click the "select all" button 2904 to select all the rooms in the house, and click the "OK" button 2902 to activate the first electronic devices in the whole house to respectively locate the second electronic device.
  • the first electronic device in each room or area can periodically locate the users in the whole house and track the movement trajectory of each user. For example, if the period is 1 second, the first electronic device performs detection at a frequency of 10 Hz (10 times per second) and sends the detection result to the central device at a frequency of 1 Hz (once per second). Each first electronic device in the whole house can locate (acquire the coordinates of the user in the fourth coordinate system) and track (acquire the movement trajectory of the user in the fourth coordinate system) each user within the signal coverage area of the first electronic device.
  • one of the multiple first electronic devices may locate and track users in the overlapping areas.
  • each first electronic device in the whole house converts the acquired coordinates or movement trajectories of one or more users in the fourth coordinate system to the coordinates or movement trajectories of the fifth coordinate system, and The coordinates or movement tracks of one or more users in the fifth coordinate system are sent to the central device.
  • each first electronic device in the whole house sends the acquired coordinates or movement tracks of one or more users in the fourth coordinate system to the central device, and the central device sends The coordinates or movement track of the user in the fourth coordinate system are converted to the coordinates or movement track of the fifth coordinate system.
  • transforming the coordinates or movement trajectory of the user in the fourth coordinate system to the fifth coordinate system includes: converting the coordinates or movement trajectory of the user in the fourth coordinate system to the coordinates of the fifth coordinate system after installation error correction or moving tracks.
  • the hub device may save and periodically update the acquired user's position (for example, the user's coordinates in the fifth coordinate system) or movement track (the coordinate track in the fifth coordinate system).
  • step (2) is not necessary, but optional. For example, correction is needed at the time of initial installation; after that, calibration is generally not needed again, or is performed again only after a long period of use.
  • step (3) does not need to be carried out again.
  • step (3) is performed. That is, choose one of step (2) and step (3) to execute.
  • the conversion of the first coordinate system, the second coordinate system, the third coordinate system, and the fourth coordinate system to the fifth coordinate system may specifically be as follows: the conversion between the second coordinate system, the third coordinate system, and the first coordinate system can be accomplished according to the aforementioned principle.
  • the conversion between the fourth coordinate system and the first coordinate system has been clarified in the previous part of the principle.
  • once the second coordinate system, the third coordinate system, and the fourth coordinate system have all been converted to the first coordinate system based on the aforementioned principle, the transformation from the first coordinate system to the fifth coordinate system can then be realized.
  • the first electronic device may transform the coordinates in the first coordinate system or the fourth coordinate system of its own device into the fifth coordinate system. That is, the coordinates of the fourth coordinate system need not first be converted to the coordinates of the first coordinate system and then to the coordinates of the fifth coordinate system; based on the aforementioned principles, they can be converted directly to the coordinates of the fifth coordinate system. Afterwards, the transformed coordinates in the fifth coordinate system are sent to the central device. In this way the transformation from the fourth coordinate system to the fifth coordinate system is realized.
  • the above conversion is performed by the central device.
  • the first electronic devices respectively send the coordinates of the first coordinate system or the fourth coordinate system of their own devices to the central device, and the central device converts the coordinates based on the first coordinate system or the fourth coordinate system of each first electronic device to coordinates in the fifth coordinate system.
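The two conversion routes above (fourth coordinate system to first, then first to fifth; or fourth to fifth directly) are mathematically equivalent because coordinate transforms compose. A minimal sketch with hypothetical 2-D rotation and translation values (the real calibration parameters would come from the measurements described earlier in this application):

```python
import numpy as np

def make_transform(theta_rad, tx, ty):
    """Homogeneous 2-D transform: rotate by theta, then translate by (tx, ty)."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

# Hypothetical calibration results:
T_4_to_1 = make_transform(np.pi / 6, 0.5, 0.2)   # fourth -> first coordinate system
T_1_to_5 = make_transform(-np.pi / 4, 3.0, 1.0)  # first -> fifth coordinate system

# Direct conversion is just the composition of the two steps:
T_4_to_5 = T_1_to_5 @ T_4_to_1

p4 = np.array([1.0, 2.0, 1.0])          # a point in the fourth coordinate system
via_first = T_1_to_5 @ (T_4_to_1 @ p4)  # fourth -> first -> fifth
direct = T_4_to_5 @ p4                  # fourth -> fifth in one step

print(np.allclose(via_first, direct))   # both routes give the same coordinates
```

Either the first electronic device or the central device can hold `T_4_to_5`; the choice only affects where the multiplication happens, not the result.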
  • a reference first electronic device is set among the multiple first electronic devices.
  • the other first electronic devices except the reference first electronic device respectively send the coordinate information of the first coordinate system or the fourth coordinate system of their own devices to the reference first electronic device.
  • the reference first electronic device converts the coordinates of the first coordinate system or the fourth coordinate system based on each first electronic device to the coordinates of the fifth coordinate system, and sends the converted coordinates of the fifth coordinate system to the central device.
  • the second electronic device executes a preset operation.
  • the whole house is divided into one or more rooms and/or one or more areas, which do not overlap with each other.
  • the central device can locate the room or area through the first electronic device, acquire and save the coordinate range of each room or area.
  • the method in (b) of FIG. 14 can be used to obtain the coordinate range of each room or area.
  • the hub device can determine the room or area where each first electronic device and each second electronic device is located.
  • the user can query through the central device, for example, input the device name (the name of the first electronic device, the name of the second electronic device), the room or area where the device is located, and the like.
  • at least one first electronic device is installed in each room or area.
  • the hub device determines the room or area where each first electronic device is located according to user input.
  • the central device determines the room where each first electronic device or each second electronic device is located according to the coordinates of the first electronic device or the second electronic device and the coordinate range of each room or area in the whole house or area.
  • the method shown in (a) of Figure 14 can be used: a smartphone is placed at point A, point B, point C and point D in turn,
  • the coordinates of the four positions of point A, point B, point C and point D in the fifth coordinate system are obtained respectively through the first electronic device 100, and
  • the plumb line passing through position A, the plumb line passing through position B, the plumb line passing through position C and the plumb line passing through position D can determine the area range of a defined area in the room.
  • the coordinates of the second electronic device in the fifth coordinate system are taken to be the coordinates of the projected point Q of the second electronic device in the X_hO_hY_h plane.
  • ⊗ represents the vector cross product: AB ⊗ AQ is the cross product of vector AB and vector AQ, BC ⊗ BQ is the cross product of vector BC and vector BQ, CD ⊗ CQ is the cross product of vector CD and vector CQ, and DA ⊗ DQ is the cross product of vector DA and vector DQ. The cross product of two vectors is a scalar.
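The containment test implied above — whether the projected point Q falls inside the area delimited by points A, B, C and D — can be sketched as follows. The sign convention (all four scalar cross products sharing the same sign for a convex region whose vertices are given in order) is an assumption of this sketch, not a claim about the patent's exact formula:

```python
def cross(o, a, b):
    """Scalar (z-component) cross product of planar vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_area(q, a, b, c, d):
    """True if point Q lies inside the convex quadrilateral ABCD.

    Q is inside when AB x AQ, BC x BQ, CD x CQ and DA x DQ all share the
    same sign (vertices A, B, C, D given in boundary order).
    """
    signs = [cross(a, b, q), cross(b, c, q), cross(c, d, q), cross(d, a, q)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# Unit square with corners in counter-clockwise order:
A, B, C, D = (0, 0), (1, 0), (1, 1), (0, 1)
print(point_in_area((0.5, 0.5), A, B, C, D))  # True: Q inside the area
print(point_in_area((2.0, 0.5), A, B, C, D))  # False: Q outside the area
```

The same test, applied to the user's projected coordinates instead of the device's, also serves to decide which room or area a user is in.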
  • the central device may save a device information table in the whole house, and the device information table includes information of one or more devices (including but not limited to the first electronic device, the second electronic device, etc.) in the whole house.
  • the information of the device includes the name of the device, the room or area (room) where the device is located, etc.; optionally, the coordinates of the device (such as coordinates in the fifth coordinate system) may also be included.
  • the device information table is shown in Table 4.
  • the hub device can also determine the room or area the user is in.
  • at least one first electronic device is installed in each room or area.
  • the hub device determines the room or area where each first electronic device is located.
  • the room or area where each first electronic device is located is the room or area where the user can be detected by the first electronic device.
  • the central device determines the room or area where each user is located according to the coordinates of the user and the coordinate range of each room or area in the whole house.
  • the specific method may refer to the method in which the central device determines the room or area where each second electronic device is located according to the coordinates of the second electronic device and the coordinate range of each room or area in the whole house.
  • the central device periodically acquires the user's coordinates, and determines the room or area where the user is located according to the user's coordinates.
  • the central device obtains the information of the whole house according to the floor plan of the whole house, the installation position of the central device, the location of that installation position within the floor plan, the height information of the whole house, and the like.
  • the central device can determine, according to the room or area where the user is currently located and the room or area where the user was located in the last cycle, that the user has entered another room or area from one room or area, has left the whole house, has entered the whole house, and so on.
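The per-cycle comparison described above can be sketched as a simple function; the event labels below are illustrative, not taken from this application:

```python
def room_event(prev_room, curr_room):
    """Compare the user's room between two consecutive cycles.

    `None` means the user was not detected anywhere in the whole house
    during that cycle.
    """
    if prev_room == curr_room:
        return "no_change"
    if prev_room is None:
        return "entered_house"
    if curr_room is None:
        return "left_house"
    return f"moved:{prev_room}->{curr_room}"

print(room_event("master bedroom", "second bedroom"))  # moved:master bedroom->second bedroom
print(room_event(None, "living room"))                 # entered_house
print(room_event("living room", None))                 # left_house
```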
  • the central device can also obtain at least one item of information such as physiological characteristics, identity category, and human body posture from the first electronic device in each room or area, and can subsequently, based on that information, notify or control the corresponding second electronic device in the corresponding room or area to perform preset operations.
  • (b) of FIG. 26 shows an implementation of the automatic control method based on human body perception provided by the embodiment of the present application.
  • the UWB module of the first electronic device locates the devices, rooms, areas, etc. in the whole house, obtains the coordinates of the devices in the whole house and the room or area where they are located, and reports to the central device.
  • the millimeter-wave radar module of the first electronic device tracks the targets of the users in the whole house, and periodically reports the coordinates and the room or area of the users in the whole house to the central device.
  • the central device sends corresponding preset instructions to the second electronic device according to the coordinates of the second electronic device and the coordinates of the user.
  • if it is determined that the relative position between the first user and the second electronic device satisfies a preset condition, the second electronic device is controlled to execute a preset instruction.
  • the coordinates of the first user may be the coordinates of one user or the average value of the coordinates of multiple users.
  • the second electronic device executes preset instructions. In this way, for example, when the user approaches the smart light, the second electronic device can execute a preset instruction through the process shown in (b) of FIG. 26 , such as turning on the smart light.
  • the central device can determine, according to the coordinates of the second electronic device and the coordinates of the user, that the second electronic device should execute the preset instruction; alternatively, another device determines, according to the coordinates of the second electronic device and the coordinates of the user, that the second electronic device should execute the preset instruction; or the central device sends the coordinates of the second electronic device and the coordinates of the user to the second electronic device, and the second electronic device determines on this basis whether to execute the preset instruction. It can be understood that the embodiment of the present application does not limit which entity executes the automatic control method based on human perception.
  • some embodiments involve communication interactions between multiple different second electronic devices and the hub device, and even involve communication interactions between multiple different second electronic devices.
  • the following uses the first device, the second device, the third device, etc. to represent different second electronic devices for description.
  • the automatic control method based on human perception provided in the embodiment of the present application can be refined, in specific embodiments, into a method for automatically waking up a device based on human perception, a method for automatically switching devices based on human perception, and so on.
  • Embodiment 1 relates to FIG. 30A , FIG. 30B , FIG. 30C and FIG. 30D , and provides a method and system for automatically waking up a device based on human perception.
  • Second electronic devices such as smart speakers, smart refrigerators, etc. support voice wake-up. The user can wake up the second electronic device by voice.
  • when multiple second electronic devices that support voice wake-up are configured in the whole house, if the user utters a wake-up voice, multiple second electronic devices may wake up and respond. In fact, the user may only want to wake up the second electronic device closest to them. This annoys the user.
  • a smart speaker 1 and a smart speaker 2 are set up in the whole house or in a certain room (or area).
  • the user sends out the wake-up voice "Xiaoyi Xiaoyi", and both smart speaker 1 and smart speaker 2 receive the wake-up voice "Xiaoyi Xiaoyi".
  • both smart speaker 1 and smart speaker 2 are woken up, and respond by playing the voice "Where are you?"
  • the user may only want to wake up smart speaker 1. This brings inconvenience to the user. If different wake-up words are set for multiple different second electronic devices, the user needs to memorize multiple different wake-up words, which places higher memory demands on the user and is not user-friendly; this is obviously not a reasonable solution.
  • the embodiment of the present application provides an automatic control method based on human perception.
  • with the method provided by the embodiment of the present application, after one or more second electronic devices receive the wake-up voice input by the user, only the second electronic device closest to the user responds. In this way, multiple second electronic devices can be prevented from responding to the wake-up voice at the same time, thereby improving user experience.
  • smart speakers 1001 and 1002 are installed in the living room
  • smart speaker 1003 is installed in the master bedroom
  • smart speaker 1004 is installed in the second bedroom
  • smart speaker 1005 is installed on the balcony.
  • the device name of smart speaker 1001 is home theater (left)
  • the device name of smart speaker 1002 is home theater (right)
  • the device name of smart speaker 1003 is bedside speaker
  • the device name of smart speaker 1004 is desk speaker.
  • the device name of the speaker 1005 is balcony speaker.
  • the hub device can obtain the coordinates of each smart speaker in the fifth coordinate system and the room or area where each smart speaker is located; the hub device can also obtain the coordinates of each user and the room or area in which each user is located.
  • the hub device determines the smart speaker closest to the user based on the acquired coordinates of each smart speaker, the room or area where each smart speaker is located, the coordinates of each user, and the room or area where each user is located.
  • the user makes a wake-up voice
  • one or more second electronic devices receive the wake-up voice
  • user 1 sends out the wake-up voice "Xiaoyi Xiaoyi"
  • smart speaker 1001, smart speaker 1002, smart speaker 1003, and smart speaker 1004 respectively receive the wake-up voice from user 1, but only the smart speaker 1001 closest to the user responds with the voice "Are you here".
  • FIG. 30C shows a method for automatically waking up a device based on human perception in a whole-house scenario provided by an embodiment of the present application.
  • one or more second electronic devices receive the wake-up voice, and the method includes:
  • after receiving the wake-up voice, the one or more second electronic devices respectively send a first message to the central device.
  • the first message is used to indicate that the second electronic device has received the wake-up voice. For example, in the scenario in FIG. 30B, after user 1 utters the voice "Xiaoyi Xiaoyi", smart speakers 1001, 1002, 1003, 1004, and 1005 respectively send first messages to the central device.
  • each second electronic device may be the same or different.
  • both smart speakers and smart refrigerators support voice wake-up, and both can receive the wake-up voice.
  • each second electronic device may directly send the first message to the central device, or may indirectly forward the first message to the central device via the first electronic device or other devices.
  • the first message includes the identifier of the second electronic device and the first information.
  • the first information is used to indicate that the second electronic device has received the wake-up voice. Further, the first information is also used to indicate the volume of the wake-up voice received by the second electronic device.
  • after receiving the one or more first messages, the central device acquires, according to the received first messages, the coordinates of the one or more second electronic devices that sent them and the coordinates of the user.
  • the central device receives the first messages sent by the smart speaker 1001, the smart speaker 1002, the smart speaker 1003, the smart speaker 1004, and the smart speaker 1005. Afterwards, the central device can obtain the coordinates of the smart speaker 1001, the smart speaker 1002, the smart speaker 1003, the smart speaker 1004, and the smart speaker 1005 in the same coordinate system (for example, the fifth coordinate system), as well as the coordinates of the user in that same coordinate system.
  • the central device acquires, periodically or in real time, the coordinates of all the second electronic devices in the whole house.
  • after the central device acquires the first messages sent by multiple second electronic devices (for example, smart speaker 1001, smart speaker 1002, smart speaker 1003, smart speaker 1004, and smart speaker 1005), it queries the coordinates of these second electronic devices from the acquired coordinates of all the second electronic devices.
  • alternatively, the central device may, after obtaining the first messages sent by multiple second electronic devices (for example, smart speaker 1001, smart speaker 1002, smart speaker 1003, smart speaker 1004, and smart speaker 1005), acquire the coordinates of these multiple second electronic devices and the coordinates of the user. That is, in this case the central device has not previously obtained the coordinates of all the second electronic devices in the whole house.
  • the hub device not only obtains the coordinates of the one or more second electronic devices that send the first message, but also obtains the room or area to which the one or more second electronic devices belong.
  • the hub device not only obtains the coordinates of the user, but also obtains the room or area to which the user belongs.
  • the time point at which the central device receives the first of the first messages is taken as the timing start point, and one or more first messages are received within a first duration (for example, 0.5 seconds) after that start point.
  • the one or more first messages correspond to one or more second electronic devices (eg, the first device and the second device).
  • the first duration is preset and can be adjusted as required.
  • the central device receives the first messages from smart speakers 1001, 1002, 1003, 1004, and 1005 respectively within the first duration from the timing start point, and obtains, according to the saved device information (for example, the information in Table 4), the coordinates of smart speaker 1001, smart speaker 1002, smart speaker 1003, smart speaker 1004, and smart speaker 1005.
  • the central device determines the second electronic device closest to the user according to the coordinates of the above-mentioned one or more second electronic devices and the coordinates of the user.
  • the hub device determines the location information of the second electronic devices that sent the first message and the rooms or areas where they are located. The hub device then obtains the coordinates of the users in those rooms or areas. According to the coordinates of the one or more second electronic devices in the room or area where a second electronic device that sent the first message is located, and the coordinates of the user, the hub device determines, from the one or more second electronic devices in that room or area, the second electronic device closest to the user.
  • when the central device receives only one first message within the first duration from the timing start point, it determines that the second electronic device sending that first message is the second electronic device closest to the user.
  • the central device receives multiple first messages within the first duration from the timing start point; the central device then identifies the multiple second electronic devices (for example, the first device and the second device) that sent them, and determines, according to the coordinates of the plurality of second electronic devices and the coordinates of the user, the second electronic device closest to the user. For example, if the central device finds that the distance between the first device and the user is 1 m and the distance between the second device and the user is 1.5 m, the first device is selected as the second electronic device closest to the user.
  • the first device, the second device and the user may or may not be located in the same room or area. This example is not limited to this.
  • the central device receives multiple first messages within the first duration from the timing start point; the central device then identifies the multiple second electronic devices (for example, the first device and the second device) that sent them, and determines, according to the coordinates of the multiple second electronic devices and the rooms or areas where they are located, as well as the coordinates of the user and the room or area where the user is located, the second electronic device closest to the user.
  • the central device obtains that the user is located in the living room, the second device is located in the master bedroom, the first device is located in the living room, the distance between the second device and the user (the straight-line distance between the two coordinates) is 1 m, and the distance between the first device and the user is 1.5 m; although the second device is closer to the user and the first device is farther away, the second device is located in a different room or area from the user while the first device is located in the same room or area as the user, so the first device is selected as the second electronic device closest to the user.
  • if the hub device determines from the plurality of second electronic devices that none of them qualifies as the second electronic device closest to the user, it does not instruct any second electronic device to respond to the wake-up voice.
  • the central device acquires that the user is located in the living room, the second device is located in the master bedroom, the first device is located in the second bedroom, the distance between the second device and the user (the straight-line distance between the two coordinates) is 1 m, and the distance between the first device and the user is 1.5 m; since both the first device and the second device are located in rooms or areas different from the user's, no second electronic device is selected as the one closest to the user, and the central device therefore does not instruct any second electronic device to respond to the wake-up voice.
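The selection logic in these examples — only devices in the same room or area as the user are eligible, and among those the smallest straight-line distance wins — can be sketched as follows (the device names, coordinates, and data layout are hypothetical):

```python
import math

def nearest_device(devices, user_xy, user_room):
    """Pick the responding device.

    Only devices in the user's room/area are eligible; among those, the one
    with the smallest straight-line distance to the user is chosen. Returns
    None when no device shares the user's room, in which case the central
    device instructs no device to respond.

    `devices` maps device name -> (x, y, room).
    """
    best_name, best_dist = None, math.inf
    for name, (x, y, room) in devices.items():
        if room != user_room:
            continue  # a device in another room or area is never selected
        dist = math.hypot(x - user_xy[0], y - user_xy[1])
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

devices = {
    "first device":  (3.0, 1.0, "living room"),    # 1.5 m from the user
    "second device": (0.5, 0.5, "master bedroom"), # closer, but wrong room
}
# The second device is nearer but in another room; the first device wins.
print(nearest_device(devices, (1.5, 1.0), "living room"))  # first device
print(nearest_device(devices, (1.5, 1.0), "bathroom"))     # None
```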
  • the hub device acquires the user's coordinates, and possibly also the room or area where the user is located. If there are S users (S being a positive integer greater than 1), in one embodiment the central device obtains, through the first information included in each first message, the volume of the wake-up voice received by the corresponding second electronic device, so that it can identify the second electronic device whose first information indicates the loudest received wake-up voice, and then, according to the room or area where that second electronic device is located, obtain the one or more users in that room or area.
  • according to the coordinates of the one or S1 users (S1 ≤ S), the coordinates of the center position of the one or S1 users are obtained (for example, by calculating the average value of the multiple coordinates).
  • the coordinates of that center position are taken as the coordinates of the user.
  • the coordinates of the center positions of the two users are obtained according to the coordinates of the two users.
  • the central device obtains the coordinates of the central positions of the S users directly according to the coordinates of the S users.
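Obtaining the center position as the average of the users' coordinates, as described above, is just a coordinate-wise mean:

```python
def center_position(coords):
    """Average the coordinates of one or more users to a single center position."""
    n = len(coords)
    xs = sum(x for x, _ in coords)
    ys = sum(y for _, y in coords)
    return (xs / n, ys / n)

print(center_position([(1.0, 2.0)]))              # one user: their own coordinates
print(center_position([(0.0, 0.0), (2.0, 4.0)]))  # two users: midpoint (1.0, 2.0)
```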
  • the central device obtains, through the first information in the first message from the smart speaker 1001, the volume of the wake-up voice received by the smart speaker 1001, and determines, according to the living room where the smart speaker 1001 is located, that the room or area where user 1 (who uttered the voice wake-up command) is located is the living room; in addition, the central device obtains that the distance between the smart speaker 1001 and user 1 is the smallest, so the central device determines that the second electronic device in the living room closest to the user is the smart speaker 1001.
  • two second electronic devices (for example, smart speaker 1001 and smart speaker 1002) are installed in the living room
  • one second electronic device (for example, smart speaker 1003) is installed in the master bedroom
  • one second electronic device (for example, smart speaker 1004) is installed in the second bedroom
  • one second electronic device (for example, smart speaker 1005) is installed on the balcony
  • the user sends out the wake-up voice "Xiaoyi Xiaoyi" in the bathroom
  • the two second electronic devices (for example, smart speaker 1001 and smart speaker 1002) in the living room respectively receive the user's wake-up voice.
  • the two second electronic devices respectively send the first message to the central device.
  • the central device receives the two first messages, determines that the room or area where the user is located is the bathroom, and determines that the room or area where the two second electronic devices (for example, smart speaker 1001 and smart speaker 1002) are located is the living room, thereby determining that the user and the two second electronic devices are not in the same room or area; it is therefore determined that no room or area in the whole house contains both a second electronic device and the user.
  • the hub device nevertheless determines one closest second electronic device (for example, smart speaker 1001).
  • the central device determines, from the two second electronic devices that sent the first message, that the second electronic device which received the wake-up voice at the loudest volume is the second electronic device closest to the user.
  • the central device sends a first indication message to a second electronic device closest to the user, for instructing it to respond to the wake-up voice.
  • the central device sends a first indication message to the smart speaker 1001, which is used to instruct the smart speaker 1001 to respond to the wake-up voice.
  • the central device does not send the first indication message to the other second electronic devices, so that the other second electronic devices do not respond to the wake-up voice.
  • a second electronic device closest to the user is woken up in response to the wake-up voice.
  • a second electronic device closest to the user is woken up in response to the wake-up voice after receiving the first indication message. Since the other second electronic devices have not received the first indication message, they do not respond to the wake-up voice, nor are they woken up.
  • the smart speaker 1001 plays the voice "Are you here" in response to receiving the wake-up voice "Xiaoyi Xiaoyi". Other smart speakers do not respond to the wake-up voice "Xiaoyi Xiaoyi" and will not be woken up.
  • the central device determines a second electronic device closest to the user
  • the second electronic device may determine whether to respond to the wake-up voice.
  • the central device sends the acquired coordinates of all smart speakers and information about the room or area where they are located to each smart speaker.
  • the central device sends device information (for example, the information in Table 4) to each smart speaker (for example, smart speaker 1001, smart speaker 1002, smart speaker 1003, smart speaker 1004 and smart speaker 1005).
  • Each smart speaker can obtain the coordinates of all smart speakers in the whole house and the room or area where they are located.
  • the central device can also send the acquired coordinates of the users in the whole house and the information of the room or area where they are located to each smart speaker.
  • the hub device may send the coordinates of all the smart speakers in a room or area and the coordinates of the user to each smart speaker in the room or area respectively.
  • each smart speaker can obtain the coordinates of all smart speakers and users in the room or area where it is located. Therefore, each smart speaker can determine whether it is the smart speaker closest to the user. Only the smart speaker closest to the user responds; the other smart speakers do not.
  • the method includes: each second electronic device that receives the wake-up voice separately determines the number of users in the room or area where it is located. If the determined number of users in that room or area is 0, it does not respond to the wake-up voice. If the determined number of users in the room or area is greater than 0, it obtains the distance between each second electronic device in the room or area and the user (or the center position of multiple users), and determines the second electronic device in the room or area nearest to the user (or to the center position of multiple users). Specifically, if a second electronic device determines that its distance from the user is greater than the distance of at least one other second electronic device in the room or area where it is located, that second electronic device does not respond to the wake-up voice.
  • a second electronic device may determine that it is the second electronic device closest to the user in the room or area where it is located. Afterwards, the second electronic device sends a first message to the arbitration device, the first message includes first information, and the first information is used to indicate that the second electronic device receives the wake-up voice and the volume of the wake-up voice.
  • the arbitration device receives at least one first message, determines a device from at least one second electronic device according to the volume of the wake-up voice in the at least one first message, and sends the first indication message to the device. After receiving the first indication message, the device responds to the wake-up voice and wakes up.
  • the arbitration device may be a central device, or any first electronic device or second electronic device in the pre-designated whole house.
  • if a second electronic device determines that it is the second electronic device closest to the user in the room or area where it is located, and determines that only one room or area contains both a second electronic device and the user, it no longer sends the first message to the arbitration device and directly responds to the wake-up voice.
  • the arbitration device receives the first message from at least one second electronic device.
  • the arbitration device starts timing from the time point when the first of the first messages is received, and, according to the at least one first message received before the timing reaches a second duration (for example, 1 second), determines the first message indicating the loudest wake-up voice volume, so as to determine one device from the at least one second electronic device.
  • if a second electronic device determines that no room or area in the whole house contains both a second electronic device and the user, or determines that the number of users in the room or area where it is located is 0, it may send a first message to the arbitration device, and the arbitration device determines the one second electronic device to be awakened in response to the wake-up voice.
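The arbitration flow above (collect first messages for a second duration starting from the first arrival, then pick the one reporting the loudest wake-up voice) can be sketched as follows; the message format, volume values, and timestamps are assumptions of this sketch:

```python
def arbitrate(first_messages, window_s=1.0):
    """Pick the single device to wake from the received first messages.

    Each message is (device_id, volume, t_received_s). Timing starts at the
    earliest message; messages arriving after `window_s` (the "second
    duration") are ignored; among the rest, the loudest reported wake-up
    voice wins.
    """
    if not first_messages:
        return None
    t0 = min(t for _, _, t in first_messages)
    in_window = [(dev, vol) for dev, vol, t in first_messages if t - t0 <= window_s]
    return max(in_window, key=lambda m: m[1])[0]

msgs = [
    ("smart speaker 1001", 62.0, 0.00),  # loudest message inside the window
    ("smart speaker 1003", 48.0, 0.30),
    ("smart speaker 1004", 70.0, 1.50),  # arrives too late: outside the window
]
print(arbitrate(msgs))  # smart speaker 1001
```

Only the chosen device then receives the first indication message and responds; the others stay silent.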
  • in this way, only one second electronic device is awakened in response to the user's wake-up voice; the second electronic device to be awakened is determined quickly and accurately, preventing multiple second electronic devices from being woken up by the same wake-up voice.
  • Embodiment 2 relates to FIG. 31 , FIG. 32A , FIG. 32B , FIG. 32C , FIG. 32D , FIG. 32E and FIG. 33 , and provides a method and system for automatically switching devices based on human perception.
  • multiple devices are configured in the whole house, for example, smart speakers, smart TVs, etc. are used to play video and audio. While a device is playing video or audio, the user may leave the device, or even leave the room or area where the device is located, and enter another room or area.
  • a smart speaker 1003 (bedside speaker) is configured in the master bedroom, and a smart speaker 1004 is configured in the second bedroom.
  • the smart speaker 1003 plays audio, while the smart speaker 1004 does not play any audio. At this time, the user leaves the master bedroom and enters the second bedroom; the smart speaker 1003 is still playing, and the smart speaker 1004 is not playing. The user can only faintly hear, from the second bedroom, the audio played by the smart speaker 1003, and the user experience is poor.
  • according to the method for automatically switching devices based on human body perception provided by the embodiment of the present application, while the first device is playing, when the user leaves the first room or first area where the first device is located and enters the second room or second area where the second device is located, playback automatically switches to the second device to continue, and the first device automatically stops playing.
  • This method allows audio or video playback to switch seamlessly and automatically when the user leaves the first room or the first area and enters the second room or the second area, without the user carrying any equipment during the movement. Thus, the user experience is better.
  • FIG. 32A, FIG. 32B, FIG. 32C, FIG. 32D and FIG. 32E exemplarily show some implementations of the method for automatically switching devices based on human body perception provided by the embodiment of the present application. It should be noted that the method for automatically switching devices based on human perception provided in the embodiment of the present application includes, but is not limited to, the flow charts in FIG. 32A, FIG. 32B, FIG. 32C, FIG. 32D and FIG. 32E.
  • the method for automatically switching devices based on human perception may include:
  • the first device starts to play the first video or the first audio in response to the first user's operation.
  • the first device may be an audio playback device (for example, a smart speaker) or a video playback device (for example, a smart TV).
  • the first user approaches the first device to operate it. For example, the user turns on the smart speaker and plays audio by pressing its power switch. In another example, the user does not approach the first device, but casts the audio on a mobile phone to the smart speaker for playback.
  • the first device sends a first notification message to the central device.
  • the first notification message is used to indicate that the first device starts playing audio or video.
  • the first notification message may include at least one item of identification, version, address, etc. of the video or audio.
  • the central device receives the first notification message, and establishes a binding relationship between the first user and the first video or the first audio.
  • the hub device receives the first notification message, and determines that the first device is the playback device closest to the first user. For example, the user approaches the first device, operates the first device, and the first device plays. The first device is the playback device closest to the first user.
  • the hub device receives the first notification message, and determines according to the first notification message that the first device is playing. Based on the aforementioned principle, the central device periodically or in real time acquires the coordinates and the room or area where the second electronic device is located in the whole house. That is, after receiving the first notification message, the central device can acquire the coordinates and the room or area where the first device is located. The central device also periodically or in real time acquires the coordinates and the room or area of the users in the whole house. The hub device determines that the first device is the playback device closest to the first user, and establishes a binding relationship between the first user and the first video or the first audio.
  • the hub device receives the first notification message, obtains, according to the first notification message, the user account corresponding to the first device playing the first video or the first audio, and determines the user identity based on the user account. The hub device determines the current location of the user of that identity. If the user is located in the room or area where the first device is located, a binding relationship is established between the user (the first user) and the first video or first audio currently played by the first device.
  • the hub device receives the first notification message, and obtains the device identifier of the device (for example, a mobile phone used for screen projection) where the source of the first video or the first audio file is located according to the first notification message.
  • the hub device determines the device where the corresponding file source is located according to the device identifier, and obtains the location of the device where the file source is located.
  • the central device also determines, according to the location of the device where the file source is located and the user locations in the whole house, the user closest to the device where the file source is located as the first user, and establishes a binding relationship between the first user and the first video or the first audio.
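The nearest-device determination and binding described above can be sketched as follows. This is an illustrative sketch under assumed 2-D coordinates; the names (`nearest_device`, `on_first_notification`, `bindings`) and the dictionary-based representation are hypothetical, not from the source.

```python
import math

def nearest_device(user_xy, devices):
    """devices: {device_id: (x, y)}; return the device_id closest to user_xy."""
    return min(devices, key=lambda d: math.dist(user_xy, devices[d]))

bindings = {}  # user_id -> media_id (the binding relationship the hub keeps)

def on_first_notification(user_id, user_xy, media_id, devices, playing_device):
    """Bind the user to the media item if the playing device is nearest to them."""
    if nearest_device(user_xy, devices) == playing_device:
        bindings[user_id] = media_id
```

With the hub's periodically acquired coordinates, the same `nearest_device` helper can later identify the second device closest to the user after a move.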
  • the hub device acquires that the second device is the playback device closest to the first user.
  • the first user may move to another location while listening to or watching the content played by the first device.
  • the hub device acquires that the second device is the playback device closest to the first user.
  • the hub device acquires that among the playback devices that are not playing audio or video, the second device is the playback device closest to the first user.
  • the hub device determines to switch to the second device for playing.
  • the central device acquires the positions of the second electronic devices in the whole house periodically or in real time. If it is determined that the first device is still the closest playback device to the first user, S3204a-S3211a are not executed.
  • the hub device sends a first notification message to the first device.
  • the first notification message is used to instruct the first device to stop playing.
  • after receiving the first notification message, the first device automatically stops playing, and generates a first feedback message according to the first video or the first audio and its playing progress information.
  • the first feedback message includes an identifier of the first video or the first audio.
  • the first feedback message includes a storage address (such as a network address) of the first video or the first audio.
  • the first feedback message further includes playback progress information.
  • the playing progress information is used to indicate the current playing progress of the first video or the first audio (for example, what frame or second the first video or the first audio is played to).
  • the first device sends a first feedback message to the central device.
  • the hub device receives the first feedback message, and acquires the first video or the first audio and its playing progress information.
  • the hub device sends a second notification message to the second device.
  • the second notification message is used to instruct the second device to start playing.
  • the second notification message includes the information of the first video or the first audio (for example, the identification of the first video or the first audio); optionally, the second notification message also includes the information of the first video or the first audio or the playback progress information of the first audio.
  • the second device receives the second notification message, acquires the first video or first audio and its playing progress information, and continues playing the first video or first audio according to the playing progress information.
  • the second device receives the second notification message, acquires the first video or the first audio according to the information of the first video or the first audio, and starts playing the first video or the first audio.
  • the second notification message includes playing progress information of the first video or the first audio. According to the first video or the first audio and its playing progress information, the second device starts playing the first video or the first audio from the moment or frame indicated by the playing progress information. In this way, seamless switching between the second device and the first device to play video or audio can be realized.
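The seamless hand-off carried by the first feedback message and the second notification message can be sketched as follows. The field names (`media_id`, `progress_s`) and the `PlaybackDevice` class are assumptions for illustration; the source specifies only that the identifier and playing progress of the media are carried so the second device resumes from the indicated moment or frame.

```python
def make_first_feedback(media_id: str, progress_s: float) -> dict:
    """First device reports what it was playing and how far it got."""
    return {"media_id": media_id, "progress_s": progress_s}

def make_second_notification(feedback: dict) -> dict:
    """Hub forwards the media info and, optionally, the playing progress."""
    return {"media_id": feedback["media_id"],
            "progress_s": feedback.get("progress_s", 0.0)}

class PlaybackDevice:
    def __init__(self):
        self.media_id = None
        self.position_s = 0.0

    def on_second_notification(self, msg: dict):
        # start playing from the moment indicated by the progress information
        self.media_id = msg["media_id"]
        self.position_s = msg["progress_s"]
```

If the second notification omits the progress, the second device simply starts the media from the beginning, matching the non-optional variant described above.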
  • the second device sends a second feedback message to the central device.
  • the second device sends a second feedback message to the central device, which is used to indicate that the switching is completed, and the second device starts playing.
  • the hub device updates the stored running information of the first device and the second device.
  • the running information includes running status.
  • the central device updates the running state of the first device to not playing, and updates the running state of the second device to playing.
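The running-information update at the end of the flow can be sketched as follows; the table shape and function name are hypothetical, the source states only that the hub marks the first device as not playing and the second as playing after the second feedback message confirms the switch.

```python
# Possible shape of the running information the hub keeps per device.
running_info = {
    "first_device": {"state": "playing"},
    "second_device": {"state": "not playing"},
}

def on_second_feedback(old_dev: str, new_dev: str):
    """Second feedback received: the switch is complete, so swap the states."""
    running_info[old_dev]["state"] = "not playing"
    running_info[new_dev]["state"] = "playing"
```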
  • the method for automatically switching devices based on human perception may include:
  • the first device starts to play the first video or the first audio in response to the first user's operation.
  • the first device sends a first notification message to the central device.
  • the central device receives the first notification message, and establishes a binding relationship between the first user and the first video or the first audio.
  • S3201b-S3203b refer to S3201a-S3203a, which will not be repeated here.
  • the first device starts to play the second video or the second audio in response to the second user's operation.
  • when the first device plays the first video or the first audio, the second user operates the first device. For example, the second user approaches the first device and turns it on by pressing its switch button, so that the first device starts to play. In another example, the second user casts the audio on his mobile phone to the first device for playback.
  • the first device sends a second notification message to the central device.
  • the second notification message is used to indicate that the first device switches to play audio or video.
  • the second notification message may include at least one item of identification, version, address, etc. of the switched video or audio.
  • the central device receives the second notification message, and establishes a binding relationship between the second user and the second video or the second audio.
  • the hub device receives the second notification message, and determines that the first device is the playback device closest to the second user. For example, the second user approaches the first device, operates the first device, and the first device plays. The first device is the playback device closest to the second user.
  • the hub device receives the second notification message, and determines that the first device is playing according to the second notification message.
  • the central device periodically or in real time obtains the coordinates and the room or area where the second electronic device is located in the whole house. That is, after receiving the second notification message, the central device can acquire the coordinates and the room or area where the first device is located.
  • the central device also periodically or in real time acquires the coordinates and the room or area of the users in the whole house.
  • the hub device determines that the first device is the playback device closest to the second user, and establishes a binding relationship between the second user and the second video or the second audio currently played by the first device.
  • the hub device receives the second notification message, and acquires the second user account corresponding to the second video or the second audio played by the first device according to the second notification message.
  • the user identity is determined according to the second user account.
  • the central device determines the current location of the user corresponding to the second user account. If the user is located in the room or area where the first device is located, a binding relationship between the user (the second user) and the second video or the second audio is established.
  • the hub device receives the second notification message, and acquires the device identifier of the device (for example, a mobile phone used for screen projection) where the source of the second video or the second audio file is located according to the second notification message.
  • the hub device determines the device where the corresponding file source is located according to the device identifier, and obtains the location of the device where the file source is located.
  • the central device also determines that the user closest to the device where the file source is located is the second user according to the location of the device where the file source is located and the user location in the whole house; establishes a binding relationship between the second user and the second video or second audio .
  • the hub device acquires that the second device is the playback device closest to the second user.
  • the second user may move to another location while listening to or watching the content played by the first device.
  • the hub device acquires that the second device is the closest playback device to the second user.
  • the hub device acquires that among the playback devices that are not playing audio or video, the second device is the playback device closest to the second user.
  • the hub device determines to switch to the second device for playing.
  • the central device acquires the positions of the second electronic devices in the whole house periodically or in real time. If it is determined that the first device is still the closest playback device to the second user, S3207b-S3215b are not executed.
  • the hub device sends a first notification message to the first device.
  • after receiving the first notification message, the first device automatically stops playing, and generates a first feedback message according to the second video or the second audio and its playing progress information.
  • the first device sends a first feedback message to the central device.
  • the central device receives the first feedback message, and acquires the second video or the second audio and its playing progress information.
  • the hub device sends a second notification message to the second device.
  • the second device receives the second notification message, acquires the second video or the second audio and its playing progress information, and continues playing the second video or the second audio according to the playing progress information.
  • the second device sends a second feedback message to the central device.
  • the central device receives the second feedback message, and acquires that the second device continues to play the second video or the second audio.
  • the method for automatically switching devices based on human perception may include:
  • the first device is located in the first room or the first area, and in response to the operation of the first user, the first device starts to play the first video or the first audio.
  • the first device may be an audio playback device (for example, a smart speaker) or a video playback device (for example, a smart TV).
  • the first device is located in a first room or a first area.
  • the smart speaker 1003 is located in the master bedroom. In response to the first user's operation, the first device starts to play the first video or the first audio.
  • the first device sends a first notification message to the central device.
  • the hub device receives the first notification message, obtains that the first user is located in the first room or the first area, and establishes a binding relationship between the first user and the first video or the first audio.
  • the hub device acquires that the first user leaves the first room or the first area and enters the second room or the second area; acquires that the second device is located in the second room or the second area.
  • the central device periodically or in real time acquires the coordinates and the room or area of the users in the whole house.
  • the hub device is then able to determine whether the first user leaves the first room or the first area and enters the second room or the second area.
  • the hub device acquires that the second device is the playback device closest to the first user in the second room or second area after the first user enters the second room or second area.
  • the hub device acquires playback devices that are not playing audio or video in the second room or the second area, and the second device is the playback device closest to the first user.
  • the hub device determines to switch to the second device for playing.
  • the hub device sends a first notification message to the first device.
  • after receiving the first notification message, the first device automatically stops playing, and generates a first feedback message according to the first video or first audio and its playing progress information.
  • the first device sends a first feedback message to the central device.
  • the central device receives the first feedback message, and acquires the first video or the first audio and its playing progress information.
  • the hub device sends a second notification message to the second device.
  • the second device receives the second notification message, acquires the first video or first audio and its playing progress information, and continues playing the first video or first audio according to the playing progress information.
  • the second device sends a second feedback message to the central device.
  • the method for automatically switching devices based on human perception may include:
  • the first device is located in the first room or the first area, and in response to the operation of the first user, the first device starts to play the first video or the first audio.
  • the first device may be an audio playback device (for example, a smart speaker) or a video playback device (for example, a smart TV).
  • the first device is located in a first room or a first area.
  • the smart speaker 1003 is located in the master bedroom. In response to the first user's operation, the first device starts to play the first video or the first audio.
  • the first device sends a first notification message to the central device.
  • the central device receives the first notification message, and establishes a binding relationship between the first user and the first video or the first audio.
  • in response to the second user's operation, the first device starts to play the second video or the second audio.
  • the first device sends a second notification message to the central device.
  • the central device receives the second notification message, and establishes a binding relationship between the second user and the second video or the second audio.
  • the central device acquires that the second user leaves the first room or the first area and enters the second room or the second area.
  • the central device periodically or in real time acquires the coordinates and the room or area of the users in the whole house.
  • the hub device can then determine whether the second user leaves the first room or the first area and enters the second room or the second area.
  • the hub device acquires that the second device is the playback device closest to the second user in the second room or the second area after the second user enters the second room or the second area.
  • the hub device acquires playback devices that are not playing audio or video in the second room or the second area, and the second device is the playback device closest to the second user.
  • the hub device determines to switch to the second device for playing.
  • the hub device sends a first notification message to the first device.
  • after receiving the first notification message, the first device automatically stops playing, and generates a first feedback message according to the second video or the second audio and its playing progress information.
  • the first device sends a first feedback message to the central device.
  • the central device receives the first feedback message, and acquires the second video or the second audio and its playing progress information.
  • the hub device sends a second notification message to the second device.
  • the second device receives the second notification message, acquires the second video or the second audio and its playing progress information, and continues playing the second video or the second audio according to the playing progress information.
  • the second device sends a second feedback message to the central device.
  • the central device receives the second feedback message, and acquires that the second device continues to play the second video or the second audio.
  • the method for automatically switching devices based on human perception may include:
  • the first device is playing.
  • the user listens to or watches the content played by the first device in the first room or the first area where the first device is located.
  • user 1 utters the voice "play my music” in the master bedroom.
  • the smart speaker 1003 (bedside speaker) receives the voice "play my music" and starts playing music in response.
  • User 1 listens to music in the master bedroom.
  • the central device determines that the first device is playing.
  • the central device obtains the running information of each second electronic device periodically or in real time through communication interaction. For example, when the first device starts playing, it sends a first notification message to the hub device; when the first device stops playing, it sends a third notification message to the hub device (the third notification message may include at least one of the identification, version, address, progress information, etc. of the video or audio). The same applies to other second electronic devices such as the second device.
  • the central device determines that the first user leaves the first room or the first area where the first device is located, and enters the second room or the second area.
  • the central device periodically or in real time acquires the coordinates and the room or area where the second electronic device is located in the whole house.
  • the central device also periodically or in real time acquires the coordinates of the users in the whole house and the room or area where they are located.
  • the hub device in turn can determine whether the user leaves the first room or the first area and enters the second room or the second area.
  • the user is the first user.
  • multiple users are included in the first area.
  • in an example, the user who leaves the first area first is taken as the first user.
  • when the first user leaves the first area and enters the second area, the hub device stops object tracking for the remaining users in the first area.
  • in another example, the last user to leave the first area is taken as the first user.
  • the central device determines that the user leaves the master bedroom where the smart speaker 1003 is located, and enters the secondary bedroom where the smart speaker 1004 is located.
  • the central device determines that the user enters the second room or the second area, and also determines the duration of the user's stay in the second room or the second area. If it is determined that the user stays in the second room or the second area for longer than a preset duration threshold (for example, 2 seconds), it is determined to switch the playback device.
  • otherwise, the playback device is not switched; after that, if the central device determines that the user finally enters the second room and the duration in the second room is greater than or equal to the preset duration threshold, the hub device notifies the first device to stop playing and the second device to start playing.
  • the smart speaker 1003 is playing music
  • the user leaves the master bedroom and enters the second bedroom through the living room.
  • the central device determines that the user leaves the master bedroom and enters the living room, but the time spent passing through the living room is less than the preset duration threshold, and the central device does not notify the smart speaker 1003 to stop playing music.
  • the smart speaker 1003 plays music without switching the smart speaker.
  • the central device determines that the user leaves the living room and enters the second bedroom. After the user's stay in the second bedroom exceeds the preset duration threshold, the central device notifies the smart speaker 1003 to stop playing music, and notifies the smart speaker 1004 in the second bedroom to start playing music.
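The dwell-time check in the example above (passing through the living room does not trigger a switch; settling in the second bedroom does) can be sketched as follows. The function name and signature are illustrative; the source specifies only a preset duration threshold, e.g. 2 seconds.

```python
DWELL_THRESHOLD_S = 2.0  # preset duration threshold from the description

def should_switch(entered_at: float, now: float,
                  threshold_s: float = DWELL_THRESHOLD_S) -> bool:
    """True if the user has stayed in the new room/area at least threshold_s."""
    return (now - entered_at) >= threshold_s
```

A brief pass through an intermediate room (dwell below the threshold) returns False, so the hub does not notify the first device to stop.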
  • the hub device sends a first notification message to the first device.
  • the first notification message is used to instruct the first device to stop playing.
  • the first device automatically stops playing after receiving the first notification message.
  • the first device generates a first feedback message according to the currently played first video or first audio and its playing progress information, and sends the first feedback message to the hub device.
  • the first feedback message includes the identification of the first video or the first audio and playing progress information.
  • the hub device sends a second notification message to the second device.
  • the second notification message is used to instruct the second device to start playing. If there are multiple playback devices in the second room or the second area: in an example, the hub device determines the playback device closest to the first user when the first user enters the second room or the second area as the second device; in another example, the hub device determines, among the playback devices not playing in the second room or the second area, the playback device closest to the first user as the second device; in another example, the hub device determines any playback device in the second room or the second area as the second device. Exemplarily, as shown in FIG. 31, the central device notifies the smart speaker 1003 to stop playing music, and notifies the smart speaker 1004 in the second bedroom to start playing music.
  • the second notification message includes information about the first video or the first audio (for example, the identification of the first video or the first audio); optionally, the second notification message also includes playback progress information of the first video or the first audio.
  • the second device receives the second notification message and starts playing.
  • the second device obtains the first video or first audio and its playing progress information according to the second notification message, and continues playing the first video or first audio according to the playing progress information. This realizes seamless playback switching between the second device and the first device.
  • the second device also sends a second feedback message to the central device, which is used to indicate that the switching is completed, and the second device starts playing.
  • the hub device receives the second feedback message, determines that the second device starts to play, and updates the saved running information of the first device and the second device.
  • Fig. 32A- Fig. 32E exemplarily introduce some implementations of the method for automatically switching devices based on human body perception provided by the embodiment of the present application.
  • the hub device may maintain a switch for the automatic device switching function. If the switch of the automatic device switching function is turned on, the central device executes the processes described in FIG. 32A-FIG. 32E above, and automatically switches devices based on human body perception. If the switch of the automatic device switching function is turned off, the central device does not execute the processes described above in FIG. 32A-FIG. 32E. Users can turn the automatic device switching function on or off by setting parameters on the central device.
  • the hub device includes an input portion and a housing.
  • the input part may include a control panel.
  • Communication between the control panel and the chassis can be wireless or wired.
  • The control panel and the chassis can be integrated into one, or placed in different places. Considering factors such as ease of use and aesthetics, the control panel can be placed in a location that is easy for users to reach (for example, on the wall of the living room); the chassis takes up more space and can be placed in a less conspicuous location (for example, a storage room at home).
  • the hub device does not include an input part, and the hub device obtains input through communication with other devices.
  • one or more control panels are arranged in the whole house, and the control panels can provide input for the whole house system.
  • the control panel is generally arranged at a position convenient for the user to touch (for example, on the wall of the living room, etc.).
  • the user's input on the control panel can be transmitted through wireless or wired communication to any electronic device in the whole house (including the central device); the user only needs to select the corresponding electronic device on the human-computer interaction interface of the control panel.
  • the user can enable or disable the automatic device switching function on the human-computer interaction interface of the control panel.
  • the control panel displays a "My Home” interface 3301
  • the "My Home" interface 3301 includes a "TV in the Living Room" control 3302, a "TV in the Master Bedroom" control 3303, a "Home Theater (Left)" control 3304, a "Home Theater (Right)" control 3305, a "bedside speaker" control 3306, a "desk speaker" control 3307, etc.
  • the user can click a control to set the second electronic device corresponding to the control. For example, the user can click on the "bedside speaker” control 3306 to set the parameters of the bedside speaker.
  • the hub device displays the "bedside speaker” interface 3308.
  • the "bedside speakers” interface 3308 includes an option 3309 of "automatically switch playback speakers”. The user can click on the option 3309 of "automatically switch playback speakers” to turn on or off the function of automatically switching devices.
  • the user turns on or off the automatic device switching function by voice.
  • the user sends out the voice “automatically switch the playback speaker”
  • the smart speaker 1003 receives the voice “automatically switch the playback speaker” and then notifies the central device to enable the automatic switching device function.
  • the hub device can obtain user input from the control panel, and turn on or off the function of automatically switching devices according to the user input.
  • the central device may also obtain the coordinates of the control panel and the room or area where it is located.
  • the embodiment of the present application further provides a manner of manually switching devices.
  • the hub device obtains an input from the control panel that the user activates the function of automatically switching devices.
  • the hub device obtains the fourth room or the fourth area where the control panel receiving the user input is located. In an example, the hub device determines that the fourth room (or fourth area) is different from the first room (or first area).
  • the hub device sends a first notification message to the first device, instructing the first device to stop playing; sends a second notification message to a third device in the fourth room or in the fourth area, instructing the third device to start playing.
  • the third device is any playback device in the fourth room or the fourth area.
  • the third device is the playback device closest to the control panel in the fourth room or the fourth area. In this way, the user can manually switch playback of video or audio to a playback device in the room or area where the user is located.
  • Embodiment 3 relates to FIG. 34A , FIG. 34B and FIG. 35 , and provides a stereo automatic adjustment method and system based on human perception.
  • two audio playback devices can establish a stereo system through a wired connection or a wireless connection (such as Bluetooth, Wi-Fi direct connection, etc.).
  • the two audio playback devices can be set to play the stereo left channel and the stereo right channel respectively.
  • the smart speaker 1001 located on the left side of the smart TV is set to play the stereo left channel; the smart speaker 1002 located on the right side of the smart TV is set to play the stereo right channel.
  • when two audio playback devices respectively play the stereo left channel and the stereo right channel at a preset volume (for example, the same volume for both), the sound attenuates as it propagates through space in the same propagation medium (such as air): the farther the distance, the greater the attenuation.
  • if the distances between the user and the two audio playback devices differ greatly, the volumes reaching the user from the two audio playback devices also differ greatly. For example, when the distance between the user and one audio playback device is double the distance to the other, the volumes reaching the user from the two audio playback devices can differ by up to 6dB (decibels), which is unfriendly to the user and results in a poor user experience.
  • when two audio playback devices that play the stereo left channel and the stereo right channel are playing, the stereo automatic adjustment method based on human body perception can automatically compensate the volume played by each of the two audio playback devices according to the distance between each audio playback device and the user.
  • the first device plays the stereo left channel
  • the second device plays the stereo right channel
  • the distance between the first device and the user is the first distance
  • the distance between the second device and the user is the second distance
  • the stereo automatic adjustment method provided in the embodiment can periodically or in real time adjust the first volume played by the first device and the second volume played by the second device according to the first distance and the second distance.
  • the stereo automatic adjustment method based on human perception provided in the embodiment of the present application may include:
  • the central device acquires that the stereo system including the first device and the second device is playing, the first device playing the stereo left channel and the second device playing the stereo right channel.
  • the first device is the smart speaker 1001 in the living room
  • the second device is the smart speaker 1002 in the living room.
  • the central device acquires that the stereo system including the smart speaker 1001 and the smart speaker 1002 is playing.
  • the hub device acquires the coordinates of the first device and of the second device (the first device and the second device being located in the same room or area, or in adjacent areas) as well as the user's coordinates, and, according to the first device coordinates, the second device coordinates and the user coordinates, obtains the first distance r1 between the first device and the user and the second distance r2 between the second device and the user.
  • the hub device can obtain the coordinates of the first device and the room or area it belongs to, and the coordinates of the second device and the room or area it belongs to.
  • devices in the same stereo system are located in the same room or area. It should be noted that devices in the same stereo system may also be located in adjacent areas. For example, the living room and the dining room are connected together, with less separation and obstruction between the two, and weaker spatial independence; devices in the same stereo system can be located in the living room and dining room respectively.
  • the central device determines that the smart speaker 1001 (home theater (left)) is set to play the stereo left channel, obtains its coordinates, and determines that the room or area where it is located is the living room; the central device determines that the smart speaker 1002 (home theater (right)) is set to play the stereo right channel, obtains its coordinates, and determines that the room or area where it is located is the living room.
  • the hub device can also obtain the coordinates of users in the room or area (or adjacent areas).
  • a user is included in the room or zone to which the stereo system belongs.
  • the room or area to which the stereo system belongs includes multiple users, and the acquired coordinates of the user are an average value of the coordinates of the multiple users.
  • for example, the hub device obtains the coordinates of user 1 in the living room and the coordinates of user 2, and then determines the user's coordinates as the average of the two.
  • the hub device acquires the first distance r1 between the first device and the user and the second distance r2 between the second device and the user according to the first device coordinates, the second device coordinates, and the user coordinates.
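The coordinate-averaging and distance steps above can be sketched as follows. This is only an illustrative sketch: the function names and the coordinate values are our assumptions, not taken from the patent, and the hub's actual positioning (e.g. via UWB) is abstracted away.

```python
import math

def average_coordinates(users):
    """Average the coordinates of multiple users, used when the room
    or area to which the stereo system belongs contains several users."""
    n = len(users)
    return tuple(sum(u[i] for u in users) / n for i in range(3))

def distance(a, b):
    """Euclidean distance between two 3-D coordinates."""
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

# Two users in the living room; the hub uses their average position.
user = average_coordinates([(1.0, 2.0, 1.2), (3.0, 2.0, 1.2)])
first_device = (0.0, 0.0, 0.5)    # e.g. home theater (left), assumed position
second_device = (4.0, 0.0, 0.5)   # e.g. home theater (right), assumed position

r1 = distance(first_device, user)  # first distance
r2 = distance(second_device, user) # second distance
```

With the symmetric positions chosen here, r1 and r2 come out equal, so no compensation would be needed; the interesting case is when they differ.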
  • the hub device determines first volume information for the first device and second volume information for the second device according to the first distance and the second distance.
  • K is the distance attenuation factor, indicating the volume attenuation caused by distance.
  • DL w is the directivity factor, indicating the volume attenuation caused by the direction in which the audio signal is emitted;
  • A e is the air absorption attenuation, indicating the volume attenuation caused by the air absorbing the audio as the audio propagates in the air.
  • for the first device and the second device, DL w is the same and A e is the same; therefore, the greater the distance r between an audio playback device and the user, the greater the value of K and hence the greater the value of A p , that is, the greater the attenuation of the sound as it propagates in space.
  • the hub device uses formula (20) to calculate the values of A p corresponding to the first device and the second device respectively, and determine the playback volumes of the first device and the second device according to the values of A p .
  • the hub device determines that both the first device and the second device adjust the playback volume: the playback volume of the first device and that of the second device are each increased by the respective value of A p .
  • the preset value of stereo playback volume is 20dB.
  • the central device calculates, according to r1, that the A p1 value corresponding to the first device is 3dB, and then determines that the playback volume of the first device is increased by 3dB; the central device calculates, according to r2, that the A p2 value corresponding to the second device is 8dB, and then determines that the playback volume of the second device is increased by 8dB. That is, the playback volume of the first device is 23dB (20dB+3dB), and the playback volume of the second device is 28dB (20dB+8dB).
  • the hub device determines which of the first device and the second device has the larger volume attenuation, and adjusts only that device's playback volume. For example, if the playback volume attenuation value of the first device is A p1 , the playback volume attenuation value of the second device is A p2 , and A p1 is smaller than A p2 , then the playback volume of the second device (the device with the larger attenuation value) is set higher than the preset value by (A p2 -A p1 ). Exemplarily, the stereo playback volume preset value is 20dB.
  • the central device calculates the value of A p1 corresponding to the first device according to r1 to be 3dB, calculates the value of A p2 corresponding to the second device according to r2 to be 8dB, and then determines that the playback volume of the second device is increased by (A p2 -A p1 ); that is, the playback volume of the first device is 20dB, and the playback volume of the second device is 25dB (20dB+8dB-3dB).
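The two compensation strategies above can be sketched in a few lines. Note the assumptions: the patent's formula (20) combines K, DL w and A e into A p ; here we model only the distance term K as 20·log10(r/r_ref) (the free-field inverse-distance law) and treat DL w and A e as equal constants for both devices, which matches the statement that only the distance differs. The reference distance and constants are illustrative, not from the patent.

```python
import math

R_REF = 1.0   # reference distance (m) at which K = 0 dB, assumed
DLW = 0.0     # directivity factor, same for both devices (assumed value)
AE = 0.0      # air absorption attenuation, same for both devices (assumed value)

def attenuation(r):
    """Total propagation attenuation A_p for a device at distance r."""
    k = 20 * math.log10(r / R_REF)   # distance attenuation factor K
    return k + DLW + AE

def per_device_volumes(preset, r1, r2):
    """Strategy 1: raise each device's volume by its own A_p."""
    return preset + attenuation(r1), preset + attenuation(r2)

def farther_device_volume(preset, r1, r2):
    """Strategy 2: keep the device with the smaller attenuation at the
    preset volume and raise only the other one by (A_p2 - A_p1)."""
    a1, a2 = attenuation(r1), attenuation(r2)
    if a1 <= a2:
        return preset, preset + (a2 - a1)
    return preset + (a1 - a2), preset
```

Doubling the distance gives 20·log10(2) ≈ 6.02 dB of extra attenuation, consistent with the "up to 6dB" difference mentioned earlier in the description.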
  • the first volume information is used to indicate the playback volume value of the first device, and the second volume information is used to indicate the playback volume value of the second device.
  • the first volume information is used to instruct the first device to play the volume change value, and the second volume information is used to instruct the second device to play the volume change value.
  • the hub device sends a first play message to the first device, where the first play message includes the first volume information; and sends a second play message to the second device, where the second play message includes the second volume information.
  • if the central device determines that the playback volume of the first device is a preset value or that the playback volume does not change, the first play message does not include the first volume information; if the hub device determines that the playback volume of the second device is a preset value or that the playback volume does not change, the second play message does not include the second volume information.
  • the first device receives the first play message, obtains the first volume information, and plays according to the first volume information; the second device receives the second play message, obtains the second volume information, and plays according to the second volume information.
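A minimal sketch of these play messages, assuming a simple message shape: the field and function names below are our own, since the patent only states that a play message carries volume information and may omit it when the device should stay at the preset volume.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlayMessage:
    """Play message from the hub; volume_db is None when the device
    should keep playing at the preset volume (no volume information)."""
    volume_db: Optional[float] = None

def build_play_message(preset, computed):
    """Hub side: omit the volume information when unchanged."""
    if abs(computed - preset) < 1e-9:
        return PlayMessage()
    return PlayMessage(volume_db=computed)

def apply_play_message(preset, msg):
    """Device side: play at the indicated volume, else at the preset."""
    return msg.volume_db if msg.volume_db is not None else preset
```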
  • the central device adjusts the playback volumes according to the distances between the first device and the user and between the second device and the user, so that whichever of the first device and the second device is farther from the user plays louder; in this way, the volume the user receives from the first device is the same as the volume the user receives from the second device, which improves the user's experience of using the stereo system.
  • the user may move position.
  • the room or area where the stereo system is located includes one or more users, wherein at least one user moves in the room or area where the stereo system is located.
  • the number of users in the room or area where the stereo system is located changes, for example, the number of users changes from one to multiple, or from multiple to one.
  • the central device periodically acquires the user coordinates in the room or area where the stereo system is located, and if it is determined that the user coordinates change, S3502-S3505 is executed. In this way, as the user moves or the number of users changes, the playback volume of the first device and/or the second device can be adjusted in real time, so that the user can keep receiving the same volume from the first device and the second device.
  • the stereo automatic adjustment method based on human perception provided in the embodiment of the present application can also be used for more than two audio playback devices. The playback volume of each audio playback device is adjusted according to the distance between that audio playback device and the user (one or more of the audio playback devices may leave their playback volume unadjusted): the greater the distance between an audio playback device and the user, the louder its playback volume, so that the user receives the same playback volume from each audio playback device.
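The N-device generalization can be sketched as follows, under the same assumptions as before (only the distance term K varies between devices, modeled as 20·log10(r)); the device nearest the user keeps the preset volume, which makes it the device whose volume is not adjusted.

```python
import math

def equalized_volumes(preset, distances):
    """Boost each of N playback devices relative to the one with the
    smallest attenuation, so the user receives the same volume from all."""
    atts = [20 * math.log10(r) for r in distances]  # K only; DL_w, A_e equal
    a_min = min(atts)
    return [preset + (a - a_min) for a in atts]
```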
  • the stereo system also includes a third device.
  • the third device plays stereo auxiliary sound (subwoofer).
  • the third device also communicates and interacts with the central device in a manner similar to that of the first device or the second device in Figure 35, and the central device also obtains information related to the third device (for example, that the third device is located in the same room or area (or an adjacent area)); this is not shown in Figure 35.
  • the hub device determines the first volume information for the first device, the second volume information for the second device and the third volume information for the third device according to the first distance between the first device and the user, the second distance between the second device and the user, and the third distance between the third device and the user.
  • the hub device sends a first play message, a second play message and a third play message to the first device, the second device and the third device respectively, wherein the third play message includes third volume information.
  • the third volume information is used to instruct the third device to play a volume value.
  • the third volume information is used to instruct the third device to play a volume change value.
  • the third device receives the third play message, and plays according to the third volume information.
  • the stereo automatic adjustment method based on human body perception can also adjust other parameters of the playback device.
  • at least one playback device in the stereo system includes a camera, or the whole-house system includes a camera, and is located in the same room or area as the stereo system.
  • the camera is used to collect user images periodically or in real time and upload them to the central device.
  • the hub device can determine the orientation of the user according to the image of the user. For example, when the user faces the stereo system, the first device plays the stereo left channel, and the second device plays the stereo right channel.
  • if the central device determines that the user has turned to face away from the stereo system, it sends the first play information to the first device, the first play information being used to instruct playing the stereo right channel, and sends the second play information to the second device, the second play information being used to instruct playing the stereo left channel.
  • the first device receives the first play information and changes to play the stereo right channel; the second device receives the second play information and changes to play the stereo left channel.
  • Embodiment 4 relates to Fig. 36, Fig. 37A, Fig. 37B, Fig. 38, Fig. 39A, Fig. 39B, Fig. 40A, Fig. 40B, Fig. 40C, Fig. 41 and Fig. 42, and provides a device automatic playback method based on human perception and system.
  • the user watches video and/or listens to audio through the device.
  • some videos (for example, videos containing violence) and/or some audio are inappropriate for other users or certain categories of users (for example, children), and the user does not want other users or certain categories of users to see and/or hear them.
  • user 1 watches a video played on smart TV 300 in the living room.
  • user 2 (for example, a child) enters the living room.
  • using the remote controller to make the device exit playback is not immediate enough, and user 2 (for example, a child) may still see part of the picture and/or hear part of the sound, causing adverse effects.
  • with the device automatic playback method based on human body perception provided in the embodiment of the present application, if it is determined that other users or a specific category of users (such as children) enter the room or area where the device is located, the device automatically pauses playback and/or automatically switches; if it is determined that the other users or the specific category of users (for example, children) leave the room or area where the device is located, the device automatically resumes playback; no manual operation is required throughout the process.
  • the user 1 can set a private mode for the device.
  • in the private mode of the device, during playback, if it is detected that a user 2 (for example, a child) enters the room or area where the device is located, the device automatically pauses playback and/or automatically switches (for example, switches to a black screen or displays another screen saver); if it is detected that user 2 (for example, a child) leaves the room or area where the device is located, playback is automatically resumed (for example, the device switches back to the original video or continues playing the original audio).
  • the method for automatically playing a device based on human perception may include:
  • the hub device acquires that the first room or the first area where the first device is located has the first user but no second user, and acquires that the first device has entered a private mode and is playing.
  • the first device plays video and/or audio.
  • the first user watches video and/or listens to audio in a first room or a first area where the first device is located.
  • the user may set a private mode for the first device on the human-computer interaction interface of the control panel.
  • the control panel displays a "My Home” interface 3801, and the "My Home” interface 3801 includes a "TV in the Living Room” control 3802, a "TV in the Master Bedroom” control 3803, and a "Home Theater ( Left)” control 3804, "home theater (right)” control 3805, “bedside speaker” control 3806, “desk speaker” control 3807, etc.
  • the user can click the "TV in the living room” control 3802 to set the parameters of the TV in the living room.
  • the control panel displays a "TV in living room” interface 3810.
  • the "TV in Living Room” interface 3810 includes a “Private Mode” option 3811.
  • the user can click on the "private mode” option 3811 to turn on or off the private mode of the "TV in the living room".
  • the private mode of the "TV in the living room” is turned on, the smart TV enters the private mode; when the private mode of the "TV in the living room” is turned off, the smart TV exits the private mode.
  • the control panel notifies the central device that the first device enters or exits the private mode according to user input.
  • the hub device obtains that the first room or the first area where the first device is located has the first user but no second user, and obtains that the first device has entered a private mode and is playing.
  • the first user can be a specific user (for example, an adult) or a class of users (a class of people, such as adults); the second user can be a specific user (for example, a child) or a class of users (a class of people, such as children).
  • the hub device detects that at least one second user enters the first room or the first area.
  • the central device periodically or in real time obtains the user coordinates and user identities in the first room or the first area.
  • the hub device may detect that the second user enters the first room or the first area.
  • the hub device sends a first notification message to the first device.
  • the first notification message is used to instruct the first device to pause playback and switch to a screen saver.
  • after receiving the first notification message, the first device automatically pauses the playback and automatically switches to a screen saver.
  • Screensavers can be preset pictures or videos.
  • the screen saver may be a preset cartoon picture or video.
  • if the first device only plays audio, it may not be switched to a screen saver.
  • the first device sends a first feedback message to the central device.
  • the first feedback message is used to indicate that the first device has paused playback and switched to a screen saver.
  • the hub device receives the first feedback message, and learns that the first device has paused playback and switched to a screen saver.
  • the hub device detects that there is no second user in the first room or in the first area.
  • in an example, the hub device detects that there is no second user in the first room or the first area; in another example, the hub device detects that there is no second user in the first room or the first area and that the first user is in the first room or the first area.
  • the hub device sends a second notification message to the first device.
  • the second notification message is used to instruct the first device to switch to the original video or original audio and resume playing.
  • after receiving the second notification message, the first device automatically switches to the original video or original audio and automatically resumes playing.
  • if the hub device detects that there has been no user in the first room or the first area for more than a preset time period, it sends a third notification message to the first device to instruct the first device to stand by or shut down.
  • the first device automatically stands by or shuts down after receiving the third notification message.
  • when the device is in private mode, if the central device detects that a specific category of user (such as a child) enters the room or area where the device is located, the device automatically pauses playback and switches to the screen saver; if the central device detects that the specific category of user (for example, the child) leaves the room or area where the device is located, the device automatically switches back to the original video or audio and automatically resumes playback.
  • the method for automatically playing a device based on human perception may include:
  • the hub device obtains that the first room or the first area where the first device is located has the first user but no second user, and obtains that the first device has entered a private mode and is playing.
  • the hub device detects that a user other than the first user enters the first room or the first area.
  • the central device periodically or in real time obtains the user coordinates and user identities in the first room or the first area.
  • the hub device may detect that a user other than the first user enters the first room or the first area.
  • the hub device sends a first notification message to the first device.
  • after receiving the first notification message, the first device automatically pauses the playback and automatically switches to a screen saver.
  • the first device sends a first feedback message to the central device.
  • the hub device receives the first feedback message, and learns that the first device has paused playback and switched to a screen saver.
  • the hub device detects that there is no user other than the first user in the first room or in the first area.
  • in an example, the hub device detects that there is no user other than the first user in the first room or the first area; in another example, the hub device detects that there is no user other than the first user in the first room or the first area and that the first user is in the first room or the first area.
  • the hub device sends a second notification message to the first device.
  • after receiving the second notification message, the first device automatically switches to the original video or original audio and automatically resumes playing.
  • when the device is in private mode, if the central device detects that a user other than the first user enters the room or area where the device is located, the device automatically pauses playback and switches to the screen saver; if the central device detects that every user other than the first user has left the room or area where the device is located, the device automatically switches back to the original video or audio and automatically resumes playback.
  • the user's position relative to the device may change. For example, the user leaves the room or area where the smart TV is located, or approaches or moves away from the smart TV while watching a video played by the smart TV. If the user leaves and the smart TV continues to play, resources are wasted, and the user misses part of the played content.
  • the device automatically pauses playback, automatically stops playback, or automatically shuts down according to the relative position between the user and the device. No manual operation is required in the whole process.
  • the hub device determines that the user leaves the viewing area of the smart TV, and the smart TV pauses playing.
  • as shown in FIG. 39B, when the smart TV is playing and the hub device determines that the distance between the user and the smart TV is less than the preset playing distance, the smart TV pauses and displays a prompt message "You are too close to the TV, please pay attention to healthy eye use and protect your eyesight".
  • the method for automatically playing a device based on human perception may include:
  • the hub device obtains that the first device is playing video, and obtains the coordinates of the first device, the coordinates of the first viewing area corresponding to the first device, the first room or the first area where the first device is located, and the coordinates of the user in the first room or the first area.
  • the first device plays the video.
  • the first user watches the video in a first viewing area in a first room or a first area where the first device is located.
  • the first device when the first device starts to play, it sends the first feedback message to the hub device.
  • the central device receives the first feedback message and acquires that the first device is playing the video.
  • the hub device obtains the coordinates of the first device, the first room or the first area where the first device is located, and the coordinates of the user in the first room or the first area.
  • the hub device also acquires the coordinates of the first viewing area corresponding to the first device.
  • the preset area in the direction in which the screen of the device is facing is the viewing area of the device.
  • the hub device acquires the coordinate range of the viewing area of the device through the UWB module of the first electronic device.
  • the device establishes a third coordinate system.
  • the viewing area corresponding to the device is the spatial area surrounded by the vertical line passing through point A0 , the vertical line passing through point A3 , the vertical line passing through point T0 , and the vertical line passing through point T1 .
  • point A 0 is the lower left vertex of the device
  • point A 3 is the lower right vertex of the device
  • point T 0 and point T 1 are on the side in the direction the device screen faces, on the same horizontal plane as the lower lateral side of the device and parallel to that side
  • the distance between the line connecting point T 0 and point T 1 and the screen of the device is h
  • the angle between the line connecting point T 0 and point A 0 and the Z t axis is the opening angle ⁇ of the viewing area
  • the angle between the line connecting point T 1 and point A 3 and the Z t axis is also the opening angle ⁇ of the viewing area.
  • the hub device obtains the coordinates of the four vertices of the device, which include the coordinates of point A 0 and point A 3 .
  • the coordinate of the origin (point A 0 ) of the third coordinate system in the first coordinate system is A 0
  • the coordinate of point A 3 in the first coordinate system is A 3 .
  • the attitude transfer matrix from the third coordinate system to the first coordinate system can be expressed through the attitude angles between the third coordinate system and the first coordinate system, as follows:
  • formula (21) can be rewritten in the form of formula (22), as follows:
  • A 3 (1) represents the component, on the X e axis, of the coordinate A 3 of point A 3 in the first coordinate system.
  • formula (23) can be rewritten in the form of formula (24), as follows:
  • the hub device obtains the coordinates of the point A 0 , the point A 3 , the point T 0 and the point T 1 in the first coordinate system, that is, obtains the coordinate range of the viewing area of the device. Further, the hub device may transform the coordinates of the point A 0 , the point A 3 , the point T 0 and the point T 1 in the first coordinate system into coordinates in the fifth coordinate system.
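The viewing-area construction above can be sketched in a 2-D plan view. Assumptions in this sketch: the device frame has its x-axis running along the screen from A 0 to A 3 and z pointing out of the screen (z = 0 on the screen plane); the function names, the example vertex positions, h and the opening angle are all illustrative, and the coordinate-system transformations of formulas (21)-(24) are abstracted away.

```python
import math

def viewing_area_corners(a0, a3, h, gamma):
    """Return (T0, T1) in the device plan-view frame, given the lower
    screen vertices a0, a3 = (x, z) with z = 0 on the screen plane.
    T0/T1 lie at distance h in front of the screen, flared outward by
    the opening angle gamma."""
    offset = h * math.tan(gamma)   # lateral flare at distance h
    t0 = (a0[0] - offset, h)       # outward from the left vertex A0
    t1 = (a3[0] + offset, h)       # outward from the right vertex A3
    return t0, t1

def in_viewing_area(p, a0, a3, h, gamma, near=0.0):
    """Membership test for the flared region between the screen plane
    (or a near limit, cf. the first preset distance u) and distance h."""
    x, z = p
    if not (near <= z <= h):
        return False
    spread = z * math.tan(gamma)   # flare width grows linearly with z
    return a0[0] - spread <= x <= a3[0] + spread

# Example: a 4-unit-wide screen, viewing depth h = 3, opening angle 30°.
t0, t1 = viewing_area_corners((0.0, 0.0), (4.0, 0.0), 3.0, math.radians(30))
```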
  • the hub device detects that there is no user in the first viewing area within a first preset time period.
  • a first preset time period eg, 5 minutes
  • the hub device sends a first notification message to the first device.
  • the first notification message is used to instruct the first device to stop playing.
  • the first device stops playing after receiving the first notification message.
  • the first device may stop playing or pause playing.
  • the first device sends a second feedback message to the hub device, which is used to indicate that the first device has stopped playing or paused playing.
  • the central device can acquire the running information of the first device.
  • the hub device detects that there is a user in the first viewing area within a second preset time period.
  • the first user returns to the first viewing area.
  • the hub device sends a second notification message to the first device.
  • the second notification message is used to instruct the first device to resume playing.
  • the first device resumes playing after receiving the second notification message.
  • the first device sends a third feedback message to the hub device, which is used to indicate that the first device has resumed playing.
  • the method for automatically playing a device based on human perception may include:
  • the hub device obtains that the first device is playing video, and obtains the coordinates of the first device, the coordinates of the first viewing area corresponding to the first device, the first room or the first area where the first device is located, and the first device. The coordinates of the user in a room or first area.
  • the central device detects that within a third preset time period, the distance between at least one user and the first device is smaller than the first preset distance.
  • the first preset distance is u
  • u is smaller than h.
  • consider the straight line at a distance u from the device screen that is on the same horizontal plane as the lower lateral side of the device and parallel to that side: point U 0 is the intersection of this straight line with the line connecting point T 0 and point A 0 , and point U 1 is the intersection of this straight line with the line connecting point T 1 and point A 3 .
  • when the central device detects that the user enters the space area enclosed by the plumb line passing through point A 0 , the plumb line passing through point A 3 , the plumb line passing through point U 0 and the plumb line passing through point U 1 , it determines that the distance between the user and the device is less than the first preset distance.
  • the hub device sends a third notification message to the first device.
  • the third notification message is used to instruct the first device to stop playing and output reminder information.
  • after receiving the third notification message, the first device stops playing and outputs reminder information.
  • for example, the smart TV stops playing and displays a reminder message: "You are too close to the TV. Please pay attention to eye health and protect your eyesight."
  • the central device detects that, within the fourth preset time period, no user is within the first preset distance of the first device, and there is a user in the first viewing area.
  • for example, the central device detects that, within the fourth preset time period, no user is in the space area enclosed by the plumb lines passing through points A0, A3, U0, and U1; and at least one user is in the space area enclosed by the plumb lines passing through points U0, U1, T0, and T1.
  • the central device sends a fourth notification message to the first device.
  • the fourth notification message is used to instruct the first device to resume playing.
  • the first device resumes playing after receiving the fourth notification message.
  • the method for automatically controlling device playback based on human perception may include:
  • the hub device obtains that the first device is playing video, and obtains the coordinates of the first device, the coordinates of the first viewing area corresponding to the first device, the first room or first area where the first device is located, and the coordinates of the user in the first room or first area.
  • the central device detects that within a third preset time period, the distance between at least one user and the first device is smaller than the first preset distance.
  • the hub device sends a third notification message to the first device.
  • after receiving the third notification message, the first device stops playing and outputs reminder information.
  • the central device detects that, within the fourth preset time period, no user is within the first preset distance of the first device, and there is no user in the first viewing area.
  • that is, the user who was too close to the device leaves the viewing area of the device directly after seeing the reminder message.
  • the central device sends a fifth notification message to the first device.
  • the fifth notification message is used to instruct the first device to sleep.
  • the first device automatically sleeps after receiving the fifth notification message.
  • the methods shown in FIG. 40A, FIG. 40B, and FIG. 40C may be executed independently or in any combination, which is not limited in this embodiment of the present application.
  • if the user leaves the viewing area of the video playback device, the video playback device pauses playback; if the user enters the viewing area of the video playback device, the video playback device continues to play; if it is detected that the distance between the user and the video playback device is less than the preset distance, a prompt message is issued and the video is paused; and so on. According to the relative position of the user and the video playback device, the video playback device is automatically controlled to pause or continue playing, improving the convenience for the user.
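The pause / remind / resume / sleep behavior described above amounts to a small decision table that maps the user's current zone to the notification the hub sends. A minimal sketch, with illustrative names and message labels that are assumptions rather than terms from the embodiment:

```python
from enum import Enum, auto

class Zone(Enum):
    TOO_CLOSE = auto()   # inside the first preset distance
    VIEWING = auto()     # inside the viewing area, beyond the preset distance
    AWAY = auto()        # outside the viewing area

def decide_action(playing, paused_by_reminder, zone):
    """Map the user's zone and the device's playback state to the
    notification the hub would send (hypothetical decision table)."""
    if zone is Zone.TOO_CLOSE and playing:
        return "stop_and_remind"   # corresponds to the third notification message
    if zone is Zone.VIEWING and paused_by_reminder:
        return "resume"            # corresponds to the fourth notification message
    if zone is Zone.AWAY and paused_by_reminder:
        return "sleep"             # corresponds to the fifth notification message
    return "no_op"
```

Each branch mirrors one of the flows above: stop and remind when a playing device detects a too-close user, resume when the user retreats into the viewing area, and sleep when the user leaves the viewing area entirely.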
  • each device in the above-mentioned whole-house system includes a corresponding hardware structure and/or software module for performing each function.
  • the embodiments of the present application can be implemented in the form of hardware, or a combination of hardware and computer software, in combination with the example units and algorithm steps described in the embodiments disclosed herein. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the embodiments of the present application.
  • the embodiments of the present application may divide the above-mentioned device into functional modules according to the above-mentioned method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. It should be noted that the division of modules in the embodiments of the present application is schematic and is only a logical function division; there may be other division methods in actual implementation.
  • FIG. 43 shows a schematic structural diagram of a possible structure of the first electronic device involved in the foregoing embodiments.
  • the electronic device 4300 includes: a processor 4310 and a memory 4320.
  • the processor 4310 is configured to control and manage the actions of the electronic device 4300 .
  • for example, it can be used to: calculate the coordinates of the user and the room or area where the user is located; calculate the coordinates of the second electronic device and the room or area where it is located; calculate the distance between the user and the second electronic device; determine the instructions to be executed by the second electronic device; and/or perform other procedures for the techniques described herein.
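For instance, the user-to-device distance could be computed as a plain Euclidean distance over the unified whole-house coordinates. This is only a sketch; the patent does not prescribe a particular formula, and the helper name is an assumption:

```python
import math

def user_device_distance(user_xyz, device_xyz):
    """Euclidean distance between the user's coordinates and the second
    electronic device's coordinates in the whole-house coordinate system
    (illustrative helper; names are assumptions)."""
    return math.dist(user_xyz, device_xyz)
```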
  • the memory 4320 is used to store program codes and data of the electronic device 4300 .
  • the unit modules in the above electronic device 4300 include, but are not limited to, the above processor 4310 and memory 4320.
  • electronic device 4300 may further include a power supply unit and the like.
  • the power supply unit is used to supply power to the electronic device 4300 .
  • FIG. 44 shows a schematic structural diagram of a possible structure of the second electronic device involved in the above embodiment.
  • the electronic device 4400 includes: a processor 4410, a memory 4420, and a display screen 4430.
  • the processor 4410 is configured to control and manage the actions of the electronic device 4400 .
  • the memory 4420 is used to store program codes and data of the electronic device 4400 .
  • the display screen 4430 is used for displaying information, images, videos, etc. of the electronic device 4400.
  • the unit modules in the above electronic device 4400 include, but are not limited to, the above processor 4410, memory 4420, and display screen 4430.
  • electronic device 4400 may further include a power supply unit and the like.
  • the power supply unit is used to supply power to the electronic device 4400 .
  • the embodiment of the present application also provides a computer-readable storage medium, in which computer program code is stored, and when the processor of the electronic device executes the computer program code, the electronic device executes the method in the foregoing embodiments.
  • the embodiment of the present application also provides a computer program product, which causes the computer to execute the method in the foregoing embodiments when the computer program product is run on the computer.
  • the electronic device 4300, the electronic device 4400, the computer-readable storage medium, and the computer program product provided in the embodiments of the present application are all used to execute the corresponding methods provided above. Therefore, for the beneficial effects that they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which will not be repeated here.
  • the disclosed electronic device and method can be implemented in other ways.
  • the above-described electronic device embodiments are only illustrative.
  • the division of the modules or units is only a logical function division.
  • in actual implementation there may be other division methods; for example, multiple units or components may be combined or integrated into another electronic device, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of electronic devices or units may be in electrical, mechanical or other forms.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • if the integrated unit is implemented in the form of a software function unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solution of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include various media capable of storing program code, such as USB flash drives, mobile hard disks, ROMs, magnetic disks, or optical disks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present application relates to the field of automatic control, and provides an automatic control method based on human body detection, a first electronic device, and a system. The system comprises a hub device, a first electronic device, and a second electronic device. The first electronic device comprises a first ultra-wideband module and a millimeter-wave radar module. The second electronic device comprises at least one mobile device. Based on the first electronic device's measurement of the position of the mobile device, and on the measurement and conversion of the position of a human body, the hub device acquires position information of the second electronic device and position information of a user. According to the position information of the user and the position information of the second electronic device, at least one second electronic device automatically executes a preset operation. With the present application, a user does not need to carry a device, and automatic control of an IoT device is achieved, which is convenient and fast.
PCT/CN2022/118440 2021-10-25 2022-09-13 Procédé de commande automatique sur la base d'une détection d'un corps humain, premier dispositif électronique et système WO2023071565A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111243798.9 2021-10-25
CN202111243798.9A CN116033331A (zh) 2021-10-25 2021-10-25 一种基于人体感知的自动控制方法、第一电子设备及系统

Publications (1)

Publication Number Publication Date
WO2023071565A1 true WO2023071565A1 (fr) 2023-05-04

Family

ID=86076455

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/118440 WO2023071565A1 (fr) 2021-10-25 2022-09-13 Procédé de commande automatique sur la base d'une détection d'un corps humain, premier dispositif électronique et système

Country Status (2)

Country Link
CN (1) CN116033331A (fr)
WO (1) WO2023071565A1 (fr)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103747346A (zh) * 2014-01-23 2014-04-23 中国联合网络通信集团有限公司 一种多媒体视频播放的控制方法及多媒体视频播放器
CN103945301A (zh) * 2014-04-24 2014-07-23 Tcl集团股份有限公司 一种音响系统平衡调节方法及装置
US20160301373A1 (en) * 2015-04-08 2016-10-13 Google Inc. Dynamic Volume Adjustment
CN106303806A (zh) * 2016-09-22 2017-01-04 北京小米移动软件有限公司 音响系统的音量平衡控制方法及装置
CN107223337A (zh) * 2017-04-01 2017-09-29 深圳市智晟达科技有限公司 一种自动暂停视频播放的方法和数字电视
US20170328997A1 (en) * 2016-05-13 2017-11-16 Google Inc. Systems, Methods, and Devices for Utilizing Radar with Smart Devices
CN107483989A (zh) * 2017-08-23 2017-12-15 深圳市优品壹电子有限公司 一种身份验证方法及终端
US20190392834A1 (en) * 2019-06-25 2019-12-26 Lg Electronics Inc. Method and apparatus for selecting voice-enabled device
CN111726689A (zh) * 2020-06-30 2020-09-29 北京奇艺世纪科技有限公司 一种视频播放控制方法及装置
CN112130918A (zh) * 2020-09-25 2020-12-25 深圳市欧瑞博科技股份有限公司 智能设备唤醒方法、装置、系统及智能设备
CN112702633A (zh) * 2020-12-21 2021-04-23 深圳市欧瑞博科技股份有限公司 多媒体智能播放方法、装置、播放设备以及存储介质
CN113207028A (zh) * 2021-03-30 2021-08-03 当趣网络科技(杭州)有限公司 历史记录管理方法

Also Published As

Publication number Publication date
CN116033331A (zh) 2023-04-28

Similar Documents

Publication Publication Date Title
JP6847968B2 (ja) スマートデバイスでレーダを利用するためのシステム、方法およびデバイス
Borriello et al. Walrus: wireless acoustic location with room-level resolution using ultrasound
US20160323863A1 (en) Service sharing device and method
US20130073681A1 (en) Creating interactive zones
CN108139460A (zh) 使用超声脉冲和无线电信号的云协调定位系统
US11736555B2 (en) IOT interaction system
KR20200110639A (ko) 위치 결정 시스템에서 사용하기 위한 전송 장치
CN111970639B (zh) 保持安全距离的方法、装置、终端设备和存储介质
CN109495840B (zh) 一种无线通信方法、装置、系统和存储介质
WO2022007944A1 (fr) Procédé de commande de dispositif et appareil associé
WO2022088935A1 (fr) Procédé de commande pour dispositif intelligent, ainsi qu'étiquette, dispositif, terminal, et support de stockage
EP4376470A1 (fr) Procédé et appareil de détection, terminal et dispositif de réseau
US20230217210A1 (en) UWB Automation Experiences Controller
WO2023071398A1 (fr) Procédé de commande automatique basé sur la perception du corps humain, dispositif électronique et système
WO2023071565A1 (fr) Procédé de commande automatique sur la base d'une détection d'un corps humain, premier dispositif électronique et système
US20220078578A1 (en) Techniques for changing frequency of ranging based on location of mobile device
WO2023071498A1 (fr) Procédé de commande automatique basé sur la détection de corps humain, et premier dispositif électronique ainsi que système
CN114466304B (zh) 智能家居设备的控制方法、移动终端及智能家居平台
WO2023071484A1 (fr) Procédé de commande automatique basé sur la détection de corps humain, et dispositif électronique et système
EP4383031A1 (fr) Procédé de commande automatique basé sur la détection de corps humain, et dispositif électronique et système
WO2023071547A1 (fr) Procédé de commande automatique sur la base d'une perception d'un corps humain, premier dispositif électronique et système
TW202018321A (zh) 一種混合型室內定位架構及其方法
CN115695063B (zh) 一种控制智能设备的方法及装置
WO2024051545A1 (fr) Procédé d'envoi d'informations de mesure, procédé de réception d'informations de mesure et dispositif de communication
US20230169839A1 (en) Object Contextual Control Based On UWB Radios

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22885452

Country of ref document: EP

Kind code of ref document: A1