WO2023065799A1 - Human-computer interaction control method and device and storage medium - Google Patents

Human-computer interaction control method and device and storage medium

Info

Publication number
WO2023065799A1
WO2023065799A1 PCT/CN2022/113401 CN2022113401W
Authority
WO
WIPO (PCT)
Prior art keywords
work
information
smart
human
playback device
Prior art date
Application number
PCT/CN2022/113401
Other languages
French (fr)
Chinese (zh)
Inventor
Li Bing (李冰)
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2023065799A1 publication Critical patent/WO2023065799A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • The present disclosure relates to the field of communication technologies, and in particular to a human-computer interaction control method, device, and storage medium.
  • The main purpose of the present disclosure is to provide a human-computer interaction control method, device, and storage medium that instruct different smart devices to work according to the interaction information and location information of a target object, so that multiple smart devices are combined to provide services for the user and complete effective collaborative interaction with the user.
  • In a first aspect, the present disclosure provides a human-computer interaction control method. The method includes: detecting interaction information of a target object in real time; determining, according to the interaction information and the position information of each first smart device, each second smart device used for work; and issuing a work order to each second smart device used for work, so as to instruct each of those devices to perform cooperative work based on the work order.
  • The present disclosure also provides a human-computer interaction control device, including a memory and a processor. The memory is used to store a computer program; the processor is used to execute the computer program and, when executing it, implement the steps of the human-computer interaction control method of the first aspect.
  • the present disclosure also provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the processor implements the steps of the human-computer interaction control method in the first aspect.
  • Fig. 1 is a schematic structural diagram of a human-computer interaction control system in the related art
  • FIG. 2 is a schematic structural diagram of the human-computer interaction control system provided by the present disclosure
  • FIG. 3 is a schematic diagram of the implementation flow of the human-computer interaction control method provided by the present disclosure
  • FIG. 4 is a schematic diagram of an application scenario of the human-computer interaction control method provided by the present disclosure.
  • Fig. 5 is a schematic structural diagram of a human-computer interaction control device provided by the present disclosure.
  • Human-computer interaction refers to the effective transmission of information between humans and computers through intelligent input and output devices. As can be seen from Figure 1, in existing human-computer interaction technology it is common for a person to interact with a single smart device of one category at a time (such as smart device 1, smart device 2, and so on). For example, smart device 1 is an audio playback device and smart device 2 is a video playback device. The user inputs a voice signal to the audio playback device or the video playback device; after the audio playback device receives the voice signal, it outputs information matching the received voice signal, and after the video playback device receives the voice signal, it activates its display screen.
  • the present disclosure provides a human-computer interaction control method, device, storage medium and system. It should be noted that, with reference to the accompanying drawings, the implementation principle and process of the human-computer interaction control method provided by some embodiments of the present disclosure will be exemplarily described below. In the case of no conflict, the following embodiments and features in the embodiments can be combined with each other.
  • FIG. 2 is a schematic structural diagram of the human-computer interaction control system provided by the present disclosure.
  • The human-computer interaction control system 20 provided in this embodiment includes a plurality of smart devices 201 of different categories and an electronic device 202.
  • The multiple smart devices 201 of different categories may be a smart device 1 of a first category, a smart device 2 of a second category, . . . , a smart device n of an nth category, and so on.
  • the smart device 201 corresponding to each category can perform interactive work according to the user's state, behavior, operation and other interaction information detected by the electronic device 202 .
  • After the electronic device 202 detects the interaction information of the target object (user), it can deduce, according to the interaction information and the location information of each smart device 201, which smart devices need to interact with the user, and then control those smart devices to work together.
  • In this way, within the smart space composed of smart devices, the user can interact with multiple smart devices without extra burden, so that multiple smart devices work together.
  • a detection device 2021 may be provided on the electronic device 202 .
  • the electronic device 202 may be a server or a terminal device, and the server may be a single server or a server cluster; the terminal device includes but is not limited to a mobile phone terminal, a tablet computer, a desktop computer, a robot, a wearable smart device, and the like.
  • the detection device 2021 may be a preset type of sensor or detector disposed on the electronic device 202, such as an infrared detection sensor, a radar detector, or a photon detector.
  • The detection device 2021 may also be another electronic device connected in communication with the electronic device 202, or a sensor installed at a preset distance from the electronic device 202.
  • the detection device 2021 can be a plurality of information sensing modules distributed in the entire environmental space where the smart device is installed, and the plurality of information sensing modules can be evenly distributed in the environmental space where the smart device is installed, and can fully cover the entire environmental space.
  • each information sensing module covered in the environment space where the smart device is installed is connected to the electronic device 202 in communication.
  • the detection device 2021 may be a plurality of different types of sensors, detectors or other information perception modules, etc., of course, may also be a plurality of sensors, detectors or information perception modules of the same type.
  • the detection device 2021 includes but is not limited to a camera, a temperature sensor, an audio receiver, and an action recognizer.
  • When the detection device 2021 is another electronic device connected to the electronic device 202, that electronic device may also be a server or a terminal device.
  • the detecting means 2021 is configured to detect the interaction information made by the target object in real time.
  • the target object is a user in a space preset with at least two smart devices, and the interaction information includes user's state, behavior, operation and other information.
  • The detection device 2021 can also be used to sense various kinds of information in the current environment and obtain the sensed information, and can send the sensed information to the electronic device 202; after analyzing the sensed information, the electronic device 202 sends control instructions to each smart device according to the analysis results.
  • When the detection device 2021 is a photon detector disposed on the electronic device 202, the photon detector can be used to detect an operation behavior of the target object (user), such as a gesture.
  • A photon detector includes an optical camera and a sensor: the optical camera captures gestures of the target object, and the sensor transmits the captured gestures to the electronic device 202.
  • the photon detector may further include a structured light device, which may be used to collect user location information, and may transmit the collected user location information to the electronic device 202 through the sensor.
  • the electronic device 202 is connected to each smart device 201 in communication, and is used to determine the smart device used for work from each smart device 201 according to the interaction information made by the target object and the location information of each smart device 201 .
  • each smart device 201 in the space where the target object is located is marked as a first smart device
  • each smart device determined to be used for work is marked as a second smart device.
  • the electronic device 202 issues a work order to each second smart device used for work, so as to instruct each second smart device to perform cooperative work based on the work order.
  • The first smart devices may be smart devices of different categories, or smart devices of the same category.
  • In an exemplary embodiment, the electronic device 202 determines, according to the gesture action of the target object, that the second smart devices used to interact with the user are audio playback devices. According to the location information of the audio playback devices, it determines that the audio playback devices specifically used to output the sound signal are a first audio playback device and a second audio playback device, and then sends work instructions to the first audio playback device and the second audio playback device respectively, instructing them to perform coordinated work.
  • The work instruction carries the signal strengths of the sound signals to be output by the first audio playback device and the second audio playback device respectively. After the two devices receive their respective work instructions, each outputs a sound signal with the corresponding signal strength within a specified time. It can be understood that the sound signal intensities output by the two devices can be controlled to differ across different time periods, or the two devices can be controlled to output sound signals alternately, with different sound signal intensities during the alternation.
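As an illustrative sketch of how such per-device work instructions might be assembled, the `WorkOrder` structure, the `build_alternating_orders` helper, and the five-second alternation period below are assumptions for illustration, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class WorkOrder:
    device_id: str
    start: float     # seconds into the playback window
    duration: float  # how long this device outputs sound
    strength: float  # sound signal strength for this period

def build_alternating_orders(device_ids, strengths, period=5.0):
    """Give each audio playback device its own time slot and signal
    strength, so the devices output sound alternately as described."""
    return [WorkOrder(dev, i * period, period, s)
            for i, (dev, s) in enumerate(zip(device_ids, strengths))]

orders = build_alternating_orders(["audio-1", "audio-2"], [0.8, 0.4])
```

Each device then plays only during its own slot, with its own strength, which matches the alternating-output variant described above.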
  • the electronic device 202 may determine that the second smart device used for work may include smart devices of different types according to the interaction information of the user. For example, the electronic device 202 determines that the second smart device used for work includes at least one audio playback device and at least one video playback device according to the user's voice information and/or the user's gesture action.
  • The detection devices 2021 to be used include an audio receiver and/or a camera. In an exemplary embodiment, after the electronic device 202 receives the voice information of the target object detected by the audio receiver and/or the gestures of the user captured by the camera, it analyzes the voice information or recognizes the gestures, and determines that the second smart devices that need to interact with the target object are audio playback devices and video playback devices. Then, according to the relative position information between the target object and each audio playback device and video playback device, it issues work instructions to each of them, controlling each audio playback device to output an audio signal and each video playback device to start its video function.
  • the work instruction carries a first time for instructing the audio playback device to output audio signals and a second time for the video playback device to play video information.
  • In this way, at the first time the intensity of the audio signal output by an audio device relatively far from the target object is gradually weakened, while the intensity of the audio signal output by an audio device relatively close to the target object is gradually increased. The real-time position of the target object is determined from the image information captured by the camera in real time, and the video playback device in front of the target object starts its video function at the second time.
  • the first smart device includes, but is not limited to, an air conditioner, an audio device, a smell output device, a video playback device, and the like.
  • The human-computer interaction control system provided by the present disclosure first detects the interaction information of the target object, determines the second smart devices of the corresponding category from at least two categories of first smart devices according to the detected interaction information, and determines the second smart devices used for work according to the position information of each of them. It then issues work instructions to each second smart device used for work, instructing each of them to work based on the work order.
  • In this way, different smart devices are instructed to work and are combined to provide services for the user, thereby completing effective interaction with the user.
  • FIG. 3 is a schematic diagram of an implementation flow of a human-computer interaction control method provided by the present disclosure. As shown in FIG. 3 , the human-computer interaction control method is applied to the electronic device 202 shown in FIG. 2 , including steps S301 to S303. Details are as follows.
  • the electronic device detects the interaction information of the target object in real time through the detection device.
  • the target object is a user who enters the environment where the human-computer interaction control system is located.
  • The target object can make preset interaction information, such as a gesture or voice information; the electronic device detects the interaction information made by the target object in real time and determines the smart devices that match it.
  • the interaction information may also be other user interaction information, such as touch interaction information, which is not specifically limited herein.
  • The electronic device can also control the corresponding smart devices to work according to environmental information. For example, the electronic device can control air conditioners to work according to detected ambient temperature information, and control different air conditioners to perform different temperature control tasks according to the movement of the target object.
  • S302. Determine each second smart device used for work according to the interaction information and the location information of each first smart device.
  • The electronic device can determine, according to the interaction information, which category of first smart device the target object needs to interact with, and then determine the devices used for work from the first smart devices of that category according to their position information.
  • For example, after the electronic device detects a gesture action, it determines, from among the plurality of first smart devices, the devices that the target object needs to interact with according to that gesture action.
  • the multiple first smart devices may be of the same type or of different types, for example including but not limited to, video playback devices, audio devices, air conditioners, refrigerators, smell output devices, and the like.
  • The interaction information includes at least one of gesture actions, voice information, and touch information. Determining each second smart device used for work according to the interaction information and the position information of each first smart device includes: analyzing the interaction information to obtain each first smart device matching the interaction information; and acquiring the location information of each matching first smart device and determining each second smart device used for work according to that location information.
  • In an exemplary embodiment, the interaction information is a gesture action, and the gesture action is used to indicate that a device capable of inputting and outputting the interaction information needs to be activated.
  • The gesture action can be analyzed by the electronic device to obtain the semantic information corresponding to it; an association mapping relationship between semantic information and device categories is pre-stored in the electronic device. After the semantic information corresponding to the gesture action of the target object is obtained, each first smart device matching that semantic information is determined from among the first smart devices according to the association mapping relationship.
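The association mapping described above can be sketched as a pair of lookup tables. The gesture names, semantics, and device categories below are illustrative assumptions, since the disclosure does not enumerate the pre-stored mapping:

```python
# Hypothetical association mapping between gesture semantics and
# device categories; the actual pre-stored mapping is not specified
# in the disclosure.
GESTURE_SEMANTICS = {
    "raise_hand": "play_audio",
    "point_at_screen": "play_video",
}
SEMANTIC_TO_CATEGORY = {
    "play_audio": "audio_playback_device",
    "play_video": "video_playback_device",
}

def match_devices(gesture, first_devices):
    """Return the first smart devices whose category matches the
    semantic information parsed from the gesture action."""
    semantic = GESTURE_SEMANTICS.get(gesture)
    category = SEMANTIC_TO_CATEGORY.get(semantic)
    return [d for d in first_devices if d["category"] == category]

devices = [{"id": "a1", "category": "audio_playback_device"},
           {"id": "v1", "category": "video_playback_device"}]
match_devices("raise_hand", devices)  # matches the audio playback device
```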
  • the electronic device further includes a voice recognition function. After detecting the voice information of the target object, the electronic device recognizes the voice information to determine each first smart device that matches the voice information.
  • After the electronic device obtains each first smart device that matches the interaction information of the target object, it obtains the location information of each of those first smart devices and, according to that location information, determines the second smart devices used for work from among them.
  • Determining each second smart device for work may include: determining, according to the position information, the relative position information between the target object and each first smart device; and selecting, according to the relative position information, each second smart device used for work from among the first smart devices.
  • the position information of each first smart device is pre-stored in the electronic device.
  • The electronic device calculates the relative position information between the target object and each first smart device from the pre-stored position information; according to the calculated relative position information, each second smart device used for work can be determined from among the first smart devices.
  • each second smart device used for work refers to each second smart device that performs cooperative interaction with the target object.
  • The relative position between the target object and each second smart device performing collaborative interaction with it is within a preset communication range. As the target object moves, the relative position information determined between the target object and the second smart devices used for work when the target object is at a first position is the same as that determined when the target object moves to a second position. That is to say, as the location information of the target object keeps changing, the second smart devices used for work must be continuously re-determined from among the first smart devices, so that the target object always receives the same information. For example, as the target object moves, the strength of the sound signal it receives, or the clarity of the displayed picture, can remain unchanged.
  • Alternatively, when the target object moves from the first position to the second position, the relative position information between the target object and the determined second smart devices used for work may differ, so that the output signal strength of the second smart devices corresponding to the target object at the first position differs from that corresponding to the target object at the second position.
  • Selecting each second smart device used for work from the first smart devices includes: if the relative position information between a first smart device and the target object satisfies a preset position information evaluation condition, determining that this first smart device is a second smart device used for work. The preset position information evaluation condition is that the relative position information between the target object at the first position and the second smart devices determined for work is the same as the relative position information between the target object at the second position and the second smart devices determined for work.
  • For different second smart devices, the work instruction carries different instruction information.
  • In an exemplary embodiment, the work instruction carries the strength values of the audio signals that each audio playback device is instructed to output cooperatively. Issuing work instructions to each second smart device used for work may include: determining, according to the relative position information between each second smart device and the target object, each second smart device used for work and its output signal strength; and sending to each such device a work instruction carrying its output signal strength.
  • the second smart device is an audio playback device
  • The output signal strength of each second smart device is expressed as Pn, where An represents the relative position information between the nth audio playback device and the target object (in an exemplary embodiment, the distance between the nth audio playback device and the target object), and Pn represents the signal strength (in an exemplary embodiment, the sound signal strength) output by the nth audio playback device.
  • the second smart device may include an audio playback device and a video playback device
  • The work instruction carries the first time for instructing the audio playback device to output audio signals, the second time for the video playback device to play video information, and the strength value of the audio signal that the audio playback device outputs.
  • the audio playback device and the video playback device can be controlled to work together.
  • the first time may include the second time.
  • The video playback device may be a video playback device with a display screen. It can be understood that when the position information of the target object changes, then according to the relative position information between the target object and the display screens of the video playback devices, different video playback devices can be switched for display, or two or more video playback devices can be controlled to display at the same time. This ensures that, as the target user moves, a display screen of a video playback device is always in front of the user, so that the user can clearly see the displayed content at any time.
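The switching and simultaneous-display behavior can be sketched as follows; the field-of-view radius and screen coordinates are assumptions for illustration:

```python
def screens_to_activate(target_pos, screens, fov=3.0):
    """Pick the display screen(s) in front of the target object.
    A screen is treated as 'in front' when its horizontal distance to
    the target is within an assumed field-of-view radius; two screens
    may both qualify during a handover, matching the simultaneous
    display described above."""
    return [s["id"] for s in screens
            if abs(s["x"] - target_pos) <= fov]

screens = [{"id": "tv-1", "x": 0.0}, {"id": "tv-2", "x": 5.0}]
screens_to_activate(1.0, screens)  # only tv-1 is in front
screens_to_activate(2.5, screens)  # both, during the handover
```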
  • The video playback device may also include a camera, and the work order may also carry instruction information for activating the camera. According to the work instructions, the video playback device can be controlled to start the display screen and the camera in different time periods, achieving interactive operation of the display screen and the camera.
  • the process of starting the video playback device is the same as that of starting the audio playback device, and details will not be repeated here.
  • The second smart device can also be another type of smart device with an information interaction function, such as a smart home appliance (an air conditioner, refrigerator, or washing machine), a lamp, a TV, a switch, and the like, which is not specifically limited here.
  • The human-computer interaction control method detects the interaction information of the target object in real time, determines each second smart device used for work according to the interaction information and the position information of each first smart device, and issues a work order to each second smart device used for work, so as to instruct each of them to perform work based on the work order.
  • different smart devices are instructed to work, and different smart devices are combined to provide services for users, and then complete effective collaborative interaction with users.
  • The human-computer interaction control system 20 includes an electronic device 202, two detection devices 203 (in this embodiment, the detection devices 203 are connected to the electronic device in communication), and two first smart devices 201.
  • the detection device 203 is a sound detection device, for example, a microphone device;
  • the first smart device 201 is a sound output device, for example, an audio playback device.
  • The microphone device is used to detect the sound signal output by the target object 400 and send the detected sound signal to the electronic device 202. After the electronic device 202 determines that the target object 400 needs to interact with the audio playback devices, it obtains the relative position information between the target object 400 and each audio playback device (for example, the relative distance), and according to that relative position information determines, from among the audio playback devices, which devices will output the sound signal and the sound signal intensity that each of them outputs.
  • The microphone device can detect the sound source position information of the sound signal emitted by the target object 400 and send it to the electronic device 202. The electronic device 202 can use the received sound source position information and the pre-stored position information of each audio playback device to determine the relative position information (distance) of the target object 400 with respect to each audio playback device, and finally determine, according to that relative position information, the audio playback devices outputting the sound signal and the sound signal strength output by each of them.
  • audio playback devices at different locations can be controlled to output sound signals of different intensities, so as to ensure that the target object can receive stable sound signals when moving in the corresponding space. Realized the combination of multiple audio playback devices to complete effective interaction with users.
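The Figure 4 scenario above, from reported sound-source position to per-device output strengths, can be sketched end to end. The device positions, communication range, and distance-proportional strength model are all assumptions:

```python
import math

# Pre-stored positions of each audio playback device (illustrative).
AUDIO_DEVICE_POSITIONS = {"audio-1": (0.0, 0.0), "audio-2": (6.0, 0.0)}

def plan_playback(source_pos, max_range=4.0):
    """From the sound-source position reported by the microphone device,
    compute each audio playback device's distance to the target object,
    keep only devices within an assumed communication range, and assign
    each an output strength proportional to its distance (an assumed
    attenuation-compensation model, not given in the disclosure)."""
    dists = {dev: math.dist(source_pos, pos)
             for dev, pos in AUDIO_DEVICE_POSITIONS.items()}
    return {dev: d for dev, d in dists.items() if d <= max_range}

plan_playback((1.0, 0.0))  # only audio-1 is in range here
```

Re-running `plan_playback` as the target moves reproduces the behavior described above: devices at different locations output different intensities, and the set of active devices follows the user.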
  • FIG. 5 is a schematic structural diagram of a human-computer interaction control device provided by the present disclosure.
  • the electronic device 202 includes a processor 501 and a memory 502, and the processor 501 and the memory 502 are connected through a bus 503, such as an I2C (Inter-integrated Circuit) bus.
  • I2C Inter-integrated Circuit
  • the processor 501 is used to provide computing and control capabilities to support the operation of the entire electronic device 202 .
  • The processor 501 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • The memory 502 may be a Flash chip, a read-only memory (ROM, Read-Only Memory), a magnetic disk, an optical disk, a USB flash drive, or a mobile hard disk.
  • FIG. 5 is only a block diagram of a partial structure related to the disclosed solution, and does not constitute a limitation on the electronic device 202 to which the disclosed solution is applied.
  • the specific electronic device 202 may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.
  • the processor is used to run the computer program stored in the memory, and realize the functions of the electronic device 202 provided by the present disclosure when executing the computer program.
  • the processor is used to run a computer program stored in the memory and, when executing the computer program, implement the following steps: detect the interaction information of a target object in real time; determine, according to the interaction information and the position information of each first smart device, each second smart device to be used for work; and send a work instruction to each second smart device used for work, so as to instruct those devices to work cooperatively based on the work instructions.
  • the interaction information includes at least one of gesture action, voice information, and touch information.
  • determining each second smart device used for work includes: parsing the interaction information to obtain each first smart device that matches the interaction information; and obtaining the location information of each first smart device and determining, according to that location information, each second smart device used for work.
  • determining each second smart device for work includes: according to the position information, respectively determining the relative position information between the target object and each first smart device; and according to the relative position information, selecting each second smart device for work from among the first smart devices.
  • each second smart device used for work is reselected, wherein the relative position information between each reselected second smart device and the target object is the same as the relative position information between the corresponding second smart device and the target object before the reselection.
  • the second smart device used for work includes at least one of a video playback device, a camera device, and an audio playback device.
  • the second smart device used for work includes an audio playback device, and the work instruction carries an intensity value for instructing each audio playback device to output an audio signal cooperatively.
  • the second smart device used for work includes an audio playback device and a video playback device, and the work instruction carries a first time for instructing the audio playback device to output audio signals, a second time for the video playback device to play video information, and the intensity value of the audio signal output by the audio playback device.
  • the human-computer interaction control method, device, and storage medium provided by the present disclosure first detect the interaction information of a target object in real time, then determine each second smart device to be used for work according to the interaction information and the position information of each first smart device, and finally issue a work instruction to each second smart device used for work, so as to instruct those devices to work cooperatively based on the work instructions. By instructing different smart devices to work together according to the interaction information of the target object and the location information of each smart device, multiple smart devices are combined to provide services for the user, thereby completing effective collaborative interaction with the user.
  • the functional modules/units in the system and the device can be implemented as software, firmware, hardware, or an appropriate combination thereof.
  • the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed by several physical components in cooperation.
  • Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit.
  • Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
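One fragment above describes reselecting the working devices so that each reselected device keeps the same position relative to the target object as the previously selected device did. The following sketch is one illustrative reading of that rule; the coordinate layout and the nearest-match criterion are assumptions, not taken from the disclosure.

```python
import math

def reselect(device_positions, old_target, new_target, current_selection):
    """For each currently selected device, find the device whose position
    relative to the new target position best matches the old relative
    position. Positions are (x, y) tuples; 'best matches' is assumed here
    to mean nearest in Euclidean distance."""
    reselected = []
    for dev in current_selection:
        # relative position the reselected device should reproduce
        rel = (dev[0] - old_target[0], dev[1] - old_target[1])
        goal = (new_target[0] + rel[0], new_target[1] + rel[1])
        reselected.append(min(device_positions, key=lambda p: math.dist(p, goal)))
    return reselected
```

For instance, with devices on a line at x = 0, 2, 4, a device one unit behind the target remains one unit behind after the target moves from x = 1 to x = 3.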


Abstract

The present invention provides a human-computer interaction control method and device and a storage medium. The method comprises: detecting interaction information of a target object in real time; then determining, according to the interaction information as well as position information of first intelligent devices, second intelligent devices used for work; and finally, respectively issuing work instructions to the second intelligent devices used for work so as to instruct the second intelligent devices used for work to cooperatively work on the basis of the work instructions.

Description

Human-computer interaction control method, device and storage medium
Cross-Reference to Related Applications
This disclosure claims priority to Chinese patent application CN202111227973.5, filed on October 21, 2021 and entitled "Human-computer interaction control method, device and storage medium", the entire content of which is incorporated into this disclosure by reference.
Technical Field
The present disclosure relates to the field of communication technologies, and in particular to a human-computer interaction control method, device, and storage medium.
Background
With the continuous development of information and Internet technologies, smart devices are becoming increasingly common in daily life, for example, audio playback devices that interact directly with the user, or electronic devices that automatically select which device to connect to. However, existing smart devices are usually independent of each other and, in complex environments, cannot be effectively combined to provide services to users and complete effective interaction with them.
Summary
The main purpose of the present disclosure is to provide a human-computer interaction control method, device, and storage medium that instruct different smart devices to work according to the interaction information and position information of a target object, so that multiple smart devices are combined to provide services for the user and thereby complete effective collaborative interaction with the user.
In a first aspect, the present disclosure provides a human-computer interaction control method. The method includes: detecting the interaction information of a target object in real time; determining, according to the interaction information and the position information of each first smart device, each second smart device to be used for work; and issuing a work instruction to each second smart device used for work, so as to instruct those devices to work cooperatively based on the work instructions.
In a second aspect, the present disclosure also provides a human-computer interaction control device, including a memory and a processor. The memory is used to store a computer program; the processor is used to execute the computer program and, when executing it, implement the steps of the human-computer interaction control method of the first aspect.
In a third aspect, the present disclosure also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the human-computer interaction control method of the first aspect.
Brief Description of the Drawings
In order to illustrate the technical solutions of the present disclosure more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the accompanying drawings in the following description are some embodiments of the present disclosure, and persons of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a human-computer interaction control system provided in some cases;
FIG. 2 is a schematic structural diagram of the human-computer interaction control system provided by the present disclosure;
FIG. 3 is a schematic flowchart of the implementation of the human-computer interaction control method provided by the present disclosure;
FIG. 4 is a schematic diagram of an application scenario of the human-computer interaction control method provided by the present disclosure;
FIG. 5 is a schematic structural diagram of the human-computer interaction control device provided by the present disclosure.
Detailed Description
The technical solutions in the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are some, but not all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
The flowcharts shown in the drawings are only illustrations; they need not include all of the contents and operations/steps, nor must those be performed in the order described. For example, some operations/steps may be decomposed, combined, or partly merged, so the actual order of execution may change according to the actual situation.
It should be understood that the terminology used in this disclosure is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. As used in this disclosure and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise.
Before describing the human-computer interaction control method, device, and storage medium provided by the present disclosure, the principle of human-computer interaction and the technical problems of existing intelligent interaction systems are illustrated with reference to FIG. 1. First, human-computer interaction refers to the effective transmission of information between a person and a machine through intelligent input and output devices. As shown in FIG. 1, in existing human-computer interaction technology, a person commonly interacts with a single smart device of one category (such as smart device 1, smart device 2, ..., or any one of smart devices n). For example, assume smart device 1 is an audio playback device and smart device 2 is a video playback device: the user inputs a voice signal to the audio playback device, which, after receiving the voice signal, outputs information matching it; or the user inputs a voice signal to the video playback device, which activates its display screen after receiving the voice signal.
However, if multiple smart devices of the same category are located at different positions in the same space, then when the user needs to interact with that category of smart device, the user not only cannot interact effectively with multiple such devices at the same time, but the interaction with any single device of that category is also affected by the other nearby devices of the same category, resulting in interaction failure. Therefore, in the existing human-computer interaction process, multiple smart devices cannot be effectively combined to serve users in complex environments, and human-computer interaction can fail.
To solve the above technical problems, the present disclosure provides a human-computer interaction control method, device, storage medium, and system. It should be noted that the implementation principle and process of the human-computer interaction control method provided by some embodiments of the present disclosure are exemplarily described below with reference to the accompanying drawings. In the absence of conflict, the following embodiments and the features in them may be combined with each other.
Please refer to FIG. 2, which is a schematic structural diagram of the human-computer interaction control system provided by the present disclosure. As shown in FIG. 2, the human-computer interaction control system 20 provided in this embodiment includes multiple smart devices 201 of different categories and an electronic device 202. The multiple smart devices 201 of different categories may be smart device 1 of a first category, smart device 2 of a second category, ..., smart device n of an n-th category, and so on. The smart devices 201 of each category can perform interactive work according to the user's state, behavior, operations, and other interaction information detected by the electronic device 202. In an exemplary embodiment, after detecting the interaction information of the target object (user), the electronic device 202 can infer, from the interaction information and the position information of each smart device 201, which smart devices need to interact with the user, and then control those smart devices to work cooperatively. In this way, the user can interact with multiple smart devices without burden in the smart space they form, and multiple smart devices can work together.
A detection device 2021 may be provided on the electronic device 202. The electronic device 202 may be a server or a terminal device; the server may be a single server or a server cluster, and the terminal device includes but is not limited to a mobile phone, a tablet computer, a desktop computer, a robot, a wearable smart device, and the like. In an exemplary embodiment, the detection device 2021 may be a preset type of sensor or detector disposed on the electronic device 202, such as an infrared sensor, a radar detector, or a photon detector. Of course, in some other embodiments of the present disclosure, the detection device 2021 may also be another electronic device in communication with the electronic device 202, or a sensor installed at a preset distance from the electronic device 202. In addition, the detection device 2021 may be multiple information sensing modules distributed throughout the environmental space in which the smart devices are installed; the modules may be evenly distributed so as to fully cover the entire space, and each of them is communicatively connected to the electronic device 202.
In an exemplary embodiment, the detection device 2021 may be multiple sensors, detectors, or other information sensing modules of different categories, or multiple such modules of the same category. For example, the detection device 2021 includes but is not limited to a camera, a temperature sensor, an audio receiver, and an action recognizer.
Understandably, if the detection device 2021 is another electronic device connected to the electronic device 202, that other electronic device may likewise be a server or a terminal device. In the embodiments of the present disclosure, the detection device 2021 is configured to detect in real time the interaction information made by the target object. The target object is a user in a space preset with at least two smart devices, and the interaction information includes the user's state, behavior, operations, and other information. Of course, the detection device 2021 can also sense various information in the current environment and send the sensed information to the electronic device 202; after the electronic device 202 analyzes it, control instructions are issued to each smart device according to the analysis result. The interaction information may specifically be a gesture made by the user or voice information uttered by the user. In an exemplary embodiment, assume the detection device 2021 is a photon detector disposed on the electronic device 202; the photon detector can be used to detect the operation behavior of the target object (user), such as gestures. In an exemplary embodiment, the photon detector includes an optical camera and a sensor.
The optical camera captures the target object's gestures, and the sensor transmits the captured gestures to the electronic device 202. In an exemplary embodiment, the photon detector may also include a structured-light device, which can collect the user's position information and transmit it to the electronic device 202 through the sensor.
The electronic device 202 is communicatively connected to each smart device 201 and is used to determine, from among the smart devices 201, the smart devices to be used for work according to the interaction information made by the target object and the position information of each smart device 201. In an exemplary embodiment, for ease of distinction, each smart device 201 in the space where the target object is located is labeled a first smart device, and each smart device determined to be used for work is labeled a second smart device. After determining the second smart devices to be used for work, the electronic device 202 issues work instructions to each of them, instructing them to work cooperatively based on those instructions.
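The selection-and-dispatch flow described above, matching devices to the interaction and narrowing by position before issuing work instructions, can be sketched roughly as follows. The category names, the range threshold, and the instruction format are all illustrative assumptions; the disclosure does not specify them.

```python
import math
from dataclasses import dataclass

@dataclass
class SmartDevice:
    name: str
    category: str
    position: tuple  # (x, y)

def select_working_devices(interaction_category, first_devices, target_pos, max_range=5.0):
    """Keep the first smart devices whose category matches the interaction
    and that lie within max_range of the target object (an assumed rule
    for using the position information)."""
    matched = [d for d in first_devices if d.category == interaction_category]
    return [d for d in matched if math.dist(d.position, target_pos) <= max_range]

def dispatch_work_instructions(second_devices):
    """Stand-in for issuing a work instruction to each selected device."""
    return [{"device": d.name, "instruction": "start"} for d in second_devices]
```

With a nearby video device, a nearby audio device, and a distant video device, a "video" interaction selects only the nearby video device and dispatches a single instruction.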
The first smart devices may be smart devices of different categories, or of the same category. In an exemplary embodiment of this embodiment, assume the electronic device 202 determines, according to the target object's gesture, that the second smart devices used to interact with the user are audio playback devices, and further determines, according to the position information of each audio playback device, that the audio playback devices specifically used to emit audio signals include a first audio playback device and a second audio playback device. The electronic device 202 then issues work instructions to the first and second audio playback devices respectively, instructing them to work in coordination. In an exemplary embodiment, the work instructions carry the signal intensities of the sound signals to be output by the first and second audio playback devices respectively. After receiving their respective work instructions, the first and second audio playback devices each output, within the specified time, a sound signal of the corresponding intensity.
Understandably, the first and second audio playback devices can be controlled to output sound signals of different intensities in different time periods, or to output sound signals alternately, with different intensities during the alternation.
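The alternating-output behavior just described, in which two audio devices take turns and each time slot has its own intensity, might be planned like this. The slot layout, device names, and field names are invented for illustration only.

```python
def alternating_schedule(slots, intensities):
    """Assign two audio playback devices alternating time slots, each slot
    with its own output intensity. slots: list of (start, end) pairs;
    intensities: one intensity value per slot."""
    plan = []
    for i, (slot, level) in enumerate(zip(slots, intensities)):
        device = "audio-1" if i % 2 == 0 else "audio-2"
        plan.append({"device": device, "slot": slot, "intensity": level})
    return plan
```

Each entry in the returned plan can then be carried in a work instruction to the corresponding device.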
Of course, according to the user's interaction information, the electronic device 202 may determine that the second smart devices used for work include smart devices of different categories. For example, according to the user's voice information and/or gestures, the electronic device 202 determines that the second smart devices used for work include at least one audio playback device and at least one video playback device.
In an exemplary embodiment, assume the target user needs to make a video call. In this application scenario, the detection device 2021 to be used includes an audio receiver and/or a camera. After receiving the target object's voice information detected by the audio receiver and/or the user's gestures captured by the camera, the electronic device 202 parses the voice information or recognizes the gestures, determines that the second smart devices that need to interact with the target object are audio playback devices and video playback devices, and then, according to the relative position information between the target object and each audio and video playback device, issues work instructions to those devices, controlling each audio playback device to output audio signals and each video playback device to start its video function. Understandably, the audio playback devices that need to output audio signals, the video playback device that needs to start the video function, and the intensity of each output audio signal must all be determined in real time from the relative position information between each device and the target object, so that as the target object moves within the preset space, the target object always receives an audio signal of constant intensity and the video playback device with the video function started is always in front of the target object. In an exemplary embodiment, the work instruction carries a first time for instructing the audio playback device to output audio signals and a second time for the video playback device to play video information.
For example, as the target object moves, the intensity of the audio signal output at the first time by an audio device relatively far from the target object is gradually reduced, while that of an audio device relatively close to the target object is gradually increased; and, based on the image information of the target object captured by the camera in real time, the video playback device located in front of the target object's real-time position starts its video function at the second time.
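Keeping the received signal constant while the target moves, as described above, amounts to raising each device's output with distance. Below is a minimal sketch under an assumed free-field inverse-square attenuation model; the disclosure does not specify an attenuation model, so this is only one way to realize the behavior.

```python
import math

def output_intensities(target_pos, device_positions, received_level=1.0):
    """Choose each audio device's output intensity so that, after assumed
    inverse-square attenuation over its distance to the target, the level
    arriving at the target stays at received_level."""
    levels = []
    for dev in device_positions:
        d = max(math.dist(target_pos, dev), 0.1)  # avoid a zero distance
        levels.append(received_level * d ** 2)
    return levels
```

As the target moves toward one device, that device's required output falls while the farther device's rises, which produces the fade behavior described in the example above.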
Understandably, the first smart devices include but are not limited to air conditioners, audio devices, smell output devices, video playback devices, and the like.
It can be seen from the above embodiments that the human-computer interaction control system provided by the present disclosure first detects the interaction information of the target object and, according to it, determines second smart devices of the corresponding category from among at least two categories of first smart devices; then determines the second smart devices to be used for work according to their position information; and finally issues work instructions to each of them, instructing them to work based on those instructions. In this way, different smart devices are instructed to work according to the target object's interaction information and position information, and are combined to provide services for the user, thereby completing effective interaction with the user.
Please refer to FIG. 3, which is a schematic flowchart of the implementation of a human-computer interaction control method provided by the present disclosure. As shown in FIG. 3, the method is applied to the electronic device 202 shown in FIG. 2 and includes steps S301 to S303, detailed as follows.
S301: Detect the interaction information of a target object in real time.
In this embodiment, the electronic device detects the interaction information of the target object in real time through the detection device. The target object is a user who enters the environment in which the human-computer interaction control system is located. When the target object needs to interact with a first smart device in the current environment, the target object can produce preset interaction information, such as a gesture or voice information; by detecting this interaction information in real time, the electronic device determines the smart device that matches it. Understandably, besides gesture or voice information, the interaction information may also be other user interaction information, such as touch interaction information, which is not limited here. Of course, the electronic device can also control the corresponding smart devices through environmental information; for example, it can control air-conditioning devices according to detected ambient temperature information, and control different air-conditioning devices to perform different temperature adjustments according to the movement of the target object.
S302: Determine, according to the interaction information and the position information of each first smart device, each second smart device to be used for work.
The electronic device can determine, according to the interaction information, which category of first smart device the target object needs to interact with, and can then determine the second smart devices to be used for work from the first smart devices of that category according to the position information of each first smart device of that category.
For example, assuming that the interaction information of the target object is a preset gesture action for starting a video call, after detecting the gesture action, the electronic device determines from the plurality of first smart devices, according to the gesture action, that the device the target object needs to interact with is a video playback device used for the video call. The plurality of first smart devices may be smart devices of the same category or of different categories, including but not limited to video playback devices, audio devices, air conditioners, refrigerators, and smell output devices.
In an exemplary embodiment, the interaction information includes at least one of a gesture action, voice information, and touch information, and determining each second smart device to be used for work according to the interaction information and the position information of each first smart device includes: parsing the interaction information to obtain each first smart device that matches the interaction information; and acquiring the position information of each first smart device, and determining each second smart device to be used for work according to the position information of each first smart device.
For example, the interaction information is a gesture action, and the gesture action is used to indicate that a device capable of inputting and outputting interaction information needs to be started. In an exemplary embodiment, the electronic device can parse the gesture action to obtain semantic information corresponding to the gesture action, and an association mapping relationship between semantic information and device categories is pre-stored in the electronic device. After parsing out the semantic information corresponding to the gesture action of the target object, the electronic device determines, from the first smart devices and according to the association mapping relationship, each first smart device that matches the semantic information corresponding to the gesture action.
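As an illustrative, non-limiting sketch of the association mapping relationship described above, the following assumes a small table from parsed gesture semantics to device categories; all names and the table contents are hypothetical and are not taken from the disclosure:

```python
# Hypothetical association mapping between parsed gesture semantics and
# device categories; the entries are illustrative assumptions only.
GESTURE_SEMANTICS_TO_CATEGORIES = {
    "start_video_call": {"video_playback", "audio_playback"},
    "play_music": {"audio_playback"},
}

def match_first_devices(semantic, first_devices):
    """Return the first smart devices whose category matches the semantic
    information parsed from the target object's gesture action."""
    wanted = GESTURE_SEMANTICS_TO_CATEGORIES.get(semantic, set())
    return [d for d in first_devices if d["category"] in wanted]

devices = [
    {"id": "tv-1", "category": "video_playback"},
    {"id": "spk-1", "category": "audio_playback"},
    {"id": "ac-1", "category": "air_conditioner"},
]
matched = match_first_devices("start_video_call", devices)  # tv-1 and spk-1
```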
In addition, the electronic device further includes a voice recognition function. After detecting the voice information of the target object, the electronic device recognizes the voice information to determine each first smart device that matches the voice information.
Correspondingly, after obtaining each first smart device that matches the interaction information of the target object, the electronic device acquires the position information of each first smart device and determines, from the first smart devices and according to the position information, each second smart device to be used for work.
Determining each second smart device to be used for work according to the position information of each first smart device may include: determining relative position information between the target object and each first smart device according to the position information; and selecting each second smart device to be used for work from the first smart devices according to the relative position information.
In an exemplary embodiment, the position information of each first smart device is pre-stored in the electronic device. In an embodiment of the present disclosure, the electronic device calculates the relative position information between the target object and each first smart device according to the position information of the target object and the pre-stored position information of each first smart device; according to the calculated relative position information, each second smart device to be used for work can be determined from the first smart devices. In an exemplary embodiment, the second smart devices used for work refer to the second smart devices that interact cooperatively with the target object. The relative position between the target object and each second smart device used for cooperative interaction with the target object is within a preset communication range. Moreover, as the target object moves, the relative position information between the target object and the second smart device determined for work when the target object moves to a first position is the same as the relative position information between the target object and the second smart device determined for work when the target object moves to a second position. That is to say, as the position information of the target object keeps changing, the second smart device to be used for work needs to be continuously determined anew from the first smart devices, so as to ensure that the target object can receive the same information at the first position and at the second position. For example, as the target object moves, the strength of the sound signal received by the target object, or the clarity of the picture displayed on a screen, can remain unchanged.
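The reselection behavior described above can be sketched as follows, under the assumption that the relative position information is a Euclidean distance and that the first smart device nearest to the target within the preset communication range is chosen as the working device. This is an illustrative simplification, not the disclosure's exact evaluation condition:

```python
import math

def select_working_device(target_pos, first_devices, comm_range):
    """Reselect the working second smart device as the target moves: choose
    the nearest first smart device whose distance to the target is within
    the preset communication range. Positions are (x, y) tuples; names and
    the nearest-device rule are illustrative assumptions."""
    def dist(dev):
        return math.hypot(dev["pos"][0] - target_pos[0],
                          dev["pos"][1] - target_pos[1])
    in_range = [d for d in first_devices if dist(d) <= comm_range]
    return min(in_range, key=dist) if in_range else None

speakers = [{"id": "spk-A", "pos": (0.0, 0.0)},
            {"id": "spk-B", "pos": (10.0, 0.0)}]
# As the target moves from near spk-A to near spk-B, the working device
# is reselected so the target-to-device distance stays small.
near_a = select_working_device((1.0, 0.0), speakers, comm_range=5.0)
near_b = select_working_device((9.0, 0.0), speakers, comm_range=5.0)
```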
In an exemplary embodiment, in practical applications, when the target object moves to the first position, the relative position information between the target object and the second smart device determined for work may differ from the relative position information between the target object and the second smart device determined for work when the target object moves to the second position. In this case, in order to ensure that the user can receive the same signal at the first position and the second position, the output signal strength of the second smart device working for the target object at the first position and the output signal strength of the second smart device working for the target object at the second position can be determined separately. By adjusting the output signal strengths of the second smart devices corresponding to the different positions, it is ensured that the user receives the same signal at the first position and at the second position.
In an exemplary embodiment, selecting each second smart device to be used for work from the first smart devices according to the relative position information includes: if the relative position information between a first smart device and the target object satisfies a preset position-information evaluation condition, determining that first smart device to be a second smart device used for work, wherein the preset position-information evaluation condition is that the relative position information between the target object at the first position and the second smart device determined for work is the same as the relative position information between the target object at the second position and the second smart device determined for work.
S303: Issue a work instruction to each second smart device used for work, respectively, so as to instruct the second smart devices used for work to work cooperatively based on the work instruction.
In an exemplary embodiment, the work instruction carries different instruction information depending on the second smart device. For example, assuming that the second smart devices include audio playback devices, the work instruction carries strength values used to instruct the audio playback devices to cooperatively output audio signals. Issuing a work instruction to each second smart device used for work may include: determining, according to the relative position information between each second smart device and the target object, each second smart device to be used for work and the output signal strength of each second smart device used for work; and issuing, to each second smart device used for work, a work instruction carrying its respective output signal strength.
In an exemplary embodiment, assuming that the second smart devices are audio playback devices, the output signal strength of each second smart device, determined according to the relative position information between each audio playback device and the target object, is denoted Pn. In an exemplary embodiment, Pn is given as a function of An by the following formula:

[Formula image: Figure PCTCN2022113401-appb-000001]

where An denotes the relative position information between the nth audio playback device and the target object (in an exemplary embodiment, the distance between the nth audio playback device and the target object), and Pn denotes the signal strength (in an exemplary embodiment, the sound signal strength) output by the nth audio playback device.
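The disclosure's exact relation between Pn and An is given in the formula figure, which is not reproduced in this text. As an illustrative assumption only, the following sketch compensates free-field inverse-square attenuation, so that the strength received by the target stays constant regardless of each device's distance:

```python
def output_strengths(distances, received_target=1.0):
    """For audio playback devices at distances An from the target, compute
    output strengths Pn so that the strength arriving at the listener stays
    at received_target. The inverse-square attenuation model used here is
    an assumption for illustration; the disclosure's actual Pn(An) relation
    appears only in its formula figure."""
    return [received_target * (an ** 2) for an in distances]

# Devices farther from the target are driven proportionally harder.
strengths = output_strengths([1.0, 2.0, 4.0])
```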
As another example, the second smart devices may include an audio playback device and a video playback device, and the work instruction carries a first time used to instruct the audio playback device to output an audio signal, a second time at which the video playback device plays video information, and the strength value of the audio signal output by the audio playback device. Based on the work instruction, the audio playback device and the video playback device can be controlled to work cooperatively. The first time may include the second time.
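The relation between the first time and the second time ("the first time may include the second time") can be sketched with time periods modeled as (start, end) tuples; this representation is an illustrative simplification, not the disclosure's instruction format:

```python
def includes(period_a, period_b):
    """True if time period A contains time period B; periods are
    (start, end) tuples (illustrative model of the first time for audio
    output containing the second time for video playback)."""
    return period_a[0] <= period_b[0] and period_b[1] <= period_a[1]

first_time = (0.0, 10.0)   # audio playback device outputs the audio signal
second_time = (2.0, 8.0)   # video playback device plays the video information
```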
The video playback device may be a video playback device with a display screen. It can be understood that, when the position information of the target object changes, different video playback devices can be controlled to switch their displays according to the relative position information between the target object and the display screens of the video playback devices, or two or more video playback devices can be controlled to display simultaneously, so as to ensure that, as the target user moves, a display screen of a video playback device is always in front of the user, enabling the user to clearly see the content displayed on the screen at any time.
In addition, the video playback device may further include a camera, and the work instruction may also carry instruction information for starting the camera. According to the work instruction, the video playback device can be controlled to start the display screen and the camera in different time periods, so as to achieve the purpose of controlling the display screen and the camera to work interactively.
In an exemplary embodiment, the process of starting the video playback device is the same as that of starting the audio playback device, and details are not repeated here. It can be understood that, in addition to the audio playback device or video playback device described in the above embodiments, the second smart device may also be another category of smart device with an information interaction function, such as a smart home appliance like an air conditioner, refrigerator, or washing machine, or a lamp, television, switch, and so on, which is not specifically limited here.
From the above analysis, the human-computer interaction control method provided by the present disclosure detects the interaction information of the target object in real time, determines each second smart device to be used for work according to the interaction information and the position information of each first smart device, and issues a work instruction to each second smart device used for work, respectively, so as to instruct the second smart devices used for work to work based on the work instruction. Different smart devices are thus instructed to work according to the interaction information and position information of the target object, and different smart devices are combined to provide services for the user, thereby completing effective cooperative interaction with the user.
Referring to FIG. 4, FIG. 4 is a schematic diagram of an application scenario of the human-computer interaction control method provided by the present disclosure. As shown in FIG. 4, in this embodiment, the human-computer interaction control system 20 includes an electronic device 202, two detection devices 203 (in this embodiment, the detection devices 203 are communicatively connected to the electronic device), and two first smart devices 201. In an exemplary embodiment, the detection device 203 is a sound detection device, for example, a microphone device, and the first smart device 201 is a sound output device, for example, an audio playback device. In an exemplary embodiment, the microphone device is used to detect a sound signal output by a target object 400 and send the detected sound signal to the electronic device 202. After determining, according to the received sound signal, that the target object 400 needs to interact with an audio playback device, the electronic device 202 acquires the relative position information between the target object 400 and each audio playback device (for example, the relative position information is a relative distance), and determines from the audio playback devices, according to the determined relative position information, the audio playback devices to be used for outputting the sound signal and the sound signal strength to be output by each of those audio playback devices. The microphone device can detect the sound-source position information of the sound signal emitted by the target object 400 and send the sound-source position information to the electronic device 202; the electronic device 202 determines the relative position information (distance) of the target object 400 with respect to each audio playback device according to the received sound-source position information and the pre-stored position information of each audio playback device, and finally determines, according to the determined relative position information, the audio playback devices that output the sound signal and the sound signal strength output by each audio playback device. For example, audio playback devices at different positions can be controlled to output sound signals of different strengths, so as to ensure that the target object can receive a stable sound signal while moving within the corresponding space. Multiple audio playback devices are thus combined to complete effective interaction with the user.
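The FIG. 4 flow, from the sound-source position reported by the microphone devices to per-device output strengths, can be sketched as follows, again assuming Euclidean distances and the inverse-square compensation model; both the model and all names are assumptions for illustration only:

```python
import math

def plan_audio_output(source_pos, speaker_positions, received_target=1.0):
    """Given the sound-source position detected by the microphone devices and
    the pre-stored positions of the audio playback devices, compute each
    device's distance to the target and an output strength compensating an
    assumed inverse-square attenuation (illustrative sketch only)."""
    plan = {}
    for name, pos in speaker_positions.items():
        d = math.hypot(pos[0] - source_pos[0], pos[1] - source_pos[1])
        plan[name] = received_target * d ** 2
    return plan

# The farther speaker is driven harder so the listener hears a stable level.
plan = plan_audio_output((0.0, 0.0), {"spk-1": (3.0, 4.0), "spk-2": (1.0, 0.0)})
```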
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of a human-computer interaction control device provided by the present disclosure. As shown in FIG. 5, in this embodiment, the electronic device 202 includes a processor 501 and a memory 502, and the processor 501 and the memory 502 are connected through a bus 503, such as an I2C (Inter-Integrated Circuit) bus.
In an exemplary embodiment, the processor 501 is used to provide computing and control capabilities and to support the operation of the entire electronic device 202. The processor 501 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In an exemplary embodiment, the memory 502 may be a Flash chip, a read-only memory (ROM) disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
Those skilled in the art can understand that the structure shown in FIG. 5 is only a block diagram of a partial structure related to the solution of the present disclosure and does not constitute a limitation on the electronic device 202 to which the solution of the present disclosure is applied. A specific electronic device 202 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
The processor is used to run a computer program stored in the memory and, when executing the computer program, to implement the functions of the electronic device 202 provided by the present disclosure.
In an embodiment, the processor is used to run a computer program stored in the memory and, when executing the computer program, to implement the following steps: detecting interaction information of a target object in real time; determining each second smart device to be used for work according to the interaction information and the position information of each first smart device; and issuing a work instruction to each second smart device used for work, respectively, so as to instruct the second smart devices used for work to work cooperatively based on the work instruction.
In an embodiment, the interaction information includes at least one of a gesture action, voice information, and touch information.
In an embodiment, determining each second smart device to be used for work according to the interaction information and the position information of each first smart device includes: parsing the interaction information to obtain each first smart device that matches the interaction information; and acquiring the position information of each first smart device, and determining each second smart device to be used for work according to the position information of each first smart device.
In an embodiment, determining each second smart device to be used for work according to the position information of each first smart device includes: determining relative position information between the target object and each first smart device according to the position information; and selecting each second smart device to be used for work from the first smart devices according to the relative position information.
In an embodiment, after selecting each second smart device to be used for work from the first smart devices according to the relative position information between each first smart device and the target object, the method further includes: reselecting each second smart device to be used for work when a change in the position of the target object is detected, wherein the relative position information between the reselected second smart device and the target object is the same as the relative position information between the second smart device before reselection and the target object.
In an embodiment, the second smart device used for work includes at least one of a video playback device, a camera device, and an audio playback device.
In an embodiment, the second smart devices used for work include audio playback devices, and the work instruction carries strength values used to instruct the audio playback devices to cooperatively output audio signals.
In an embodiment, the second smart devices used for work include an audio playback device and a video playback device, and the work instruction carries a first time used to instruct the audio playback device to output an audio signal, a second time at which the video playback device plays video information, and the strength value of the audio signal output by the audio playback device.
According to the human-computer interaction control method, device, and storage medium provided by the present disclosure, the interaction information of a target object is first detected in real time; each second smart device to be used for work is then determined according to the interaction information and the position information of each first smart device; and a work instruction is finally issued to each second smart device used for work, respectively, so as to instruct the second smart devices used for work to work cooperatively based on the work instruction. Different smart devices are thus instructed to work cooperatively according to the interaction information of the target object and the position information of each smart device, and different smart devices are combined to provide services for the user, thereby completing effective cooperative interaction with the user.
It should be noted that those skilled in the art can clearly understand that, for convenience and brevity of description, for the working process of the electronic device described above, reference may be made to the corresponding description of the functions of the electronic device in the foregoing method embodiments, and details are not repeated here.
Those of ordinary skill in the art can understand that all or some of the steps in the methods disclosed above and the functional modules/units in the systems and devices may be implemented as software, firmware, hardware, or an appropriate combination thereof. In a hardware embodiment, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery medium.
It should be understood that the term "and/or" used in the specification of the present disclosure and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations. It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not expressly listed, or further includes elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or system that includes the element.
The above serial numbers of the present disclosure are for description only and do not represent the merits of the embodiments. The above are only embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in the present disclosure, and these modifications or replacements shall all fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

  1. A human-computer interaction control method, the method comprising:
    detecting interaction information of a target object in real time;
    determining each second smart device to be used for work according to the interaction information and position information of each first smart device; and
    issuing a work instruction to each of the second smart devices used for work, respectively, so as to instruct each of the second smart devices used for work to work cooperatively based on the work instruction.
  2. The human-computer interaction control method according to claim 1, wherein the interaction information comprises at least one of a gesture action, voice information, and touch information.
  3. The human-computer interaction control method according to claim 2, wherein determining each second smart device to be used for work according to the interaction information and the position information of each first smart device comprises:
    parsing the interaction information to obtain each first smart device that matches the interaction information; and
    determining each of the second smart devices to be used for work according to the position information of each of the first smart devices that match the interaction information.
  4. The human-computer interaction control method according to any one of claims 1 to 3, wherein determining each of the second smart devices to be used for work according to the position information of each of the first smart devices comprises:
    determining relative position information between the target object and each of the first smart devices according to the position information; and
    selecting each of the second smart devices to be used for work from the first smart devices according to the relative position information.
  5. The human-computer interaction control method according to claim 4, further comprising, after the selecting, according to the relative position information, each second smart device used for work from the first smart devices:
    reselecting each second smart device used for work when it is detected that the position of the target object has changed;
    wherein the relative position information between each reselected second smart device and the target object is the same as the relative position information between the corresponding second smart device and the target object before the reselection.
  6. The human-computer interaction control method according to claim 5, wherein the second smart device comprises at least one of a video playback device, a camera device, an audio playback device, and a video playback device.
  7. The human-computer interaction control method according to claim 5, wherein the second smart device comprises audio playback devices, and the work instruction carries intensity values for instructing the audio playback devices to cooperatively output audio signals.
  8. The human-computer interaction control method according to claim 5, wherein the second smart device comprises an audio playback device and a video playback device, and the work instruction carries a first time for instructing the audio playback device to output an audio signal, a second time for instructing the video playback device to play video information, and an intensity value of the audio signal output by the audio playback device.
  9. A human-computer interaction control device, comprising:
    a memory and a processor;
    wherein the memory is configured to store a computer program; and
    the processor is configured to execute the computer program and, when executing the computer program, implement the steps of the human-computer interaction control method according to any one of claims 1 to 8.
  10. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the human-computer interaction control method according to any one of claims 1 to 8.
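The control flow recited in claims 1, 4, 5 and 7 can be sketched as follows. All names, the 2-D coordinate model, the selection radius, and the inverse-distance intensity policy are illustrative assumptions for this sketch; the patent does not fix any particular data structures or formulas.

```python
import math
from dataclasses import dataclass

@dataclass
class SmartDevice:
    device_id: str
    kind: str                        # e.g. "audio", "video", "camera"
    position: tuple                  # assumed 2-D (x, y) coordinates

def relative_distance(target_pos, device):
    # Relative position information reduced to a scalar distance.
    return math.hypot(device.position[0] - target_pos[0],
                      device.position[1] - target_pos[1])

def select_working_devices(target_pos, first_devices, max_range=5.0):
    # Claim 4: choose the second smart devices from among the first
    # smart devices according to their position relative to the target.
    return [d for d in first_devices
            if relative_distance(target_pos, d) <= max_range]

def build_work_instructions(target_pos, second_devices):
    # Claim 7: each instruction carries an intensity value; here the
    # intensity falls off with distance (an assumed policy).
    return {d.device_id: {"intensity": round(1.0 / (1.0 + relative_distance(target_pos, d)), 3)}
            for d in second_devices}

def on_target_moved(new_pos, first_devices, max_range=5.0):
    # Claim 5: when the target object's position changes, reselect the
    # working devices so the relative position relationship is preserved.
    return select_working_devices(new_pos, first_devices, max_range)

devices = [
    SmartDevice("spk-1", "audio", (1.0, 0.0)),
    SmartDevice("spk-2", "audio", (9.0, 0.0)),
    SmartDevice("tv-1", "video", (2.0, 2.0)),
]
working = select_working_devices((0.0, 0.0), devices)
print([d.device_id for d in working])              # spk-2 is out of range
print(build_work_instructions((0.0, 0.0), working))
```

If the target then moves near (8, 0), `on_target_moved` would drop `spk-1` and pick up `spk-2`, mirroring the reselection behaviour of claim 5.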
PCT/CN2022/113401 2021-10-21 2022-08-18 Human-computer interaction control method and device and storage medium WO2023065799A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111227973.5 2021-10-21
CN202111227973.5A CN116009753A (en) 2021-10-21 2021-10-21 Man-machine interaction control method, device and storage medium

Publications (1)

Publication Number Publication Date
WO2023065799A1 true WO2023065799A1 (en) 2023-04-27

Family

ID=86028477

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113401 WO2023065799A1 (en) 2021-10-21 2022-08-18 Human-computer interaction control method and device and storage medium

Country Status (2)

Country Link
CN (1) CN116009753A (en)
WO (1) WO2023065799A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103945253A (en) * 2014-04-04 2014-07-23 北京智谷睿拓技术服务有限公司 Volume control method and device and multimedia playing control method and device
CN106126182A (en) * 2016-06-30 2016-11-16 联想(北京)有限公司 Data output method and electronic equipment
US20190129675A1 (en) * 2016-03-30 2019-05-02 Nec Corporation Plant management system, plant management method, plant management apparatus, and plant management program

Also Published As

Publication number Publication date
CN116009753A (en) 2023-04-25

Similar Documents

Publication Publication Date Title
US10564833B2 (en) Method and apparatus for controlling devices
RU2633367C2 (en) Method and device for operating and controlling intelligent device
US9965039B2 (en) Device and method for displaying user interface of virtual input device based on motion recognition
KR101541561B1 (en) User interface device, user interface method, and recording medium
US11934848B2 (en) Control display method and electronic device
US20170220242A1 (en) Home security system with touch-sensitive control panel
EP3656094B1 (en) Controlling a device based on processing of image data that captures the device and/or an installation environment of the device
EP3419020B1 (en) Information processing device, information processing method and program
US10311830B2 (en) Operating method, related touch display device and related semiconductor device
CN111045344A (en) Control method of household equipment and electronic equipment
KR101735755B1 (en) Method and apparatus for prompting device connection
CN107741814B (en) Display control method and mobile terminal
US11848007B2 (en) Method for operating voice recognition service and electronic device supporting same
KR20220108161A (en) How the camera works and electronics
CN112106016A (en) Information processing apparatus, information processing method, and recording medium
CN111599273B (en) Display screen control method and device, terminal equipment and storage medium
TW202004432A (en) Electronic device and operation control method thereof
KR102258742B1 (en) Touch signal processing method, apparatus and medium
JP2023502414A (en) Target display method and electronic equipment
WO2020015529A1 (en) Terminal device control method and terminal device
CN108737731B (en) Focusing method and terminal equipment
JP2023511156A (en) Shooting method and electronic equipment
WO2023065799A1 (en) Human-computer interaction control method and device and storage medium
JP2019061334A (en) Equipment control device, equipment control method and equipment control system
CN109857305B (en) Input response method and mobile terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22882418

Country of ref document: EP

Kind code of ref document: A1