CN116009753A - Man-machine interaction control method, device and storage medium - Google Patents

Man-machine interaction control method, device and storage medium

Info

Publication number
CN116009753A
Authority
CN
China
Prior art keywords
information
intelligent
working
target object
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111227973.5A
Other languages
Chinese (zh)
Inventor
李冰 (Li Bing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp
Priority to CN202111227973.5A
Priority to PCT/CN2022/113401 (published as WO2023065799A1)
Publication of CN116009753A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

In the man-machine interaction control method, device and storage medium provided herein, interaction information of a target object is detected in real time; each second intelligent device to be used for working is then determined according to the interaction information and the position information of each first intelligent device; finally, working instructions are issued to each second intelligent device for working, instructing those devices to work cooperatively based on the working instructions. By instructing different intelligent devices to cooperate according to the interaction information of the target object and the position information of each intelligent device, different intelligent devices are combined to provide services for the user, so that effective cooperative interaction with the user is completed.

Description

Man-machine interaction control method, device and storage medium
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a method and apparatus for controlling human-computer interaction, and a storage medium.
Background
With the continuous development of information technology and internet technology, intelligent devices are becoming more and more common in people's lives, for example, audio playback devices that interact directly with the user and electronic devices that automatically select a connected device. However, existing intelligent devices usually work independently and cannot be effectively combined to provide services for users in complex environments, so effective interaction with users cannot be completed.
Disclosure of Invention
The main purpose of the embodiments of the present application is to provide a man-machine interaction control method, device and storage medium that instruct different intelligent devices to work according to the interaction information of a target object and the position information of the intelligent devices, so that a plurality of intelligent devices are combined to provide services for users and effective collaborative interaction with the users is completed.
In a first aspect, an embodiment of the present application provides a method for controlling human-computer interaction, where the method includes:
detecting interaction information of a target object in real time;
determining each second intelligent device for work according to the interaction information and the position information of each first intelligent device;
and respectively issuing working instructions to the second intelligent devices for working to instruct the second intelligent devices for working to perform cooperative working based on the working instructions.
In a second aspect, an embodiment of the present application further provides a human-computer interaction control apparatus, including:
a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and implement the steps of the man-machine interaction control method according to the first aspect above when the computer program is executed.
In a third aspect, embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the human-computer interaction control method according to the first aspect above.
According to the man-machine interaction control method, device and storage medium provided in the embodiments of the present application, interaction information of a target object is detected in real time; each second intelligent device to be used for working is then determined according to the interaction information and the position information of each first intelligent device; finally, working instructions are issued to each second intelligent device for working, instructing those devices to work cooperatively based on the working instructions. By instructing different intelligent devices to cooperate according to the interaction information of the target object and the position information of each intelligent device, different intelligent devices are combined to provide services for the user, so that effective cooperative interaction with the user is completed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a man-machine interaction control system provided in the prior art;
fig. 2 is a schematic structural diagram of a man-machine interaction control system provided in an embodiment of the present application;
fig. 3 is a schematic implementation flow chart of a man-machine interaction control method provided in an embodiment of the present application;
fig. 4 is a schematic diagram of an application scenario of a man-machine interaction control method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a man-machine interaction control device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative: not all of the elements and operations/steps are necessarily included, nor are they necessarily performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so the actual order of execution may change according to the actual situation.
It is to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Before explaining the man-machine interaction control method, device and storage medium provided in the embodiments of the present application, the principle of man-machine interaction and the technical problems of existing intelligent interaction systems are described by way of example with reference to fig. 1. Man-machine interaction refers to the transfer of information between human and machine in an effective manner through intelligent input and output devices. As can be seen from fig. 1, in the existing man-machine interaction technology, a person usually interacts with a single smart device of one category (e.g. smart device 1, smart device 2, ..., smart device n). For example, assuming that smart device 1 is an audio playing device and smart device 2 is a video playing device, the user inputs a voice signal to the audio playing device, which then outputs information matched with the received voice signal; or the user inputs a voice signal to the video playing device, which then starts its display screen. However, if multiple intelligent devices of the same category exist at different positions in the same space, a user who needs to interact with that category of device cannot interact effectively with the multiple intelligent devices at the same time, and interaction with a single intelligent device of that category is also affected by the other surrounding intelligent devices of the same category, causing interaction failures. Therefore, in the existing man-machine interaction process, multiple intelligent devices cannot be effectively combined to provide services for users in a complex environment, and man-machine interaction fails.
In order to solve the above technical problems, the embodiments of the present application provide a man-machine interaction control method, device, storage medium and system. The implementation principle and process of the man-machine interaction control method provided in some embodiments of the present application are described below by way of example with reference to the accompanying drawings. The following embodiments and the features of the embodiments may be combined with each other where no conflict arises.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a man-machine interaction control system according to an embodiment of the present application. As can be seen from fig. 2, the man-machine interaction control system 20 provided in this embodiment includes a plurality of smart devices 201 of different categories and an electronic device 202. The plurality of smart devices 201 of different categories may be smart devices 1 of a first category, smart devices 2 of a second category, and so on up to smart devices n of an n-th category. The smart devices 201 of the respective categories may work cooperatively based on the interaction information, such as the state, behavior and operations of the user, detected by the electronic device 202. Specifically, after detecting the interaction information of the target object (the user), the electronic device 202 may infer the intelligent devices that need to interact with the user according to the interaction information and the location information of each intelligent device 201, and then control those intelligent devices to work cooperatively. The intelligent space formed by these intelligent devices allows the user to interact with them seamlessly, so that the intelligent devices can work cooperatively.
The electronic device 202 may be provided with a detection apparatus 2021. The electronic device 202 may be a server or a terminal device; the server may be a single server or a server cluster, and the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a desktop computer, a robot, a wearable intelligent device, and the like. Specifically, the detection apparatus 2021 may be a preset type of sensor or detector disposed on the electronic device 202, such as an infrared detection sensor, a radar detector, or a photon detector. Of course, in other embodiments of the present application, the detection apparatus 2021 may be another electronic device communicatively connected to the electronic device 202, a sensor installed at a preset distance from the electronic device 202, or the like. Furthermore, the detection apparatus 2021 may be a plurality of information sensing modules distributed in the environment space where the smart devices are provided; these information sensing modules may be distributed uniformly so that the entire environment space is covered, and each information sensing module in that space is communicatively connected to the electronic device 202. The detection apparatus 2021 may consist of sensors, detectors or other information sensing modules of different types or of the same type; for example, it includes, but is not limited to, a camera, a temperature sensor, an audio receiver, a motion recognizer, and the like.
It will be appreciated that if the detection apparatus 2021 is another electronic device connected to the electronic device 202, that other electronic device may be a server or a terminal device. In the embodiments of the present application, the detection apparatus 2021 is configured to detect, in real time, the interaction information produced by the target object. The target object is a user located in a space in which at least two intelligent devices are preset, and the interaction information includes information such as the user's state, behavior and operations; specifically, it may be a gesture made by the user or voice information uttered by the user. The detection apparatus 2021 may also sense various information in the current environment and send the sensed information to the electronic device 202, which analyzes it and then issues control instructions to the intelligent devices according to the analysis result. Illustratively, assume that the detection apparatus 2021 is a photon detector provided on the electronic device 202, which may be used to detect an operational behavior of the target object (the user), such as a gesture. Specifically, the photon detector includes an optical camera and a sensor: the optical camera captures the gesture of the target object, and the sensor transmits the captured gesture to the electronic device 202. Optionally, the photon detector may also include a structured-light device used to collect the user's position information, which is transmitted to the electronic device 202 via the sensor.
The electronic device 202 is communicatively connected to each of the intelligent devices 201 and is used to determine, from the intelligent devices 201, the intelligent devices for working according to the interaction information produced by the target object and the position information of each intelligent device 201. For convenience of distinction, each intelligent device 201 in the space where the target object is located is denoted as a first intelligent device, and each intelligent device determined for working is denoted as a second intelligent device. After determining each second intelligent device for working, the electronic device 202 issues working instructions to those devices to instruct them to work cooperatively based on the working instructions.
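A minimal sketch of this selection step follows, assuming a 2-D coordinate model for device positions and a simple distance threshold; neither the data model nor a particular selection rule is prescribed by this embodiment, so the names and values below are illustrative only.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SmartDevice:
    device_id: str
    category: str                  # e.g. "audio", "video", "air_conditioner"
    position: Tuple[float, float]  # (x, y) coordinates in the shared space

def select_working_devices(wanted_category: str,
                           user_position: Tuple[float, float],
                           first_devices: List[SmartDevice],
                           max_distance: float = 5.0) -> List[SmartDevice]:
    # The "second" devices for work: same category as the interaction requires
    # and close enough to the user to take part in the cooperative work.
    selected = []
    for dev in first_devices:
        if dev.category != wanted_category:
            continue
        dx = dev.position[0] - user_position[0]
        dy = dev.position[1] - user_position[1]
        if (dx * dx + dy * dy) ** 0.5 <= max_distance:
            selected.append(dev)
    return selected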
The first intelligent devices may be intelligent devices of different categories or of the same category. In this embodiment, assume by way of example that the electronic device 202 determines, according to a gesture of the target object, that the second intelligent devices for interacting with the user are audio playing devices, and further determines, according to the position information of each audio playing device, that the audio playing devices to be used for outputting audio signals are a first audio playing device and a second audio playing device. A working instruction is then issued to each of the first and second audio playing devices to instruct them to work cooperatively. Specifically, the working instruction carries the signal intensity of the sound signal to be output by the first audio playing device and by the second audio playing device. After receiving their respective working instructions, the first and second audio playing devices output sound signals at their corresponding signal intensities within the designated time. It can be understood that the first and second audio playing devices can be controlled to output different sound signal intensities in different time periods, or to output sound signals alternately, with different intensities during the alternation.
Of course, the electronic device 202 may determine, based on the user's interaction information, that the second intelligent devices for working include intelligent devices of different categories. For example, the electronic device 202 determines, based on the user's voice information and/or gesture, that the second intelligent devices for working include at least one audio playing device and at least one video playing device. Specifically, assume that the target user needs to make a video call; in this application scenario the detection apparatus 2021 to be used includes an audio receiver and/or a camera. After receiving the voice information of the target object and/or the gesture captured by the camera, the electronic device 202 analyzes the voice information or recognizes the gesture, determines that the second intelligent devices that need to interact with the target object are audio playing devices and video playing devices, and then issues working instructions to each audio playing device and each video playing device according to the relative position information between the target object and each of these devices, controlling the audio playing devices to output audio signals and the video playing devices to start the video function. It can be understood that the audio playing devices used to output audio signals and the video playing devices used to start the video function, as well as the audio signal intensity output by each audio playing device, need to be determined in real time according to the relative position information between each device and the target object, so that as the target object moves in the preset space it always receives an audio signal of stable intensity and the video playing device with the video function started is always located in front of the target object. Specifically, the working instruction carries a first time for instructing the audio playing device to output an audio signal and a second time for instructing the video playing device to play video information. For example, as the target object moves, the intensity of the audio signal output during the first time by the audio device that is moving farther from the target object is gradually reduced, while the intensity output during the first time by the audio device that is moving closer to the target object is gradually increased; the real-time position relative to the target object is determined from the image information of the target object captured by the camera, and the video playing device located in front of the target object starts the video function at the second time.
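The handover just described can be pictured with a short sketch. The inverse-distance weighting and the bearing test below are illustrative assumptions introduced for the sketch, not formulas given by this embodiment.

import math

def plan_handover(user_pos, user_heading, audio_positions, video_positions):
    # audio_positions / video_positions map device ids to (x, y) coordinates.
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Inverse-distance weights: the speaker the user walks toward ramps up,
    # the one left behind ramps down, keeping the received level steady.
    weights = {dev: 1.0 / max(dist(user_pos, pos), 0.1)
               for dev, pos in audio_positions.items()}
    total = sum(weights.values())
    gains = {dev: w / total for dev, w in weights.items()}

    # The display to switch on is the one whose bearing is closest to the
    # direction the user is facing, i.e. the screen "in front of" the user.
    def bearing_error(pos):
        bearing = math.atan2(pos[1] - user_pos[1], pos[0] - user_pos[0])
        diff = bearing - user_heading
        return abs(math.atan2(math.sin(diff), math.cos(diff)))

    front_display = min(video_positions,
                        key=lambda dev: bearing_error(video_positions[dev]))
    return gains, front_display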
It is understood that the first smart device includes, but is not limited to, an air conditioner, an audio device, a scent output device, a video playing device, and the like.
According to the man-machine interaction control system provided by the embodiment of the application, firstly, the interaction information of the target object is detected, and the second intelligent devices of the corresponding types are determined from at least two types of first intelligent devices according to the detected interaction information of the target object; then determining second intelligent devices for work according to the position information of each second intelligent device; and respectively issuing working instructions to the second intelligent devices for working to instruct the second intelligent devices for working to work based on the working instructions. According to the interaction information and the position information of the target object, different intelligent devices are instructed to work, and the different intelligent devices are combined to provide services for the user so as to complete effective interaction with the user.
Referring to fig. 3, fig. 3 is a schematic implementation flow chart of a man-machine interaction control method according to an embodiment of the present application. As shown in fig. 3, the man-machine interaction control method is applied to the electronic device 202 shown in fig. 2, and includes S301 to S303. The details are as follows:
S301, interaction information of the target object is detected in real time.
In this embodiment, the electronic device detects the interaction information of the target object in real time through the detection apparatus. When the target object needs to interact with a first intelligent device in the current environment, it can produce preset interaction information such as a gesture or voice information, and the electronic device determines the intelligent devices matched with that interaction information by detecting, in real time, the interaction information produced by the target object. It will be appreciated that the interaction information may include gesture information, voice information, or other user interaction information such as touch interaction information, which is not limited herein. Of course, the electronic device may also control the corresponding intelligent devices through environmental information; for example, it may control air-conditioning devices according to the detected ambient temperature, and control different air-conditioning devices to perform different temperature regulation as the target object moves.
S302, determining each second intelligent device for working according to the interaction information and the position information of each first intelligent device.
The electronic device can determine what type of first intelligent devices the target object needs to interact with according to the interaction information, and then can determine second intelligent devices for working from the corresponding type of first intelligent devices according to the determined position information of each type of first intelligent devices.
For example, if the interaction information of the target object is a preset gesture for starting a video call, the electronic device, after detecting the gesture, determines from the plurality of first intelligent devices that the devices the target object needs to interact with are video playing devices for the video call. The plurality of first intelligent devices may be of the same category or of different categories, including but not limited to video playing devices, audio devices, air conditioners, refrigerators, scent output devices, and the like.
Optionally, the interaction information includes at least one of gesture action, voice information and touch information, and determining, according to the interaction information and the location information of each first intelligent device, each second intelligent device for working includes: analyzing the interaction information to obtain each first intelligent device matched with the interaction information; and acquiring the position information of each first intelligent device, and determining each second intelligent device for working according to the position information of each first intelligent device.
For example, the interaction information is a gesture used to indicate that devices capable of inputting and outputting interaction information need to be started. Specifically, the electronic device analyzes the gesture to obtain the semantic information corresponding to it. An association mapping between semantic information and device categories is pre-stored in the electronic device, and after parsing the semantic information corresponding to the gesture of the target object, the electronic device determines, according to this association mapping, each first intelligent device matched with that semantic information.
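A sketch of such an association mapping is given below; the semantic labels and category names are hypothetical and would be defined when the mapping is pre-stored on the electronic device.

# Hypothetical association mapping between parsed semantics and device categories.
SEMANTIC_TO_CATEGORIES = {
    "start_video_call": {"audio", "video"},
    "play_music": {"audio"},
    "adjust_temperature": {"air_conditioner"},
}

def match_first_devices(semantic_label, first_devices):
    # first_devices: list of dicts such as {"id": "speaker_1", "category": "audio"}.
    wanted = SEMANTIC_TO_CATEGORIES.get(semantic_label, set())
    return [dev for dev in first_devices if dev["category"] in wanted]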
In addition, the electronic device also has a voice recognition function: after detecting the voice information of the target object, it recognizes the voice information to determine each first intelligent device matched with the voice information.
Correspondingly, after obtaining each first intelligent device matched with the interaction information of the target object, the electronic device obtains the position information of each first intelligent device, and determines each second intelligent device used for working from each first intelligent device according to the position information.
Wherein, according to the position information of each first intelligent device, determining each second intelligent device for working may include: determining relative position information between the target object and each first intelligent device according to the position information; and selecting each second intelligent device for working from each first intelligent device according to the relative position information.
Specifically, the electronic device stores the position information of each first intelligent device in advance. In the embodiments of the present application, the electronic device calculates the relative position information between the target object and each first intelligent device according to the position information of the target object and the pre-stored position information of each first intelligent device, and determines, according to the calculated relative position information, each second intelligent device for working from the first intelligent devices. Each second intelligent device for working refers to a device that performs cooperative interaction with the target object, and the relative position between these devices and the target object lies within a preset communication range. As the target object moves, the relative position information determined between the second intelligent devices for working and the target object when the target object is at a first position should be the same as that determined when the target object is at a second position. That is, as the position of the target object changes, the second intelligent devices for working need to be continuously re-determined from the first intelligent devices, so that the target object receives the same information at the first position and at the second position; for example, as the target object moves, the intensity of the sound signal it receives, or the sharpness of the screen display it sees, remains unchanged.
Optionally, in practical applications, the relative position information between the target object at the first position and the second intelligent devices determined for working may differ from that between the target object at the second position and the second intelligent devices determined for working. To ensure that the user receives the same signal at the first position and at the second position, the output signal intensity of the second intelligent devices for working may be determined separately for the first position and for the second position, and the output signal intensity corresponding to each position is adjusted accordingly.
Specifically, selecting each second intelligent device for working from the first intelligent devices according to the relative position information between each first intelligent device and the target object includes: if the relative position information between a first intelligent device and the target object meets a preset position-information evaluation condition, determining that this first intelligent device is a second intelligent device for working. The preset position-information evaluation condition is that the relative position information between the target object and the second intelligent devices for working determined when the target object is at the first position is the same as that determined when the target object is at the second position.
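A compact sketch of this evaluation condition follows; the target distance and tolerance are assumptions introduced only to make the reselection rule concrete.

import math

def reselect_for_position(user_pos, candidate_positions, target_distance, tolerance=0.5):
    # Re-evaluate the working set every time the target object moves: keep the
    # devices whose distance to the user matches the distance used at the
    # previous position, so the user receives the same signal at both positions.
    working = []
    for dev, pos in candidate_positions.items():
        d = math.hypot(pos[0] - user_pos[0], pos[1] - user_pos[1])
        if abs(d - target_distance) <= tolerance:
            working.append(dev)
    return working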
S303, respectively issuing working instructions to the second intelligent devices for working to instruct the second intelligent devices for working to perform cooperative working based on the working instructions.
For example, the working instruction carries different instruction information depending on the second intelligent device. Assuming that the second intelligent devices include audio playing devices, the working instruction carries an intensity value instructing each audio playing device to cooperatively output an audio signal, and issuing the working instructions to each second intelligent device for working may include: determining each second intelligent device for working and its output signal intensity according to the relative position information between each second intelligent device and the target object, and issuing to each of them a working instruction carrying its output signal intensity.
Specifically, assuming that the second intelligent devices are audio playing devices, in this embodiment the output signal intensity of each second intelligent device is determined according to the relative position information between each audio playing device and the target object and is denoted as Pn. In particular, Pn is computed from An by the formula of the original filing (reproduced there only as an image, BDA0003314975830000071), where An represents the relative position information between the n-th audio playing device and the target object (specifically, the distance between them), and Pn represents the signal intensity (specifically, the sound signal intensity) output by the n-th audio playing device.
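Since the exact formula appears only as an image in the filing, the sketch below uses a free-field inverse-square compensation as one plausible reading of Pn as a function of An; the exponent is an assumption, not the patent's formula.

def output_intensity(distance_m: float, target_level: float, exponent: float = 2.0) -> float:
    # Drive each speaker harder the farther it is from the listener, so the
    # level arriving at the target object stays roughly constant while moving.
    return target_level * (distance_m ** exponent)

# Example: a speaker 4 m away is driven at 16x the reference level of one at 1 m.
print(output_intensity(1.0, 0.5), output_intensity(4.0, 0.5))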
As another example, the second intelligent devices may include an audio playing device and a video playing device. In that case the working instruction carries a first time for instructing the audio playing device to output an audio signal, a second time for instructing the video playing device to play video information, and the intensity value at which the audio playing device outputs the audio signal. Based on the working instruction, the audio playing device and the video playing device can be controlled to work cooperatively. The first time may contain the second time.
The video playing device may be a video playing device with a display screen. It can be understood that, when the position of the target object changes, different video playing devices can be controlled to switch display, or two or more video playing devices can be controlled to display simultaneously, according to the relative position information between the target object and the display screen of each video playing device, so that as the target user moves there is always a display screen in front of the user and the user can clearly see the displayed content at any time.
In addition, the video playing device may further include a camera, and the working instruction may also carry instruction information for starting the camera. According to the working instruction, the video playing device can be controlled to start the display screen and the camera in different time periods, so that the display screen and the camera work interactively.
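An illustrative payload for such a working instruction is sketched below; the field names are assumptions and not a wire format defined by this embodiment.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkInstruction:
    device_id: str
    audio_start: float = 0.0             # first time: start of audio output
    audio_stop: float = 0.0
    audio_level: float = 0.0             # intensity value for the audio signal
    video_start: Optional[float] = None  # second time: start of video playback
    video_stop: Optional[float] = None
    enable_camera: bool = False          # optional camera start carried along

def video_span_inside_audio_span(instr: WorkInstruction) -> bool:
    # The first (audio) time may contain the second (video) time, as noted above.
    if instr.video_start is None or instr.video_stop is None:
        return True
    return instr.audio_start <= instr.video_start and instr.video_stop <= instr.audio_stop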
Specifically, the process of starting the video playing device is the same as that of starting the audio playing device, and will not be described herein. It will be appreciated that the second smart device may be other types of smart devices with information interaction functions, such as smart home appliances including air conditioners, refrigerators, washing machines, etc., or lamps, televisions, switches, etc., besides the audio playing device or the video playing device described in the above embodiments, which are not limited herein.
As can be seen from the above analysis, the man-machine interaction control method detects the interaction information of the target object in real time, determines each second intelligent device for working according to the interaction information and the position information of each first intelligent device, and issues working instructions to each second intelligent device for working to instruct those devices to work based on the working instructions. By instructing different intelligent devices to work according to the interaction information of the target object and the position information of each intelligent device, different intelligent devices are combined to provide services for the user, thereby completing effective collaborative interaction with the user.
Referring to fig. 4, fig. 4 is a schematic diagram of an application scenario of the man-machine interaction control method according to an embodiment of the present application. As can be seen from fig. 4, in this embodiment the human-computer interaction control system 20 includes an electronic device 202, two detection devices 203 (communicatively connected to the electronic device), and two first intelligent devices 201. Specifically, the detection devices 203 are sound detection devices, for example microphone devices, and the first intelligent devices 201 are sound output devices, for example audio playing devices. The microphone devices detect the sound signal output by the target object 400 and send the detected sound signal to the electronic device 202. The electronic device 202 determines, according to the received sound signal, that the target object 400 needs to interact with the audio playing devices, obtains the relative position information between the target object 400 and each audio playing device (for example, the relative distance), and determines from the audio playing devices, according to this relative position information, the devices for outputting sound signals and the sound signal intensity each of them should output. The microphone devices may detect the sound source position of the sound signal produced by the target object 400 and send it to the electronic device 202; the electronic device 202 then determines the relative position (distance) of the target object 400 with respect to each audio playing device according to the received sound source position and the pre-stored position information of each audio playing device, and finally determines the audio playing devices to output sound signals and their output intensities according to the determined relative positions. For example, audio playing devices at different positions can be controlled to output sound signals of different intensities, so that the target object receives a stable sound signal while moving in the space. In this way, multiple audio playing devices are combined to complete effective interaction with the user.
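The fig. 4 scenario can be tied together in a short end-to-end sketch; the speaker coordinates and the inverse-square drive level are illustrative assumptions rather than values from the embodiment.

import math

SPEAKER_POSITIONS = {                  # pre-stored on the electronic device
    "speaker_left": (0.0, 0.0),
    "speaker_right": (6.0, 0.0),
}

def handle_voice_event(source_pos, reference_level=1.0):
    # The microphones report the sound source position; each speaker is then
    # driven in proportion to its squared distance so the moving user hears a
    # roughly stable level.
    plan = {}
    for dev, pos in SPEAKER_POSITIONS.items():
        d = math.hypot(pos[0] - source_pos[0], pos[1] - source_pos[1])
        plan[dev] = reference_level * d * d
    return plan

print(handle_voice_event((1.5, 2.0)))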
Referring to fig. 5, fig. 5 is a schematic structural diagram of a man-machine interaction control device provided in the present application. As shown in fig. 5, in the present embodiment, the electronic device 202 includes a processor 501 and a memory 502, and the processor 501 and the memory 502 are connected by a bus 503, such as an I2C (Inter-integrated Circuit) bus.
In particular, the processor 501 is used to provide computing and control capabilities to support the operation of the overall electronic device 202. The processor 501 may be a central processing unit (Central Processing Unit, CPU), the processor 501 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. Wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Specifically, the memory 502 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash drive (U-disk), a removable hard disk, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of a portion of the structure associated with an embodiment of the present application and is not limiting of the electronic device 202 to which an embodiment of the present application may be applied, and that a particular electronic device 202 may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
The processor is configured to run a computer program stored in the memory, and implement the functions of the electronic device 202 provided in the embodiments of the present application when the computer program is executed.
In an embodiment, the processor is configured to run a computer program stored in a memory and to implement the following steps when executing the computer program:
detecting interaction information of a target object in real time;
determining each second intelligent device for work according to the interaction information and the position information of each first intelligent device;
and respectively issuing working instructions to the second intelligent devices for working to instruct the second intelligent devices for working to perform cooperative working based on the working instructions.
In an embodiment, the interaction information includes at least one of gesture motion, voice information, and touch information.
In an embodiment, determining each second smart device for working according to the interaction information and the location information of each first smart device includes:
analyzing the interaction information to obtain each first intelligent device matched with the interaction information;
and acquiring the position information of each first intelligent device, and determining each second intelligent device for working according to the position information of each first intelligent device.
In an embodiment, the determining, according to the location information of each first smart device, each second smart device for operation includes:
determining relative position information between the target object and each first intelligent device according to the position information;
and selecting each second intelligent device for working from each first intelligent device according to the relative position information.
In an embodiment, after the selecting the second smart devices for operation from the first smart devices according to the relative position information between the first smart devices and the target object, the method further includes:
reselecting each second intelligent device for work under the condition that the position of the target object is detected to be changed; and the relative position information of the second intelligent device and the target object, which are reselected, is the same as the relative position information of the second intelligent device and the target object before reselection.
In an embodiment, the second smart device for operation includes at least one of a video playing device, a camera device, and an audio playing device.
In an embodiment, the second intelligent device for working includes an audio playing device, and the working instruction carries an intensity value for instructing each audio playing device to cooperatively output an audio signal.
In an embodiment, the second intelligent device for working includes an audio playing device and a video playing device, and the working instruction carries a first time for instructing the audio playing device to output an audio signal, a second time for instructing the video playing device to play video information, and an intensity value of the audio playing device to output the audio signal.
It should be noted that, for convenience and brevity of description, for the specific working process of the electronic device described above, reference may be made to the corresponding description of the functions of the electronic device in the foregoing embodiments of the man-machine interaction control method, which is not repeated herein.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware embodiment, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
It should be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items. It should also be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or system that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in the process, method, article or system that comprises that element.
The foregoing embodiment numbers of the present application are merely for description and do not represent the merits of the embodiments. While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A human-computer interaction control method, characterized in that the method comprises:
detecting interaction information of a target object in real time;
determining each second intelligent device for work according to the interaction information and the position information of each first intelligent device;
and respectively issuing working instructions to the second intelligent devices for working to instruct the second intelligent devices for working to perform cooperative working based on the working instructions.
2. The human-computer interaction control method according to claim 1, wherein the interaction information comprises at least one of gesture motion, voice information and touch information.
3. The man-machine interaction control method according to claim 2, wherein determining each second smart device for operation based on the interaction information and the location information of each first smart device comprises:
analyzing the interaction information to obtain each first intelligent device matched with the interaction information;
and determining each second intelligent device for work according to the position information of each first intelligent device matched with the interaction information.
4. A man-machine interaction control method according to any one of claims 1 to 3, wherein said determining each second smart device for operation based on the location information of each first smart device comprises:
determining relative position information between the target object and each first intelligent device according to the position information;
and selecting each second intelligent device for working from each first intelligent device according to the relative position information.
5. The human-computer interaction control method according to claim 4, further comprising, after said selecting each second smart device for operation from each first smart device according to the relative position information:
reselecting each second intelligent device for work under the condition that the position of the target object is detected to be changed;
and the relative position information of the second intelligent device and the target object, which are reselected, is the same as the relative position information of the second intelligent device and the target object before reselection.
6. The human-machine interaction control method according to claim 5, wherein the second smart device includes at least one of a video playing device, a camera device, and an audio playing device.
7. The human-computer interaction control method according to claim 5, wherein the second intelligent device comprises an audio playing device, and the working instruction carries an intensity value for instructing each audio playing device to cooperatively output an audio signal.
8. The human-computer interaction control method according to claim 5, wherein the second intelligent device comprises an audio playing device and a video playing device, and the working instruction carries a first time for instructing the audio playing device to output an audio signal and a second time for instructing the video playing device to play video information, and an intensity value of the audio playing device to output the audio signal.
9. A human-computer interaction control apparatus, characterized by comprising:
a memory and a processor;
the memory is used for storing a computer program;
the processor being adapted to execute the computer program and to implement the steps of the human-machine interaction control method according to any one of claims 1 to 8 when the computer program is executed.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the steps of the man-machine interaction control method according to any one of claims 1 to 8.
CN202111227973.5A 2021-10-21 2021-10-21 Man-machine interaction control method, device and storage medium Pending CN116009753A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111227973.5A CN116009753A (en) 2021-10-21 2021-10-21 Man-machine interaction control method, device and storage medium
PCT/CN2022/113401 WO2023065799A1 (en) 2021-10-21 2022-08-18 Human-computer interaction control method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111227973.5A CN116009753A (en) 2021-10-21 2021-10-21 Man-machine interaction control method, device and storage medium

Publications (1)

Publication Number Publication Date
CN116009753A 2023-04-25

Family

ID=86028477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111227973.5A Pending CN116009753A (en) 2021-10-21 2021-10-21 Man-machine interaction control method, device and storage medium

Country Status (2)

Country Link
CN (1) CN116009753A (en)
WO (1) WO2023065799A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103945253B (en) * 2014-04-04 2018-02-02 北京智谷睿拓技术服务有限公司 Method for controlling volume and equipment, control method for playing multimedia and equipment
WO2017169276A1 (en) * 2016-03-30 2017-10-05 日本電気株式会社 Plant management system, plant management method, plant management device, and plant management program
CN106126182B (en) * 2016-06-30 2022-06-24 联想(北京)有限公司 Data output method and electronic equipment

Also Published As

Publication number Publication date
WO2023065799A1 (en) 2023-04-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination