CN114384848A - Interaction method, interaction device, electronic equipment and storage medium

Interaction method, interaction device, electronic equipment and storage medium

Info

Publication number
CN114384848A
CN114384848A
Authority
CN
China
Prior art keywords
target
hand
state
equipment
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210042513.3A
Other languages
Chinese (zh)
Inventor
徐持衡
赵阳阳
李通
周舒岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202210042513.3A
Publication of CN114384848A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423 Input/output
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/25 Pc structure of the system
    • G05B2219/25257 Microcontroller

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to an interaction method, an interaction apparatus, an electronic device, and a storage medium. The interaction method includes: acquiring a target image containing a target object, and determining an operation area in the target image; identifying, in the target image, position information of a hand of the target object in the operation area and a state of the hand; determining a selected target device in a scene where the target object is located based on the position information of the hand in the operation area and the state of the hand; and controlling the target device to perform feedback, the feedback indicating that the target device has been selected. By relying on computer vision, the embodiments of the present disclosure reduce the possibility that a control instruction disturbs others and intuitively show the user which target device has been selected. In addition, because the user does not need to operate an application program on an electronic device when selecting the target device, the selection process is more convenient and faster.

Description

Interaction method, interaction device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of human-computer interaction, and in particular, to an interaction method, an interaction apparatus, an electronic device, and a storage medium.
Background
With the development of the Internet of Things, more and more devices, such as lamps, air conditioners, and switches, can be controlled over a network. When multiple devices are located in one space, the directivity of a control instruction sent by a user is generally not intuitive. For example, when several smart air conditioners are installed in an office area, the user has to make multiple attempts in a mobile phone application and observe the working states of the air conditioners to confirm which one executed the instruction, which greatly increases the complexity of controlling the devices.
Disclosure of Invention
The present disclosure proposes a technical solution for interaction.
According to an aspect of the present disclosure, there is provided an interaction method, including: acquiring a target image containing a target object, and determining an operation area in the target image; identifying position information of a hand of the target object in the operation area and a state of the hand in the target image; determining a selected target device in a scene where the target object is located based on position information of a hand of the target object in the operation area and the state of the hand; controlling the target device to perform feedback, the feedback indicating that the target device has been selected.
In a possible implementation manner, the determining, based on the position information of the hand of the target object in the operation area and the state of the hand, a selected target device in a scene where the target object is located includes: determining position information of the hand in the operation area in response to the state of the hand in the operation area being in a preset state; and determining the selected target equipment in the scene where the target object is positioned based on the position information of the hand of the target object in the operation area.
In a possible implementation manner, the determining, based on the position information of the hand of the target object in the operation area, a selected target device in a scene where the target object is located includes: determining a device area corresponding to the position information in the operation area based on the position information; wherein the equipment area corresponds to at least one piece of equipment; and taking the equipment corresponding to the equipment area as the selected target equipment in the scene where the target object is located.
In a possible implementation, the determining, based on the location information, a device area corresponding to the location information in the operation area includes: acquiring the area coordinates of an equipment area corresponding to at least one piece of equipment; mapping the region coordinates to the operating region to determine at least one device region in the operating region.
In one possible implementation, the interaction method further includes: and controlling the target equipment to execute corresponding actions according to the state of the hand in the target image.
In a possible implementation manner, the controlling the target device to perform a corresponding action according to the state of the hand in the target image includes: in response to the state of the hand being a first state, determining an adjustment parameter for the target device based on a state parameter of the hand in the first state; and/or determining an adjustment time for the target device based on a duration for which the hand is in the first state.
In one possible embodiment, the adjustment parameters include: at least one of brightness, volume, wind speed, temperature, moving object position, and moving direction.
In a possible implementation, the target device is provided with a feedback component, and the controlling the target device to perform feedback includes: and controlling a feedback component corresponding to the target equipment to perform feedback.
In one possible implementation, the interaction method further includes: and displaying the state of the hand corresponding to the execution action of the target equipment through a screen.
In one possible implementation, the interaction method further includes: and displaying the working state corresponding to the target equipment through a screen.
According to an aspect of the present disclosure, there is provided an interaction apparatus, including: the target image acquisition module is used for acquiring a target image containing a target object and determining an operation area in the target image; an information identification module, configured to identify position information of a hand of the target object in the operation region in the target image and a state of the hand; the target device determination module is used for determining a selected target device in a scene where the target object is located based on the position information of the hand of the target object in the operation area and the state of the hand; and the target equipment control module is used for controlling the target equipment to perform feedback, and the feedback represents that the target equipment is selected.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described interaction method.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-described interaction method.
In the embodiments of the present disclosure, a target image containing a target object is acquired and an operation area in the target image is determined, which reduces, by means of computer vision, the possibility that a control instruction disturbs others; position information of a hand of the target object in the operation area and a state of the hand are then identified in the target image; the selected target device in the scene where the target object is located is determined based on the position information of the hand in the operation area and the state of the hand; and finally, the target device is controlled to perform feedback, so that the selected target device is intuitively shown to the user. In addition, because the user does not need to operate an application program on an electronic device when selecting the target device, the selection process is more convenient and faster.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of an interaction method according to an embodiment of the present disclosure.
Fig. 2 shows a flow chart of an interaction method according to an embodiment of the present disclosure.
FIG. 3 shows a reference schematic of a spatial scene prior to two-dimensional modeling according to an embodiment of the present disclosure.
FIG. 4 shows a reference schematic diagram of a two-dimensional modeled spatial scene according to an embodiment of the disclosure.
Fig. 5 shows a block diagram of an interaction device according to an embodiment of the present disclosure.
Fig. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In the related art, control of a smart device depends on an independent account and an application program, and when other people want to control the smart device, the control is usually achieved through an original switch or a specific device on which the application program is installed. This arrangement causes the following problems: 1. The directivity of a control instruction is not obvious; that is, it is hard for the user to know which real smart device an original switch or a virtual switch in the specific device actually controls, and the user can only find out by repeatedly trying the switch. 2. The user interaction path is long; that is, the user needs to first open a specific application program and then select the control instruction menu of the target device in that application program, which is not convenient. The related art also provides a technical scheme of voice control, but the directivity of the control instruction remains fuzzy, and voice control easily disturbs others when used in a space shared by multiple people.
In view of this, the present disclosure provides an interaction method, which acquires a target image containing a target object and determines an operation area in the target image, thereby reducing, by means of computer vision, the possibility that a control instruction disturbs others; identifies position information of a hand of the target object in the operation area and a state of the hand in the target image; determines, based on the position information of the hand in the operation area and the state of the hand, the selected target device in the scene where the target object is located; and finally controls the target device to perform feedback, so as to intuitively show the selected target device to the user. In addition, because the user does not need to operate an application program on an electronic device when selecting the target device, the selection process is more convenient and faster.
Illustratively, the above interaction method may be performed by an electronic device such as a terminal device or a server. The terminal device (also referred to as a computing device) may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. For example, the terminal device may be an intelligent control terminal in an Internet-of-Things scenario, which can send control instructions to smart devices on the same Internet of Things.
Referring to fig. 1, fig. 1 shows a flowchart of an interaction method according to an embodiment of the present disclosure, the interaction method including:
Step S100, acquiring a target image containing a target object, and determining an operation area in the target image. For example, the target image may be captured by a camera, and the target object may be a target person, for example, a person located in the foreground of the target image, or a specified person in the target image. In one example, the operation area may be an effective area in which corresponding operations can be performed based on the hand of the target object; in other words, when the hand of the target object is in the operation area, the subsequent operations can be performed.
Step S200, recognizing position information of the hand of the target object in the operation area and a state of the hand in the target image. For example, the target image may be detected by a gesture detection algorithm or a machine learning model in the related art to determine the position information and the state of the hand.
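A minimal end-to-end sketch of steps S100 and S200 is shown below, purely for illustration: the operation-area rectangle and the detect_hand() helper are hypothetical stand-ins, since the patent does not prescribe a specific detector or operation-area definition.

```python
# Sketch of steps S100-S200: capture a target image, restrict recognition to the
# operation area, and report the hand's position and state. detect_hand() is a
# stand-in stub for any gesture-detection algorithm or machine-learning model.
import cv2
from collections import namedtuple

Hand = namedtuple("Hand", ["center", "state"])   # state e.g. "index_click", "palm_open"

OPERATION_AREA = (100, 50, 400, 300)             # assumed (x, y, width, height) rectangle

def detect_hand(region):
    """Placeholder for the gesture-detection model mentioned in step S200."""
    return None                                  # a real detector would return a Hand

def hand_in_operation_area(frame):
    x, y, w, h = OPERATION_AREA
    region = frame[y:y + h, x:x + w]             # crop the operation area (step S100)
    return detect_hand(region)                   # position is in operation-area coordinates

cap = cv2.VideoCapture(0)                        # built-in or external camera
ok, frame = cap.read()
if ok:
    hand = hand_in_operation_area(frame)         # None, or the hand's center and state
```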
Step S300, determining the selected target device in the scene where the target object is located based on the position information of the hand of the target object in the operation area and the state of the hand.
The target device may be a smart device connected, through the Internet of Things, to the electronic device that executes the embodiments of the present application, for example: lamps, curtains, switches, air conditioners, speakers, televisions, and the like. The target image may be collected by a camera, which may be a built-in camera of the electronic device or an external camera connected to the electronic device. For example, to improve the stability of operation, at least one of the following restrictions may be added: 1. Only one gesture is included in the target image. 2. At any one time, only one device can be effectively interacted with.
Referring to fig. 2, fig. 2 shows a flowchart of an interaction method according to an embodiment of the present disclosure. In a possible implementation, step S300 may include:
Step S310, in response to the state of the hand in the operation area being a preset state, determining position information of the hand in the operation area. For example, the preset state may be a single-selection state (e.g., an index-finger click) for selecting one target device, a multiple-selection state (e.g., a palm slide) for successively selecting multiple target devices, or a click state (e.g., a knuckle click) for selecting multiple target devices that are spaced apart. Different preset states may be set in advance to correspond to the single-selection, multiple-selection, and click modes respectively, so that the user's current operation mode can be determined by recognizing different gestures.
For example, if multiple target devices can be selected at one time, a specific multi-selection state can be set. When the hand is in the multi-selection state, different devices can be selected in one pass by moving the hand, and they can then be adjusted with the same operation gesture in the subsequent steps. For instance, if a meeting room needs to use a projector and contains several smart curtains, the user can select all of the smart curtains by moving the hand in the multi-selection state (e.g., with the palm open) in front of the camera, so that the moving path of the multi-selection gesture crosses multiple device areas, and then lower all of the smart curtains with a single operation gesture (that is, the state of the hand corresponding to an execution action of the target device, described later), saving the time the user spends adjusting the devices.
For example, a specific click state may also be set, in which moving the hand sequentially selects devices whose device areas are not adjacent; the selected devices are then adjusted with the same operation gesture in the subsequent steps. For instance, in the meeting room scene, if device areas of other devices lie between the device areas of the smart curtains, the user cannot select the curtains continuously with a multi-selection gesture. Instead, the user can sequentially select the curtains by moving the hand in the click state (e.g., knuckle clicks) and then lower all of the smart curtains with an operation gesture, reducing the probability of accidentally operating other devices.
Step S320, determining a selected target device in the scene where the target object is located based on the position information of the hand of the target object in the operation area. Illustratively, the position information may include coordinate information; for example, a coordinate system may be established based on the operation area, the center of the hand may be taken as a coordinate point, and the target device may be determined by checking which device area the coordinate point falls into. In one example, this step may include: determining, based on the position information, a device area corresponding to the position information in the operation area, wherein the device area corresponds to at least one device; and taking the device corresponding to the device area as the selected target device in the scene where the target object is located. In the embodiments of the present disclosure, the operation area may include at least one device area, each device area corresponds to one target device, and the target device that the user wants to operate can be determined by comparing the device areas with the position information of the hand, thereby implementing intelligent operation of the devices.
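As an illustration only (the patent does not prescribe a data structure), the lookup in step S320 can be sketched as a point-in-region test; the device names and rectangular areas below are invented for the example.

```python
# Sketch of step S320, assuming each device area is an axis-aligned rectangle
# given in operation-area coordinates. All names and sizes are illustrative.
DEVICE_AREAS = {
    "tv":        (0,   0, 200, 150),   # (x, y, width, height)
    "curtain_1": (200, 0, 100, 150),
    "curtain_2": (300, 0, 100, 150),
}

def select_target_device(hand_center):
    """Return the device whose area contains the hand's coordinate point, if any."""
    hx, hy = hand_center
    for device, (x, y, w, h) in DEVICE_AREAS.items():
        if x <= hx < x + w and y <= hy < y + h:
            return device
    return None   # hand is in the operation area but outside every device area

print(select_target_device((250, 80)))   # -> curtain_1
```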
With reference to fig. 3 and 4, fig. 3 shows a reference schematic diagram of a spatial scene before two-dimensional modeling according to an embodiment of the disclosure, and fig. 4 shows a reference schematic diagram of the spatial scene after two-dimensional modeling according to an embodiment of the disclosure. In the embodiments of the present disclosure, a user may set a viewing angle (e.g., the shooting viewing angle in fig. 3) suitable for interacting with multiple devices in the current spatial scene, and the user or the manufacturer may model the multiple devices in two dimensions (e.g., the television area and the multiple curtain areas in fig. 4) or in three dimensions according to that viewing angle; the space in which the multiple device areas are located is correspondingly modeled in two or three dimensions. For example, in two-dimensional modeling, the space containing the device areas may be represented as an ellipse or a polygon (as shown in fig. 4); in three-dimensional modeling, it may be represented as an ellipsoid, a polyhedron, or the like. The embodiments of the present disclosure do not limit the specific shape of each device area, which can be determined according to actual modeling requirements, as long as each device area corresponds to one device. In one example, the modeled space may be divided into a corresponding number of device areas according to the position of each device in that space. In other words, the device areas can completely or partially fill the modeled space according to modeling requirements. The two-dimensional or three-dimensional modeled space is used as the operation area and mapped into the target image; the embodiments of the present disclosure do not limit the proportional relationship between the operation area and the target image, which can be determined by the user or the developer according to the actual situation.
In one example, the device areas in the operation area may be determined by: acquiring area coordinates of the device area corresponding to at least one device; and mapping the area coordinates onto the operation area to determine at least one device area in the operation area. The area coordinates may be the coordinates of boundary points of the device areas in the modeled space (e.g., the vertex coordinates of a polygon); that is, the boundary points of the device areas are mapped onto the operation area, so that the operation area in the target image can be divided into multiple device areas, and it can then be determined which device area a hand in the preset state is located in. For example, when a first size of the two-dimensional space scene formed by the device areas is inconsistent with a second size of the operation area, the boundary-point coordinates of each device area may first be scaled, where the scaling ratio may be determined from the ratio of the second size to the first size, and the scaled boundary-point coordinates are then mapped onto the operation area to determine the device areas in the operation area. In one example, if the shape of the modeled space differs from the shape of the operation area (e.g., the modeled space is a two-dimensional ellipse while the operation area is a rectangle), the centers of the two may be aligned before establishing the correspondence between the device areas in the modeled space and the operation area. In one example, the shooting orientation of the camera can be set so that the viewing angle of the two-dimensional space scene stays parallel to the viewing angle of the target image, which improves the mapping accuracy.
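A minimal sketch of this coordinate mapping is given below, assuming both the modeled scene and the operation area are rectangles; the sizes are placeholder values, not figures from the patent.

```python
# Sketch of mapping modeled boundary-point coordinates onto the operation area.
# The scale factors follow the second-size-to-first-size ratio described above.
SCENE_SIZE = (1920, 1080)       # first size: the modeled two-dimensional space scene
OPERATION_SIZE = (400, 300)     # second size: the operation area in the target image

def map_point(px, py):
    sx = OPERATION_SIZE[0] / SCENE_SIZE[0]
    sy = OPERATION_SIZE[1] / SCENE_SIZE[1]
    return px * sx, py * sy

def map_device_area(boundary_points):
    """Map one device area's boundary points (e.g. polygon vertices) onto the operation area."""
    return [map_point(px, py) for px, py in boundary_points]

# e.g. a modeled curtain area becomes a scaled polygon in the operation area
print(map_device_area([(960, 0), (1440, 0), (1440, 540), (960, 540)]))
```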
For example, whether the hand is located in the device region may be determined according to a preset rule, for example, if the hand is in an index finger click state, the hand may be determined to be located in a certain device region when the position of the index finger tip is within the device region.
Step S400, controlling the target device to perform feedback. The feedback indicates that the target device has been selected.
In one possible implementation, step S400 may include: controlling a feedback component corresponding to the target device to perform feedback. Illustratively, the feedback component may include a speaker, a screen, a light strip, and the like; the embodiments of the present disclosure are not limited thereto. In one example, the feedback component may be a light-emitting component, and controlling the target device to perform feedback may include controlling the light-emitting component of the target device to emit light in a mode different from that of the other devices. The light-emitting component may include a light strip that surrounds the target device, and interactive feedback between the target device and the user can be achieved by controlling it independently. For example, after the target device is determined, the light strip of the target device may emit light continuously while the light strips of the other devices stay dark; or the light strip of the target device may blink while the light strips of the other devices stay dark or keep emitting light continuously. In this way, even when the user does not know the specific location of a "device area", the user can clearly know which device is currently selected based on the feedback from the device's light-emitting component.
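For illustration, the feedback in step S400 might be driven by a small client like the one below; the LightStrip class and its set_mode method are hypothetical stand-ins for whatever Internet-of-Things control interface the devices actually expose.

```python
# Sketch of step S400 feedback: blink the selected device's light strip, keep the
# others dark. The class and method names are illustrative, not a real device API.
class LightStrip:
    def __init__(self, device_id):
        self.device_id = device_id

    def set_mode(self, mode):
        # A real client would send the command to the device over the network here.
        print(f"{self.device_id}: light strip -> {mode}")

def indicate_selection(selected_id, all_device_ids):
    """Make only the selected device's light strip blink (feedback that it is selected)."""
    for device_id in all_device_ids:
        LightStrip(device_id).set_mode("blink" if device_id == selected_id else "off")

indicate_selection("curtain_1", ["tv", "curtain_1", "curtain_2"])
```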
In one example, existing components of the target device may also be reused for feedback. For example, the display screen of a television can serve as the feedback component, or the television's speaker can serve as the feedback component; that is, the user can be prompted that the target device has been selected by lighting up the screen or playing a prompt tone.
In operation, the user can keep the hand in the preset state and observe the light emitted by each device's light-emitting component to determine which device is currently selected. If the selected device is not the one the user wants to operate, the user can move the hand, put it in the preset state again, and continue observing the light-emitting components until the desired device is selected.
Illustratively, a layout of the device area, such as that shown in fig. 4, may also be displayed via the display screen to assist the user in quickly selecting a device that the user desires to operate.
In one example, the position of the hand may also be displayed in the layout of the device region through the display screen, that is, the user may confirm whether the target device is successfully selected in the operation in the display screen. For example: in the case where the light emitting device is damaged, the user can know whether the target device is selected by this operation through the display screen.
According to the embodiments of the present disclosure, a target image containing a target object is acquired and an operation area in the target image is determined, which reduces, by means of computer vision, the possibility that a control instruction disturbs others; position information of the hand of the target object in the operation area and the state of the hand are then identified in the target image; the selected target device in the scene where the target object is located is determined based on that position information and hand state; and finally, the target device is controlled to perform feedback, so that the selected target device is intuitively shown to the user. In addition, because the user does not need to operate an application program on an electronic device when selecting the target device, the selection process is more convenient and faster.
With continued reference to fig. 2, in a possible implementation, the interaction method may further include:
Step S500, controlling the target device to execute a corresponding action according to the state of the hand in the target image. For example, the target image may be detected via a gesture detection algorithm or a machine learning model in the related art, and it is then determined whether the target image includes a hand in a specific state, so as to determine the specific operation to be performed on the target device. The embodiments of the present disclosure do not limit the specific states. For example, if the electronic device is provided with a display screen, it may display, on the screen, the hand states corresponding to the execution actions of the target device, to prompt the user which hand states can operate the target device, thereby lowering the threshold for using the above interaction method. Illustratively, the light-emitting component of the target device may be controlled to emit light continuously or to provide feedback in the form of a breathing light while the target device performs the action. In one example, the feedback from the target device may be turned off (e.g., the light-emitting component is turned off) when a hand in the specific state is no longer present in the target image.
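As a purely illustrative sketch of step S500 (the patent does not define a device API), the dispatch below maps a recognized hand state to an action on the selected device; the TargetDevice class and the state names are hypothetical.

```python
# Sketch of step S500: dispatch recognized hand states to actions on the target device.
# TargetDevice, its methods, and the state names are illustrative placeholders.
class TargetDevice:
    def __init__(self, device_id):
        self.device_id = device_id
        self.powered = False

    def toggle_power(self):
        self.powered = not self.powered

    def adjust(self, parameter, amount):
        print(f"{self.device_id}: {parameter} {amount:+.1f}")

def execute_action(device, hand_state, amount=0.0):
    # Each recognized state corresponds to one operation on the selected device.
    if hand_state == "grip_to_open":
        device.toggle_power()            # e.g. switch a television on or off
    elif hand_state == "rotate":
        device.adjust("volume", amount)  # rotation adjusts a continuous parameter

tv = TargetDevice("tv")
execute_action(tv, "grip_to_open")       # powered flips from False to True
execute_action(tv, "rotate", +25.0)
```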
In one possible implementation, step S500 may include: in response to the state of the hand being a first state, determining an adjustment parameter of the target device based on a state parameter of the hand in the first state; and/or determining an adjustment time of the target device based on a duration for which the hand is in the first state. For example, the first state may be a rotation state, a translation state, a gripping state, or the like. The adjustment parameter includes at least one of brightness, volume, wind speed, temperature, moving-object position, and moving direction. For example, brightness may include light brightness, display brightness, and the like; volume may include playback volume and the like; wind speed may include air-conditioner wind speed and the like; and temperature may include air-conditioner temperature and the like. The moving-object position includes the shielding position of a curtain after adjustment, the air-outlet position of an air conditioner after adjustment, the shooting position of a camera after adjustment, the illumination position of a lamp after adjustment, and the like. The moving direction includes the moving direction of a curtain, the rotation direction of an air-conditioner outlet baffle, the rotation direction of a camera, the rotation direction of a light-emitting component in a lamp, and the like. In other words, the target device can adjust its region of action through movements such as translation or rotation of its own components. The embodiments of the present disclosure are not limited in this respect; several different first states are provided below for reference.
In one possible implementation, step S500 may include: in response to the state of the hand being a rotation state, determining an adjustment magnitude of an adjustment parameter of the target device based on the rotation angle of the hand in the rotation state; and determining an adjustment time of the target device based on the duration for which the hand is in the rotation state. The adjustment parameter may include at least one of light brightness, playback volume, air-conditioner wind speed, and air-conditioner temperature. For example, the rotation state can be a hand state in which the forearm is kept still and the index finger is rotated clockwise or counterclockwise. For example, the rotation angle may be detected based on a three-axis coordinate system in the related art (i.e., a horizontal axis, a vertical axis, and a closed circular axis crossing the two); that is, the rotation angle may be expressed as the translation distance of the index fingertip along the circular axis (or the sweep angle of the index finger over the three axes). The rotation angle may also be detected based on a two-axis coordinate system (i.e., the horizontal and vertical axes of the image); that is, the rotation angle may be expressed as the angle swept by the index fingertip in the two-axis coordinate system. The embodiments of the present disclosure do not limit how the rotation angle is determined. By using the rotation angle and the duration, the adjustment of a continuously variable parameter becomes more intuitive and controllable.
For example, the rotation angle may be positively correlated with the adjustment magnitude of the adjustment parameter. For instance, if the rotation angle is 50 degrees, the duration is 5 seconds, and the device is a smart speaker, the smart speaker increases (or decreases) its volume at 5 decibels per second for the 5 seconds during which the user's finger stays in place. In the same scene, if the rotation angle is 90 degrees and the duration is 5 seconds, the smart speaker increases (or decreases) its volume at 10 decibels per second for those 5 seconds. The above values are only examples; the specific adjustment amplitude can be determined by the manufacturer or the user according to the actual situation.
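The arithmetic of this example can be sketched as follows; the linear 0.1 dB-per-second-per-degree mapping is only an assumed illustration that matches the 50-degree case, since the patent leaves the exact mapping to the manufacturer or the user.

```python
# Worked sketch of the rotation example: the per-second rate grows with the rotation
# angle, and the adjustment runs for as long as the gesture is held. The 0.1 factor
# is an assumed mapping, not a value specified by the patent.
def volume_change(rotation_angle_deg, duration_s, sign=+1):
    rate_db_per_s = 0.1 * rotation_angle_deg      # positively correlated with the angle
    return sign * rate_db_per_s * duration_s      # total change while the gesture is held

print(volume_change(50, 5))      # 5 dB/s held for 5 s -> +25 dB in total
print(volume_change(50, 5, -1))  # same gesture mapped to a decrease -> -25 dB
```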
In one possible implementation, step S500 may include: in response to the state of the hand being a translation state, determining an adjustment magnitude of an adjustment parameter of the target device based on the movement distance of the hand in the translation state, and determining an adjustment time of the target device based on the duration for which the hand is in the translation state. For example, the translation state may be translating the index finger. The movement distance may also be determined through the three-axis coordinate system or the two-axis coordinate system described above, which is not repeated here. By using the movement distance and the duration, the adjustment of a continuously variable parameter becomes more intuitive and controllable.
For example, the movement distance may be positively correlated with the adjustment magnitude of the adjustment parameter. For instance, if the movement distance is 100 px (pixels), the duration is 5 seconds, and the device is a smart speaker, the smart speaker increases (or decreases) its volume at 5 decibels per second for the 5 seconds during which the user's finger stays in place. In the same scenario, if the movement distance is 200 px and the duration is 5 seconds, the smart speaker increases (or decreases) its volume at 10 decibels per second for those 5 seconds. The above values are only examples; the specific adjustment amplitude can be determined by the manufacturer or the user according to the actual situation.
In one possible implementation, step S500 may include: in response to the state of the hand being in a gripping state, determining an adjustment distance of an adjustment parameter of the target device based on the dragging distance of the hand being in the gripping state; determining an adjustment direction of the adjustment parameter based on a dragging direction of the hand in a gripping state. The above-mentioned adjusting parameters include: at least one of a curtain position, an air conditioner wind direction, a camera position and an illumination position. For example, the dragging distance and the dragging direction may also be determined by the three-axis coordinate system or the two-axis coordinate system, which is not described herein again in the embodiments of the present disclosure.
For example, the dragging distance may be positively correlated with the adjustment distance of the adjustment parameter, and the adjustment direction may follow the dragging direction. For instance, if the dragging distance is 100 px, the dragging direction is downward, and the device is a smart curtain, the smart curtain is lowered by 2 centimeters. In the same scene, if the dragging distance is 200 px and the dragging direction is up and to the right, the smart curtain is raised by 4 centimeters.
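The drag example can be sketched in the same spirit; the 0.02 cm-per-pixel factor is an assumption chosen to reproduce the 100 px to 2 cm figure, not a value fixed by the patent.

```python
# Worked sketch of the grip-and-drag example: drag distance maps to curtain travel
# and the adjustment direction mirrors the drag direction. The 0.02 cm/px factor is
# an assumed mapping that matches the 100 px -> 2 cm example above.
def curtain_adjustment(drag_distance_px, drag_direction):
    travel_cm = 0.02 * drag_distance_px       # positively correlated with drag distance
    return travel_cm, drag_direction          # direction follows the drag direction

print(curtain_adjustment(100, "down"))   # -> (2.0, 'down'): lower the curtain 2 cm
print(curtain_adjustment(200, "up"))     # -> (4.0, 'up'):   raise the curtain 4 cm
```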
In one possible implementation, step S500 may include: switching the on/off state of the target device in response to the state of the hand changing from a gripping state to an open state. For example, if the device is a smart television whose on/off state is off, and the user's hand changes from the gripping state to the open state, the on/off state of the smart television changes from off to on.
The above various possible embodiments can be arranged and combined according to actual requirements. For example: manufacturers can directly integrate the combined interaction method into the electronic device to reduce the operation difficulty of users, and can also display the above various possible implementation modes on the screen of the electronic device, so that the user can customize the interaction method to better meet the actual requirements of the users, and the embodiment of the disclosure is not limited herein.
Several scenarios that use the above interaction method are listed here for reference. 1. In an office scene, a user can conveniently control the air conditioner, lights, curtains, television, and video-conference system in the office through specific gestures while sitting in a chair; even when someone knocks on the door, the user can open the door with a specific gesture (that is, the user selects the door with a selection gesture and opens it with an operation gesture), without spending time searching for the corresponding device in an application program. 2. In a meeting-room scene, when the projector is needed, the user only needs to make a few gestures (that is, select the projector, the curtains, and so on with selection gestures and then operate each of them with operation gestures) to accomplish tasks that would otherwise require the cooperation of multiple people, such as remotely controlling the projector and pulling down the curtains, which greatly saves labor. 3. In a home scene, while sitting on the sofa, the user can conveniently control the lights and the air conditioner through specific gestures, and even the wind direction of the air conditioner, the brightness and color of the light, the volume of the speaker, and the switching on and off of the game console, without getting up, which makes the human-computer interaction process convenient.
It is understood that the above method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principles and logic; for brevity, details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an interaction apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the interaction methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the method sections, and details are not repeated here.
Fig. 5 shows a block diagram of an interaction device according to an embodiment of the present disclosure, and as shown in fig. 5, the interaction device 100 includes: the target image acquiring module 110 is configured to acquire a target image including a target object, and determine an operation area in the target image. An information identifying module 120, configured to identify position information of the hand of the target object in the operation area in the target image and a state of the hand. A target device determining module 130, configured to determine, based on the position information of the hand of the target object in the operation area and the state of the hand, a selected target device in a scene where the target object is located. A target device control module 140, configured to control the target device to perform feedback, where the feedback indicates that the target device has been selected.
In a possible implementation manner, the determining, based on the position information of the hand of the target object in the operation area and the state of the hand, a selected target device in a scene where the target object is located includes: determining position information of the hand in the operation area in response to the state of the hand in the operation area being in a preset state; and determining the selected target equipment in the scene where the target object is positioned based on the position information of the hand of the target object in the operation area.
In a possible implementation manner, the determining, based on the position information of the hand of the target object in the operation area, a selected target device in a scene where the target object is located includes: determining a device area corresponding to the position information in the operation area based on the position information; wherein the equipment area corresponds to at least one piece of equipment; and taking the equipment corresponding to the equipment area as the selected target equipment in the scene where the target object is located.
In a possible implementation, the determining, based on the location information, a device area corresponding to the location information in the operation area includes: acquiring the area coordinates of an equipment area corresponding to at least one piece of equipment; mapping the region coordinates to the operating region to determine at least one device region in the operating region.
In one possible implementation, the interaction method further includes: and controlling the target equipment to execute corresponding actions according to the state of the hand in the target image.
In a possible implementation manner, the controlling the target device to perform a corresponding action according to the state of the hand in the target image includes: in response to the state of the hand being a first state, determining an adjustment parameter for the target device based on a state parameter of the hand in the first state; and/or determining an adjustment time for the target device based on a duration for which the hand is in the first state.
In one possible embodiment, the adjustment parameters include: at least one of brightness, volume, wind speed, temperature, moving object position, and moving direction.
In a possible implementation, the target device is provided with a feedback component, and the controlling the target device to perform feedback includes: and controlling a feedback component corresponding to the target equipment to perform feedback.
In one possible implementation, the interaction method further includes: and displaying the state of the hand corresponding to the execution action of the target equipment through a screen.
In one possible implementation, the interaction method further includes: and displaying the working state corresponding to the target equipment through a screen.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device performs the above method.
The electronic device may be provided as a terminal, or other modality of device.
Fig. 6 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or other terminal device.
Referring to fig. 6, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second generation mobile communication technology (2G), a third generation mobile communication technology (3G), a fourth generation mobile communication technology (4G), a long term evolution of universal mobile communication technology (LTE), a fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK), or the like.
The foregoing description of the various embodiments emphasizes the differences between the embodiments; for the parts that are the same or similar, the embodiments may be referred to one another, and, for brevity, those parts are not described again herein.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order in which the steps are executed should be determined by their functions and possible inherent logic.
If the technical solution of the present application involves personal information, a product applying the technical solution of the present application clearly informs users of the personal information processing rules and obtains the individual's separate consent before processing the personal information. If the technical solution of the present application involves sensitive personal information, a product applying the technical solution of the present application obtains the individual's separate consent before processing the sensitive personal information and, at the same time, satisfies the requirement of "express consent". For example, at a personal information collection device such as a camera, a clear and prominent sign is provided to inform users that they have entered the personal information collection range and that personal information will be collected; if an individual voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, with the personal information processing rules communicated by means of a prominent sign or notice, personal authorization is obtained through a pop-up message, by asking the individual to upload his or her personal information, or by similar means. The personal information processing rules may include information such as the personal information processor, the purpose of the personal information processing, the processing method, and the types of personal information to be processed.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. An interaction method, characterized in that the interaction method comprises:
acquiring a target image containing a target object, and determining an operation area in the target image;
identifying, in the target image, position information of a hand of the target object in the operation area and a state of the hand;
determining a selected target device in a scene where the target object is located based on position information of a hand of the target object in the operation area and the state of the hand;
controlling the target device to perform feedback, the feedback indicating that the target device has been selected.
2. The interaction method of claim 1, wherein the determining the selected target device in the scene where the target object is located based on the position information of the hand of the target object in the operation area and the state of the hand comprises:
determining position information of the hand in the operation area in response to the state of the hand in the operation area being in a preset state;
and determining the selected target device in the scene where the target object is located based on the position information of the hand of the target object in the operation area.
3. The interaction method of claim 2, wherein the determining the selected target device in the scene in which the target object is located based on the position information of the hand of the target object in the operation area comprises:
determining a device area corresponding to the position information in the operation area based on the position information; wherein the device area corresponds to at least one device;
and taking the device corresponding to the device area as the selected target device in the scene where the target object is located.
4. The interaction method of claim 3, wherein the determining, based on the position information, a device area corresponding to the position information in the operation area comprises:
acquiring area coordinates of a device area corresponding to at least one device;
and mapping the area coordinates to the operation area to determine at least one device area in the operation area.
5. The interaction method of any one of claims 1-4, wherein the interaction method further comprises:
and controlling the target device to execute a corresponding action according to the state of the hand in the target image.
6. The interaction method of claim 5, wherein the controlling the target device to execute a corresponding action according to the state of the hand in the target image comprises:
in response to the state of the hand being a first state, determining an adjustment parameter for the target device based on a state parameter of the hand in the first state; and/or
determining an adjustment time for the target device based on a duration for which the hand is in the first state.
7. The interaction method of claim 6, wherein the adjustment parameter comprises: at least one of brightness, volume, wind speed, temperature, moving object position, and moving direction.
8. The interaction method according to any one of claims 1 to 7, wherein the target device is provided with a feedback component, and the controlling the target device to perform feedback comprises: controlling the feedback component corresponding to the target device to perform feedback.
9. The interaction method according to any one of claims 1 to 8, characterized in that the interaction method further comprises: displaying, through a screen, the state of the hand corresponding to the action executed by the target device.
10. The interaction method according to any one of claims 1 to 9, characterized in that the interaction method further comprises: displaying, through a screen, the working state of the target device.
11. An interaction apparatus, characterized in that the interaction apparatus comprises:
a target image acquisition module, configured to acquire a target image containing a target object and determine an operation area in the target image;
an information identification module, configured to identify, in the target image, position information of a hand of the target object in the operation area and a state of the hand;
a target device determination module, configured to determine a selected target device in a scene where the target object is located based on the position information of the hand of the target object in the operation area and the state of the hand;
and a target device control module, configured to control the target device to perform feedback, the feedback indicating that the target device has been selected.
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the interaction method of any of claims 1 to 10.
13. A computer-readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the interaction method of any one of claims 1 to 10.
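Only as a non-limiting illustration of the coordinate mapping recited in claims 3 and 4, the sketch below normalizes hypothetical device bounding boxes into an operation area located in the target image and then selects the device whose mapped region contains the hand position. All identifiers, coordinates, and the simple axis-aligned normalization are assumptions made for illustration; the claims only require that area coordinates of a device area be acquired and mapped to the operation area, and an actual implementation might, for example, use a calibrated homography instead.

```python
# Hypothetical sketch of mapping device area coordinates into the operation area.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Rect = Tuple[float, float, float, float]  # (x0, y0, x1, y1) in image pixels


@dataclass
class OperationArea:
    bounds: Rect  # operation area located in the target image, in pixels

    def to_area_coords(self, rect: Rect) -> Rect:
        """Map a pixel-space rectangle into the operation area's normalized 0..1 space."""
        ax0, ay0, ax1, ay1 = self.bounds
        width, height = ax1 - ax0, ay1 - ay0
        x0, y0, x1, y1 = rect
        return ((x0 - ax0) / width, (y0 - ay0) / height,
                (x1 - ax0) / width, (y1 - ay0) / height)


def build_device_regions(area: OperationArea, device_rects: Dict[str, Rect]) -> Dict[str, Rect]:
    """Acquire per-device area coordinates and map them into the operation area."""
    return {device_id: area.to_area_coords(rect) for device_id, rect in device_rects.items()}


def pick_device(regions: Dict[str, Rect], hand_xy: Tuple[float, float]) -> Optional[str]:
    """Return the device whose mapped region contains the normalized hand position."""
    x, y = hand_xy
    for device_id, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return device_id
    return None


# Example usage with made-up coordinates:
area = OperationArea(bounds=(100, 50, 500, 350))
regions = build_device_regions(area, {"lamp": (120, 60, 200, 150), "fan": (300, 80, 420, 200)})
print(pick_device(regions, hand_xy=(0.15, 0.20)))  # -> "lamp"
```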
CN202210042513.3A 2022-01-14 2022-01-14 Interaction method, interaction device, electronic equipment and storage medium Pending CN114384848A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210042513.3A CN114384848A (en) 2022-01-14 2022-01-14 Interaction method, interaction device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114384848A true CN114384848A (en) 2022-04-22

Family

ID=81201306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210042513.3A Pending CN114384848A (en) 2022-01-14 2022-01-14 Interaction method, interaction device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114384848A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339489A (en) * 2008-08-14 2009-01-07 炬才微电子(深圳)有限公司 Human-computer interaction method, device and system
CN101813922A (en) * 2009-08-10 2010-08-25 李艳霞 Intelligent control device with orientation identification function
CN108377212A (en) * 2017-01-30 2018-08-07 联发科技股份有限公司 The control method and its electronic system of household electrical appliance
CN107493495A (en) * 2017-08-14 2017-12-19 深圳市国华识别科技开发有限公司 Interaction locations determine method, system, storage medium and intelligent terminal
US20200310637A1 (en) * 2019-03-25 2020-10-01 Samsung Electronics Co., Ltd. Electronic device performing function according to gesture input and operation method thereof
CN110471296A (en) * 2019-07-19 2019-11-19 深圳绿米联创科技有限公司 Apparatus control method, device, system, electronic equipment and storage medium
CN111949134A (en) * 2020-08-28 2020-11-17 深圳Tcl数字技术有限公司 Human-computer interaction method, device and computer-readable storage medium
CN112068698A (en) * 2020-08-31 2020-12-11 北京市商汤科技开发有限公司 Interaction method and device, electronic equipment and computer storage medium
CN113190106A (en) * 2021-03-16 2021-07-30 青岛小鸟看看科技有限公司 Gesture recognition method and device and electronic equipment
CN113486765A (en) * 2021-06-30 2021-10-08 上海商汤临港智能科技有限公司 Gesture interaction method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Qin Yongqiang; Yu Chun; Shi Yuanchun: "Large-Screen Interaction Technology Based on a Structured Light Source", Acta Electronica Sinica, no. 1, 15 April 2009 (2009-04-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117021117A (en) * 2023-10-08 2023-11-10 电子科技大学 Mobile robot man-machine interaction and positioning method based on mixed reality
CN117021117B (en) * 2023-10-08 2023-12-15 电子科技大学 Mobile robot man-machine interaction and positioning method based on mixed reality

Similar Documents

Publication Publication Date Title
EP3062196B1 (en) Method and apparatus for operating and controlling smart devices with hand gestures
US10564833B2 (en) Method and apparatus for controlling devices
EP3144915B1 (en) Method and apparatus for controlling device, and terminal device
US20170091551A1 (en) Method and apparatus for controlling electronic device
EP3131258A1 (en) Smart household equipment control methods and corresponding apparatus
CN109087238B (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
EP4246287A1 (en) Method and system for displaying virtual prop in real environment image, and storage medium
CN106791893A (en) Net cast method and device
CN108710306B (en) Control method and device of intelligent equipment and computer readable storage medium
US9794495B1 (en) Multiple streaming camera navigation interface system
CN111970456B (en) Shooting control method, device, equipment and storage medium
CN103927101B (en) The method and apparatus of operational controls
CN111610912B (en) Application display method, application display device and storage medium
CN109496293B (en) Extended content display method, device, system and storage medium
CN112327653A (en) Device control method, device control apparatus, and storage medium
EP3282644A1 (en) Timing method and device
US20150288764A1 (en) Method and apparatus for controlling smart terminal
CN110782532A (en) Image generation method, image generation device, electronic device, and storage medium
CN114384848A (en) Interaction method, interaction device, electronic equipment and storage medium
CN108845852B (en) Control method and device of intelligent equipment and computer readable storage medium
CN114296587A (en) Cursor control method and device, electronic equipment and storage medium
CN111782053B (en) Model editing method, device, equipment and storage medium
CN115543064A (en) Interface display control method, interface display control device and storage medium
CN106527954B (en) Equipment control method and device and mobile terminal
CN106773753B (en) Equipment control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination