CN112068698A - Interaction method and device, electronic equipment and computer storage medium


Info

Publication number
CN112068698A
Authority
CN
China
Prior art keywords
hand
state
determining
area
display interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010899228.4A
Other languages
Chinese (zh)
Inventor
徐持衡
周舒岩
谢符宝
黄攀
王琳
杨松
陈彬
刘昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202010899228.4A
Publication of CN112068698A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/113Recognition of static hand signs

Abstract

The embodiment of the disclosure discloses an interaction method, an interaction device, electronic equipment and a computer storage medium. The method comprises the following steps: obtaining an image containing a target object, and determining an operation area in the image; identifying a position of a hand of the target object in the image in the operation area and a state of the hand; determining an operation position of an operation point corresponding to the hand in a display interface based on the position of the hand in the operation area, and determining the state of the operation point according to the state of the hand; and executing response operation based on the state of the operation point at the operation position.

Description

Interaction method and device, electronic equipment and computer storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to an interaction method and apparatus, an electronic device, and a computer storage medium.
Background
In recent years, the touch interaction mode has been widely applied in mobile terminals such as mobile phones and tablet computers, with a large number of applications and interaction designs that greatly improve the interaction experience. However, touch interaction requires the user to be within arm's length of the device, and it cannot be used in scenes where direct touch is impossible or inconvenient.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application provide an interaction method, an interaction device, an electronic device, and a computer storage medium, which realize contact-free (mid-air) touch control through gestures.
In order to achieve the above purpose, the technical solution of the embodiment of the present application is implemented as follows:
the embodiment of the application provides an interaction method, which comprises the following steps:
obtaining an image containing a target object, and determining an operation area in the image;
identifying a position of a hand of the target object in the image in the operation area and a state of the hand;
determining an operation position of an operation point corresponding to the hand in a display interface based on the position of the hand in the operation area, and determining the state of the operation point according to the state of the hand;
and executing response operation based on the state of the operation point at the operation position.
In some optional embodiments of the present application, the determining the operation region in the image comprises:
identifying a first area in the image corresponding to at least part of a limb of the target object; the at least part of the limb comprises at least a head;
determining the operating region based on the first region.
In some optional embodiments of the present application, the determining the operation region based on the first region comprises: reducing or enlarging the first region, based on the proportion of the first region in the area where the image is located, to obtain the operation region; or taking the first region as the operation region.
In some optional embodiments of the present application, the determining, based on the position of the hand in the operation region, an operation position of an operation point corresponding to the hand in a display interface includes:
determining a first relative positional relationship of the hand in the operation region; the first relative positional relationship represents a relative position of the hand in the operation region in a first coordinate system;
and determining the operation position of the operation point corresponding to the hand in the display interface based on the first relative position relation.
In some optional embodiments of the present application, the relative position of the operation position within the area where the display interface is located has a set matching relationship with the first relative positional relationship.
In some optional embodiments of the present application, the determining a first relative positional relationship of the hand in the operation region includes: determining a first coordinate position of a specific key point of the hand in the first coordinate system corresponding to the operation area;
the determining the operation position of the operation point corresponding to the hand in the display interface based on the first relative position relationship comprises: determining a second coordinate position corresponding to the first coordinate position in a second coordinate system corresponding to the display interface based on the transformation relation, and taking the second coordinate position as the operation position of the operation point; wherein the transformation relationship is a transformation relationship between the first coordinate system and the second coordinate system.
In some optional embodiments of the present application, the determining the state of the operation point according to the state of the hand comprises: determining the state of the operation point corresponding to the state of the hand based on a mapping set obtained in advance; the mapping set comprises a plurality of mapping relationships, each between a hand state and a corresponding operation point state.
In some optional embodiments of the present application, the method further comprises: and displaying an operation identifier corresponding to the state of the operation point at the operation position in the display interface.
In some optional embodiments of the present application, the method further comprises: allocating a first identifier to the hand part and a second identifier to the operation point, and establishing a mapping relation between the first identifier and the second identifier, wherein the mapping relation is used for controlling the operation point to move in the display interface along with the movement of the hand part.
In some optional embodiments of the present application, the state of the hand comprises a palm state and/or a fist state.
In some optional embodiments of the present application, the state of the operating point comprises: a non-touch screen state and/or a touch screen state.
An embodiment of the present application further provides an interaction apparatus, where the apparatus includes: a first determining unit, an identifying unit, a second determining unit and an executing unit; wherein:
the first determining unit is used for obtaining an image containing a target object and determining an operation area in the image;
the identification unit is used for identifying the position of a hand of the target object in the image in the operation area and the state of the hand;
the second determining unit is used for determining the operation position of an operation point corresponding to the hand in a display interface based on the position of the hand in the operation area, and determining the state of the operation point according to the state of the hand;
the execution unit is used for executing response operation based on the state of the operation point at the operation position.
In some optional embodiments of the present application, the first determining unit is configured to identify a first region corresponding to at least part of a limb of the target object in the image; the at least part of the limb comprises at least a head; determining the operating region based on the first region.
In some optional embodiments of the present application, the first determining unit is configured to perform reduction or enlargement processing on the first region based on a ratio of the first region to a region where the image is located, so as to obtain the operation region; or, the first region is taken as the operation region.
In some optional embodiments of the present application, the second determining unit is configured to determine a first relative positional relationship of the hand in the operation area; the first relative positional relationship represents a relative position of the hand in the operation region in a first coordinate system; and determining the operation position of the operation point corresponding to the hand in the display interface based on the first relative position relation.
In some optional embodiments of the present application, the relative position of the operation position within the area where the display interface is located has a set matching relationship with the first relative positional relationship.
In some optional embodiments of the present application, the second determining unit is configured to determine a first coordinate position of a specific key point of the hand in the first coordinate system corresponding to the operation area; and to determine, based on a transformation relationship, a second coordinate position corresponding to the first coordinate position in a second coordinate system corresponding to the display interface, and take the second coordinate position as the operation position of the operation point; wherein the transformation relationship is a transformation relationship between the first coordinate system and the second coordinate system.
In some optional embodiments of the present application, the second determining unit is configured to determine, based on a mapping set obtained in advance, the state of the operation point corresponding to the state of the hand; the mapping set comprises a plurality of mapping relationships, each between a hand state and a corresponding operation point state.
In some optional embodiments of the present application, the apparatus further includes a display unit, configured to display, at the operation position in the display interface, an operation identifier corresponding to the state of the operation point.
In some optional embodiments of the present application, the apparatus further comprises an assigning unit and a mapping unit; the distribution unit is used for distributing a first identifier for the hand and distributing a second identifier for the operation point;
the mapping unit is used for establishing a mapping relation between the first identifier and the second identifier, and the mapping relation is used for controlling the operation point to move along with the movement of the hand in the display interface.
In some optional embodiments of the present application, the state of the hand comprises a palm state and/or a fist state.
In some optional embodiments of the present application, the state of the operating point comprises: a non-touch screen state and/or a touch screen state.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the method according to the embodiments of the present application.
The embodiment of the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the computer program, the steps of the method according to the embodiment of the present application are implemented.
According to the interaction method and device, the electronic device and the computer storage medium of the embodiments of the present application, a mapping from the operation area to the area where the display interface is located is determined; the operation position of the operation point in the display interface is then determined according to the position of the hand in the operation area, and the state of the operation point is determined according to the state of the hand, so that a response operation corresponding to the state of the hand is executed based on the state of the operation point at the operation position. A scheme in which a user interacts with the electronic device based on the state of the hand is thereby achieved.
Drawings
Fig. 1 is a first flowchart illustrating an interaction method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an operation area and an interaction position in an interaction method according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a second interaction method according to an embodiment of the present application;
fig. 4 is a third schematic flowchart of an interaction method according to an embodiment of the present application;
FIG. 5 is a first schematic diagram illustrating a structure of an interaction device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a second exemplary embodiment of an interaction device;
fig. 7 is a schematic structural diagram of a third component of an interaction device according to an embodiment of the present application;
fig. 8 is a schematic diagram of a hardware component structure of an electronic device according to an embodiment of the present application.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and specific embodiments.
The embodiment of the application provides an interaction method. Fig. 1 is a first flowchart illustrating an interaction method according to an embodiment of the present application; as shown in fig. 1, the method includes:
step 101: obtaining an image containing a target object, and determining an operation area in the image;
step 102: identifying a position of a hand of the target object in the image in the operation area and a state of the hand;
step 103: determining an operation position of an operation point corresponding to the hand in a display interface based on the position of the hand in the operation area, and determining the state of the operation point according to the state of the hand;
step 104: and executing response operation based on the state of the operation point at the operation position.
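Purely as an illustration of how steps 101 to 104 fit together, the following Python sketch runs the four steps over one captured frame. The detection, mapping and response stages are passed in as callables because the embodiment does not fix any particular model or API, and every name below is an assumption of the sketch rather than part of the disclosure.

```python
from typing import Callable, Optional, Tuple

def interact_on_frame(frame,
                      determine_operation_region: Callable,   # step 101
                      identify_hand: Callable,                # step 102
                      map_to_display: Callable,               # step 103 (position)
                      hand_state_to_point_state: Callable,    # step 103 (state)
                      respond: Callable) -> Optional[Tuple]:  # step 104
    """One pass of the interaction method over a captured image. Every
    callable is an assumed stage interface, not an API defined here."""
    operation_region = determine_operation_region(frame)
    hand = identify_hand(frame, operation_region)
    if hand is None:                        # hand not inside the operation region
        return None
    hand_position, hand_state = hand        # e.g. ((x, y), "palm" or "fist")
    operation_position = map_to_display(hand_position, operation_region)
    point_state = hand_state_to_point_state(hand_state)
    respond(point_state, operation_position)    # e.g. synthesize a touch event
    return operation_position, point_state
```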
The interaction method in this embodiment may be applied to an electronic device, and the electronic device may be a user device. In some alternative embodiments, the electronic device may include a cell phone, tablet, computer, game console, or the like; in other optional embodiments, the electronic device may also be a display device such as a smart television, a projector, a smart screen, an outdoor display, and the like.
In some optional embodiments, an image capture component (e.g., a camera) may be included in the electronic device, through which an image containing the target object is obtained. In other alternative embodiments, the electronic device may include a communication component, and the communication component obtains an image including the target object captured by another camera (for example, a camera independently disposed in the image capture area, or a camera in another electronic device). For example, taking the electronic device as a mobile phone, an image including the target object may be acquired by a front camera of the mobile phone.
For example, the image capturing component (e.g., a camera) may be a general camera, and accordingly, the image containing the target object may also be a general two-dimensional image, such as an RGB image, and the general image capturing component may be used to obtain the general two-dimensional image in the embodiments of the present application.
Illustratively, the target object may specifically be a target person; the target person may specifically be a person in the image that is located in the foreground; alternatively, the target person may be a specified person in the image.
In this embodiment, a target object (i.e., a target person) is included in the image, and when at least part of the limbs of the target object and the hand of the target object in the image are recognized, the interaction with the electronic device is further realized by recognizing the position information of the hand and the state of the hand.
In this embodiment, the operation region in the image is an effective region in which a response operation can be performed based on the hand of the target object, or it can be understood that, when the hand of the target object is within the operation region in the image, a subsequent response operation can be performed.
In this embodiment, the operation area and the display interface have a mapping relationship, and any position in the operation area can be mapped to one operation position in the display interface. The operation position of the operation point corresponding to the hand in the display interface can be determined based on the position of the hand in the operation area.
For example, fig. 2 is a schematic diagram of an operation area and an operation position in an interaction method according to an embodiment of the present application; as shown in fig. 2, the spatial range of image capture includes at least part of the limbs of the user, i.e. the image capture assembly can capture an image including at least part of the limbs of the user; the operation area corresponds to the range within which at least part of the limbs of the user's upper body can operate, so the image acquisition spatial range is larger than the range of the operation area. In the display interface of the electronic device, the operation position of the operation point corresponding to the hand corresponds to the position of the hand within the operation area. For example, the user's hand may be at the edge of the operation area while still being near the middle of the image acquisition spatial range; the operation position of the operation point is then at the edge of the display interface, and the relative position of the operation point in the display interface has a set matching relationship with the relative position of the user's hand in the operation area.
In some optional embodiments of the present disclosure, the determining the operation region in the image comprises: identifying a first area in the image corresponding to at least part of a limb of the target object; the at least part of the limb comprises at least a head; determining the operating region based on the first region.
The operation range of the hand is limited: it lies within a circular area centered on the shoulder with a radius equal to the arm length, and in practice the area actually used by the hand is generally even smaller than this circle. Therefore, in this embodiment, a first region corresponding to at least part of the limbs of the target object is identified in the image, the operation region is determined based on the first region, and points in the operation region are mapped to the display interface, so that points within the spatial range the user can actually reach are mapped to the display interface and the user can interact with any position in the display interface within a limited range of movement.
In this embodiment, since the operation is performed by the hand, and usually the operation is concentrated in the upper body region and even the head region of the human body, at least part of the limbs (including at least the head) of the target object in the image can be recognized by the target recognition model, and the operation region is determined based on the first region where at least part of the limbs (including at least the head) of the recognized target object is located.
In some examples, the detection frame corresponding to the face in the image may be obtained by a face recognition model. For example, a center point of a detection frame of an area where a face is located in an image and a size of the detection frame of the area where the face is located are obtained through a face recognition model, an operation area is determined based on a first area where the detection frame of the face is located, and the operation area can be specifically determined according to information (such as the center point, the size and the like) of the first area where the face detection frame is located. Illustratively, the determining the operating region based on the first region includes: based on the proportion of the first area in the area where the image is located, carrying out reduction or enlargement processing on the first area to obtain the operation area; or, the first region is taken as the operation region.
Illustratively, in one aspect, the proportion of the first area in the area of the image is related to the distance between the target object and the electronic device; the larger the distance between the target object and the electronic equipment is, the larger the proportion of the first area in the area where the image is located is; correspondingly, the smaller the distance between the target object and the electronic equipment is, the smaller the proportion of the first area in the area where the image is located is. On the other hand, the distance between the target object and the electronic device is related to the range of the operation region. If the distance between the target object and the electronic equipment is larger than or equal to the arm length, the user can operate according to the maximum operation range with the arm length as the reference; if the distance between the target object and the electronic device is smaller than the arm length, the user cannot operate within the maximum operation range based on the arm length, and in this case, the user can operate within the maximum operation range based on the distance from the electronic device.
Based on this, the embodiment may provide at least two operation modes according to the proportion of the first region in the area where the image is located (or, equivalently, the distance between the target object and the electronic device): a first operation mode and a second operation mode. The first operation mode corresponds to the case where the distance between the target object and the electronic device exceeds the arm length, and the second operation mode to the case where the distance is less than or equal to the arm length. In the first operation mode, since the target object is far from the electronic device, it occupies a small proportion of the image, and the first region corresponding to at least part of its limbs is also relatively small, while the space in which the target object can operate is relatively large; the first region can therefore be enlarged to obtain a larger operation region. In the second operation mode, because the target object is close to the electronic device, the image may contain no face and only a hand; therefore, the first region may be used directly as the operation region, or the operation region may be obtained by reducing the first region.
For example, if the electronic device is a smart television or a projector, and the distance between the user and the smart television or the projector is far and exceeds the arm length, the method for determining the operation area in the first operation mode may be adopted. If the electronic device is a mobile phone, because the size of the screen of the mobile phone is relatively small, the user can usually operate the mobile phone within a relatively short distance range, for example, in a scene where the user holds the mobile phone to perform live broadcast, the distance between the mobile phone and the user is less than or equal to the arm length, and the mode of determining the operation area in the second operation mode can be adopted.
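As a concrete illustration of the two operation modes, the following Python sketch derives an operation region from a detected face box by enlarging it when the box occupies only a small proportion of the image (far user) and keeping it otherwise (near user). The proportion threshold and scale factors are assumed example values, not values prescribed by the disclosure.

```python
def operation_region_from_face(face_box, image_size, small_ratio=0.05,
                               enlarge=6.0, shrink=1.0):
    """Map a face detection box to an operation region. All boxes are
    (x, y, w, h) in pixels; thresholds and scale factors are assumed
    example values, not prescribed by the disclosure."""
    x, y, w, h = face_box
    img_w, img_h = image_size
    ratio = (w * h) / float(img_w * img_h)

    # First operation mode: the face box is a small fraction of the image,
    # so the user is far away and the region is enlarged around the face.
    # Second operation mode: the user is close, so the box is kept (or shrunk).
    scale = enlarge if ratio < small_ratio else shrink

    cx, cy = x + w / 2.0, y + h / 2.0          # centre of the face box
    new_w, new_h = w * scale, h * scale
    # Clip the operation region to the image bounds.
    left = max(0.0, cx - new_w / 2.0)
    top = max(0.0, cy - new_h / 2.0)
    right = min(float(img_w), cx + new_w / 2.0)
    bottom = min(float(img_h), cy + new_h / 2.0)
    return (left, top, right - left, bottom - top)


# Example: a 100x100 face box in a 1920x1080 frame (far user) is enlarged.
print(operation_region_from_face((900, 300, 100, 100), (1920, 1080)))
```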
In this embodiment, after the operation region in the image is determined, the position of the hand of the target object in the operation region and the state of the hand in the image are recognized.
In some alternative embodiments, the state of the hand is, for example, a palm state and/or a fist state, and may also be a state in which a specific gesture is made. The palm state can be the state in which the five fingers of the hand are open. For example, if it is identified that the five fingers of the hand are open, with the palm facing the image capture assembly or the back of the hand facing the image capture assembly, it can be determined that the hand is in the palm state. Of course, the state of the hand in this embodiment is not limited to the above states, and other states (for example, user-defined states such as a single-finger pointing state, a multi-finger pointing state, or a palm sliding state) are also within the scope of the embodiments of the present disclosure.
In some optional embodiments, the image may comprise multiple frames. A target detection model may then be used on the first frame to identify the hand of the target object and determine the region where the hand is located in that frame; for frames after the first, a tracking algorithm may be used to process each frame based on the region where the hand was located in the first frame, obtaining the region where the hand is located in the subsequent frame. Tracking the hand in this way avoids running the computationally expensive target detection model on every frame, which greatly reduces the amount of computation and shortens the processing time.
For example, the hand may be detected in the first frame image to obtain a first region range of the hand in the first frame image; the first region range is expanded to obtain a second region range; hand detection is then performed on the frame after the first frame image within the second region range to obtain the region range of the hand in that frame, and so on, thereby tracking the hand across the multiple frames.
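A minimal sketch of this detect-then-track idea, assuming a hypothetical detector interface: the whole image is searched only for the first frame (or after the hand is lost), and each later frame is searched only inside an expanded version of the previous hand box.

```python
from typing import Callable, List, Optional, Tuple

Box = Tuple[float, float, float, float]            # (x, y, w, h) in pixels

def expand_box(box: Box, image_size: Tuple[int, int], factor: float = 1.5) -> Box:
    """Grow a box around its centre (first region range -> second region range)
    and clip it to the image bounds."""
    x, y, w, h = box
    img_w, img_h = image_size
    cx, cy = x + w / 2.0, y + h / 2.0
    new_w, new_h = w * factor, h * factor
    left = max(0.0, cx - new_w / 2.0)
    top = max(0.0, cy - new_h / 2.0)
    right = min(float(img_w), cx + new_w / 2.0)
    bottom = min(float(img_h), cy + new_h / 2.0)
    return (left, top, right - left, bottom - top)

def track_hand(frames, image_size: Tuple[int, int],
               detect_hand: Callable[[object, Box], Optional[Box]]) -> List[Optional[Box]]:
    """detect_hand(frame, search_region) is an assumed detector interface;
    only the first frame, or a frame where the hand was lost, is searched
    over the whole image."""
    boxes: List[Optional[Box]] = []
    search_region: Optional[Box] = None
    for frame in frames:
        region = search_region or (0.0, 0.0, float(image_size[0]), float(image_size[1]))
        box = detect_hand(frame, region)
        boxes.append(box)
        # Only search an expanded neighbourhood of the last hand box next time.
        search_region = expand_box(box, image_size) if box else None
    return boxes
```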
In this embodiment, an operation position of an operation point corresponding to a hand in a display interface is determined based on a position of the hand in an operation area, and a state of the operation point is determined according to a state of the hand.
In some optional embodiments, the determining, based on the position of the hand in the operation region, an operation position of an operation point corresponding to the hand in a display interface includes: determining a first relative positional relationship of the hand in the operation region; the first relative positional relationship represents a relative position of the hand in the operation region in a first coordinate system; and determining the operation position of the operation point corresponding to the hand in the display interface based on the first relative position relation. And the relative position relation of the operation position in the area where the display interface is located and the first relative position relation have a set matching relation.
For example, if the hand is at the center point in the operation area, the operation position of the operation point is also at the center point of the display interface.
In some optional embodiments, the determining a first relative positional relationship of the hand in the operation region includes: determining a first coordinate position of a specific key point of the hand in a first coordinate system corresponding to the operation area; the determining the operation position of the operation point corresponding to the hand in the display interface based on the first relative position relationship comprises: determining a second coordinate position corresponding to the first coordinate position in a second coordinate system corresponding to the display interface based on the transformation relation, and taking the second coordinate position as the operation position of the operation point; wherein the transformation relationship is a transformation relationship between the first coordinate system and the second coordinate system.
In this embodiment, on the one hand, a first coordinate system is established based on the operation area; for example, the lower left corner of the two-dimensional plane area of the operation area may be set as the origin of the first coordinate system, and the first coordinate system may be established with the horizontal direction and the vertical direction as the x-axis and the y-axis, respectively. On the other hand, a second coordinate system is established based on the area where the display interface is located; for example, the lower left corner of the area where the display interface is located may be taken as the origin of the second coordinate system, and the second coordinate system may be established with the horizontal direction and the vertical direction as the x axis and the y axis, respectively. Further, a transformation relation from the first coordinate system to the second coordinate system is determined, which can be realized by a transformation matrix. It will be appreciated that the transformation relationship (or transformation matrix) can transform point 1 on the first coordinate system to a second coordinate system, resulting in point 2 on the second coordinate system; the relative position relationship of the point 2 on the area (for example, the second coordinate system) where the display interface is located and the relative position relationship of the point 1 on the operation area (for example, the first coordinate system) have a set matching relationship.
In some alternative embodiments, the first relative position relationship of the hand in the operation area may refer to a first coordinate position of a specific key point of the hand in a first coordinate system; alternatively, the first relative positional relationship may be a proportional relationship between distances from the specific key point of the hand to the four sides of the operation region, for example, distances from the specific key point to the four sides of the operation region are a, b, c, and d, respectively.
The set matching relationship between the relative position of the operation position within the area where the display interface is located and the first relative positional relationship may mean that the second coordinate position of the operation position in the second coordinate system corresponding to the display interface is the same as the first coordinate position, or satisfies an equal-proportion relationship with it. For example, if the image size corresponding to the operation area is the same as the size of the display interface, the second coordinate position of the operation position in the second coordinate system is the same as the first coordinate position of the specific key point of the hand in the first coordinate system. If the two sizes differ, the proportional relationship between the size of the display interface and the image size corresponding to the operation area can be determined, and the x-axis and y-axis coordinates of the first coordinate position are scaled according to that proportional relationship to obtain the second coordinate position; in this case the second coordinate position and the first coordinate position satisfy an equal-proportion relationship.
Alternatively, the set matching relationship may mean that the ratios of the distances from the operation position to the four sides of the display interface equal the ratios of the distances from the specific key point to the four sides of the operation area. For example, if the distances from the specific key point to the four sides of the operation area are a, b, c and d respectively, the first relative positional relationship may be the proportional relationship among a, b, c and d; correspondingly, the distances from the operation position of the operation point to the four sides of the display interface also satisfy the proportional relationship among a, b, c and d.
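Under the assumption that the matching relationship is realized by preserving the relative position (equivalently, the distance ratios to the four sides), the mapping from the first coordinate system to the second reduces to normalizing the key-point coordinates within the operation region and rescaling them to the display size, as in the following sketch.

```python
def map_to_display(key_point, operation_region, display_size):
    """key_point: (x, y) pixel position of a stable hand key point (e.g. the
    wrist) in the image; operation_region: (x, y, w, h) in the same image;
    display_size: (width, height) of the display interface. Returns the
    operation position so that the relative position of the key point inside
    the operation region is preserved on the display."""
    px, py = key_point
    rx, ry, rw, rh = operation_region
    disp_w, disp_h = display_size

    # First relative positional relationship: normalized coordinates in [0, 1].
    u = (px - rx) / rw
    v = (py - ry) / rh
    # Clamp so a hand slightly outside the region maps to the display edge.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)

    # Second coordinate position: the same relative position on the display,
    # i.e. the distance ratios to the four sides are preserved.
    return (u * disp_w, v * disp_h)


# A hand at the centre of the operation region maps to the centre of the display.
assert map_to_display((200, 150), (100, 50, 200, 200), (1920, 1080)) == (960.0, 540.0)
```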
A second coordinate position corresponding to the first coordinate position is then determined in the display interface based on the transformation relationship; that is, a point in the operation area is mapped to the area where the display interface is located, so that interactive responses to elements in the display interface can be realized based on the position of the hand of the target object in the operation area. This avoids the situation in which, because the hand of the target object cannot reach positions corresponding to every part of the display interface, the target object would have to move or the image acquisition assembly would have to be adjusted before interactive responses to elements at all positions in the display interface could be realized.
In this embodiment, the specific key point of the hand is a more stable key point. The more stable key point may be a key point that is not easily changed in the process of changing the state of the hand, and for example, the specific key point of the hand may be a key point at the root of the palm (for example, the wrist) or a key point corresponding to the central point of the palm area. For example, the state of the hand may include a palm state and/or a fist state, and the like. The palm base or the palm center is relatively stable regardless of the state of the hand, and a large state change or displacement change or the like does not occur due to a change in the state of the hand. In practical application, since the state of the operation point needs to be determined according to the state of the hand, and then the response operation is executed according to the state of the operation point at the operation position, different response operations can be executed in different states of the hand.
For example, if the state of the hand is a fist-making state, the electronic device may perform a response operation of the touch at the corresponding operation position. For another example, if the state of the hand is a fist-making state and the hand is in a moving process, the electronic device may perform a response operation of touch and drag (like finger press and drag) at the corresponding operation position.
In some optional embodiments, the determining the state of the operation point according to the state of the hand comprises: determining the state of the operation point corresponding to the state of the hand part based on a mapping set obtained in advance; the mapping set comprises mapping relations between states of a plurality of groups of hands and states of the operation points respectively. Wherein the state of the operating point comprises: a non-touch screen state and/or a touch screen state. Of course, the state of the operation point in the present embodiment is not limited to the above-described example of the state, and may include a specified touch operation corresponding to a specified hand state, and the like.
In this embodiment, a mapping set may be pre-configured and stored in the electronic device, the mapping set comprising mapping relationships between a plurality of hand states and corresponding operation point states. For example, when the state of the hand is detected to be the palm state, the state of the operation point corresponding to the palm state can be determined by querying the mapping relationship, and is, for example, the non-touch-screen state. For another example, when the state of the hand is detected to be the fist-making state, the state of the operation point corresponding to the fist-making state can be determined by querying the mapping relationship, and is, for example, the touch-screen state; the electronic device can then execute the corresponding instruction based on the touch-screen state of the operation point.
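A minimal sketch of such a mapping set as a plain dictionary; the two entries follow the palm and fist examples above, and the fallback behaviour for unrecognized states is an assumption of the sketch.

```python
# Pre-configured mapping set: hand state -> operation point state.
# The two entries follow the palm / fist example in the text; the fallback
# for unrecognized gestures is an assumption added for illustration.
HAND_STATE_TO_POINT_STATE = {
    "palm": "no_touch",   # open hand: operation point does not touch the screen
    "fist": "touch",      # clenched fist: operation point touches the screen
}

def operation_point_state(hand_state: str) -> str:
    # Unknown hand states fall back to the non-touch state.
    return HAND_STATE_TO_POINT_STATE.get(hand_state, "no_touch")

print(operation_point_state("fist"))   # -> touch
print(operation_point_state("palm"))   # -> no_touch
```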
For example, the non-touch-screen state of the operation point can be understood as the state in which no mouse button is pressed in a mouse operation mode. The touch-screen state of the operation point can be understood as the state in which a mouse button is pressed (for example, the left mouse button) in the mouse operation mode, or as the state of a touch point when a finger or a stylus touches the touch screen in a touch operation mode.
According to the method and device of the embodiments of the present application, on the one hand, a mapping from the operation area to the area where the display interface is located is determined, the operation position of the operation point in the display interface is determined according to the position of the hand in the operation area, and the state of the operation point is determined according to the state of the hand, so that the operation point, in that state and at that operation position, executes the response operation corresponding to the state of the hand; a scheme in which a user interacts with the electronic device based on the state of the hand is thereby realized. On the other hand, because the operation area is determined first and points in the operation area are mapped to the area where the display interface is located, response operations for every element in the display interface can be realized based on the position of the user's hand within the operation area; this avoids the situation in which, because the hand of the target object cannot reach positions corresponding to every part of the display interface, the target object would have to move or the image acquisition component would have to be adjusted before responses to every element in the display interface could be realized.
Based on the foregoing embodiments, the embodiments of the present application further provide an interaction method. Fig. 3 is a flowchart illustrating a second interaction method according to an embodiment of the present application; as shown in fig. 3, on the basis of the method shown in fig. 1, the method further includes:
step 105: and displaying an operation identifier corresponding to the state of the operation point at the operation position in the display interface.
In this embodiment, while outputting the display interface, the electronic device displays, at the operation position of the operation point, the operation identifier corresponding to the current state of the operation point; different operation point states may correspond to different operation identifiers, so that the user can know the current state of the operation point.
For example, if the current state of the hand is the palm state, the state of the corresponding operation point may be recorded as a first state, and the corresponding operation identifier may be a hollow circle. If the current state of the hand is the fist-making state, the state of the corresponding operation point may be recorded as a second state, and the corresponding operation identifier may be a solid circle.
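For instance, the hollow and solid circle identifiers could be rendered as below; the use of OpenCV, the radius and the colour are assumptions of this sketch rather than requirements of the method.

```python
import numpy as np
import cv2  # assumed rendering backend for this sketch

def draw_operation_identifier(frame, position, point_state,
                              radius=12, color=(0, 255, 0)):
    """Draw a hollow circle for the non-touch state and a filled circle
    for the touch state at the operation position."""
    center = (int(position[0]), int(position[1]))
    thickness = -1 if point_state == "touch" else 2   # -1 fills the circle
    cv2.circle(frame, center, radius, color, thickness)
    return frame

canvas = np.zeros((1080, 1920, 3), dtype=np.uint8)    # stand-in display buffer
draw_operation_identifier(canvas, (960, 540), "touch")
```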
Based on the foregoing embodiments, the embodiments of the present application further provide an interaction method. Fig. 4 is a third schematic flowchart of an interaction method according to an embodiment of the present application; as shown in fig. 4, on the basis of the method shown in fig. 1 or fig. 2, the method further includes:
step 106: allocating a first identifier to the hand part and a second identifier to the operation point, and establishing a mapping relation between the first identifier and the second identifier, wherein the mapping relation is used for controlling the operation point to move in the display interface along with the movement of the hand part.
In this embodiment, after the hand of the target object in the image is recognized, a first identifier (also referred to as a tracking identifier) is assigned to the hand. If the image contains two hands of the target object, or hands of multiple target objects, each hand is assigned its own first identifier. For example, if the target object in the image has two hands, the left hand is assigned first identifier 1 and the right hand first identifier 2. Correspondingly, an operation point is determined in the display interface for each hand; the operation point corresponding to the left hand may then be assigned second identifier 1 and the operation point corresponding to the right hand second identifier 2, an association between first identifier 1 and second identifier 1 is established, and an association between first identifier 2 and second identifier 2 is established, so that the mapping relationship includes the association between first identifier 1 and second identifier 1 and the association between first identifier 2 and second identifier 2.
In this embodiment, the mapping relationship may be established after the first frame image of the multi-frame image is processed, that is, after the operation positions of the hand and the operation point are determined based on the first frame image; and in the process of tracking the hand based on the first frame image, the position of the hand in the operation area can be determined again based on the tracked hand, and then the operation point corresponding to the hand can be determined based on the mapping relation, so that the operation position of the operation point is adjusted. By establishing the mapping relation, interactive response operation based on a plurality of hands can be realized.
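A minimal sketch of this identifier bookkeeping for multiple hands, with assumed class and method names: each tracked hand receives a first identifier, each operation point a second identifier, and the mapping between them keeps each operation point following its own hand.

```python
class OperationPointRegistry:
    """Keeps the mapping between hand identifiers (tracking ids) and
    operation point identifiers, so each operation point follows its hand."""

    def __init__(self):
        self._hand_to_point = {}      # first identifier -> second identifier
        self._positions = {}          # second identifier -> operation position
        self._next_point_id = 1

    def register_hand(self, hand_id: int) -> int:
        # Allocate a second identifier the first time a hand id is seen.
        if hand_id not in self._hand_to_point:
            self._hand_to_point[hand_id] = self._next_point_id
            self._next_point_id += 1
        return self._hand_to_point[hand_id]

    def update(self, hand_id: int, operation_position):
        # Move the operation point associated with this hand.
        point_id = self.register_hand(hand_id)
        self._positions[point_id] = operation_position
        return point_id, operation_position


registry = OperationPointRegistry()
registry.update(1, (300, 200))   # left hand  -> operation point 1
registry.update(2, (900, 640))   # right hand -> operation point 2
```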
The interaction scheme of the present embodiment may be applied to the following application scenarios:
scene one:
User 1 is eating crayfish in a restaurant while browsing news on a smartphone, and both hands (or gloves) are covered in grease. With the technical solution of the embodiments of the present application, user 1 does not need to wipe their hands (or take off the gloves): the smartphone can execute response operations by detecting the specific state of user 1's hand, so user 1 can operate the phone in mid-air without leaving grease on it.
Scene two:
The five-year-old son of user 2 is used to operating a tablet computer through touch interaction, but has not learned to use a television remote controller and cannot interact with the television by touch. With the technical solution of the embodiments of the present application, the son of user 2 does not need a television remote controller and can interact with the television using specific hand states, as conveniently as with a tablet computer.
Scene three:
the problem that a large screen playing advertisements is arranged outdoors is solved, and factory personnel find that no input equipment (such as a mouse and a keyboard) exists after arriving, and the large screen does not support touch operation. By adopting the technical scheme of the embodiment of the application, the camera can be used for collecting the image containing the operator, and the input of the interactive instruction is realized based on the position and the state of the hand by identifying the position and the state of the hand of the operator in the image, so that the interactive response between the large screen and the camera is realized. In addition, the camera is externally connected to the large screen, and interesting interaction is provided for viewers.
The embodiment of the application also provides an interaction device. FIG. 5 is a first schematic diagram illustrating a structure of an interaction device according to an embodiment of the present disclosure; as shown in fig. 5, the apparatus includes: a first determining unit 21, a recognizing unit 22, a second determining unit 23, and an executing unit 24; wherein:
the first determining unit 21 is configured to obtain an image including a target object, and determine an operation region in the image;
the recognition unit 22 is configured to recognize a position of a hand of the target object in the image in the operation region and a state of the hand;
the second determining unit 23 is configured to determine, based on the position of the hand in the operation region, an operation position of an operation point corresponding to the hand in a display interface, and determine a state of the operation point according to the state of the hand;
the execution unit 24 is configured to execute a response operation based on the state of the operation point at the operation position.
In some optional embodiments of the present application, the first determining unit 21 is configured to identify a first region corresponding to at least part of a limb of the target object in the image; the at least part of the limb comprises at least a head; determining the operating region based on the first region.
In some optional embodiments of the present application, the first determining unit 21 is configured to perform reduction or enlargement processing on the first area based on a ratio of the first area to an area where the image is located, so as to obtain the operation area; or, the first region is taken as the operation region.
In some optional embodiments of the present application, the second determining unit 23 is configured to determine a first relative position relationship of the hand in the operation area; the first relative positional relationship represents a relative position of the hand in the operation region in a first coordinate system; and determining the operation position of the operation point corresponding to the hand in the display interface based on the first relative position relation.
In some optional embodiments of the present application, the relative position of the operation position within the area where the display interface is located has a set matching relationship with the first relative positional relationship.
In some optional embodiments of the present application, the second determining unit 23 is configured to determine a first coordinate position of a specific key point of the hand in the first coordinate system corresponding to the operation area; and to determine, based on a transformation relationship, a second coordinate position corresponding to the first coordinate position in a second coordinate system corresponding to the display interface, and take the second coordinate position as the operation position of the operation point; wherein the transformation relationship is a transformation relationship between the first coordinate system and the second coordinate system.
In some optional embodiments of the present application, the second determining unit 23 is configured to determine, based on a mapping set obtained in advance, the state of the operation point corresponding to the state of the hand; the mapping set comprises a plurality of mapping relationships, each between a hand state and a corresponding operation point state.
In some optional embodiments of the present application, as shown in fig. 6, the apparatus further includes a display unit 25, configured to display an operation identifier corresponding to the state of the operation point at the operation position in the display interface.
In some alternative embodiments of the present application, as shown in fig. 7, the apparatus further comprises an assigning unit 26 and a mapping unit 27; wherein, the assigning unit 26 is configured to assign a first identifier to the hand and assign a second identifier to the operation point;
the mapping unit 27 is configured to establish a mapping relationship between the first identifier and the second identifier, where the mapping relationship is used to control the operation point to move in the display interface along with the movement of the hand.
In some optional embodiments of the present application, the state of the hand comprises a palm state and/or a fist state; the states of the operating points include: a non-touch screen state and/or a touch screen state.
In the embodiment of the present disclosure, the first determining unit 21, the identifying unit 22, the second determining unit 23, the executing unit 24, the allocating unit 26, and the mapping unit 27 in the interaction apparatus may, in practical applications, be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Micro Control Unit (MCU), or a Field-Programmable Gate Array (FPGA); the display unit 25 in the interaction apparatus may, in practical applications, be implemented by a display screen or a display.
It should be noted that: in the interactive apparatus provided in the above embodiment, when performing the interactive process, only the division of the program modules is described as an example, and in practical applications, the process distribution may be completed by different program modules according to needs, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the process described above. In addition, the interaction apparatus and the interaction method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
The embodiment of the application also provides the electronic equipment. Fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application, and as shown in fig. 8, the electronic device includes a memory 32, a processor 31, and a computer program stored in the memory 32 and executable on the processor 31, and when the processor 31 executes the computer program, the steps of the method according to the embodiment of the present application are implemented.
Optionally, the electronic device may also include a multimedia component. The multimedia component provides a screen as an output interface between the electronic device and the user. Illustratively, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). In some embodiments, the multimedia component may further include a camera that may capture external images.
It will be appreciated that the various components in the electronic device are coupled together by a bus system 33. It will be appreciated that the bus system 33 is used to enable communications among the components of the connection. The bus system 33 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as bus system 33 in fig. 8.
It will be appreciated that the memory 32 can be either volatile memory or nonvolatile memory, and can also include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferroelectric random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory can be Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 32 described in the embodiments of the present disclosure is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed in the embodiments of the present disclosure may be applied to the processor 31 or implemented by the processor 31. The processor 31 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor 31 or by instructions in the form of software. The processor 31 may be a general purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 31 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present disclosure. A general purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the method disclosed in the embodiments of the present disclosure may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium being located in the memory 32; the processor 31 reads the information in the memory 32 and completes the steps of the foregoing methods in combination with its hardware.
In an exemplary embodiment, the electronic device may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general purpose processors, controllers, Micro Control Units (MCUs), microprocessors, or other electronic components for performing the foregoing methods.
In an exemplary embodiment, the present application further provides a computer readable storage medium, such as a memory 32, comprising a computer program, which is executable by a processor 31 of an electronic device to perform the steps of the foregoing method. The computer readable storage medium can be Memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface Memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
The computer-readable storage medium provided by the embodiment of the present application stores thereon a computer program, and the computer program, when executed by a processor, implements the steps of the gesture interaction processing method described in the embodiment of the present application.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
Alternatively, the integrated unit of the present disclosure may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (24)

1. An interaction method, characterized in that the method comprises:
obtaining an image containing a target object, and determining an operation area in the image;
identifying, in the operation area, a position of a hand of the target object in the image and a state of the hand;
determining an operation position of an operation point corresponding to the hand in a display interface based on the position of the hand in the operation area, and determining the state of the operation point according to the state of the hand;
and executing response operation based on the state of the operation point at the operation position.
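By way of illustration and not limitation, the following Python sketch outlines one possible realization of the flow recited in claim 1; the detector, mapping, and response callables are hypothetical placeholders supplied by the caller and are not part of the disclosure.

    from typing import Callable, Optional, Tuple

    Box = Tuple[int, int, int, int]    # (x, y, width, height) of the operation area
    Point = Tuple[float, float]

    def interaction_step(
        image,
        find_region: Callable[[object], Box],
        find_hand: Callable[[object, Box], Tuple[Optional[Point], str]],
        to_display: Callable[[Point, Box], Point],
        state_of: Callable[[str], str],
        respond: Callable[[Point, str], None],
    ) -> None:
        """One frame of the claimed flow: operation area -> hand -> operation point -> response."""
        region = find_region(image)                      # determine the operation area in the image
        hand_pos, hand_state = find_hand(image, region)  # position and state of the hand in that area
        if hand_pos is None:
            return                                       # no hand detected in this frame
        op_pos = to_display(hand_pos, region)            # operation position in the display interface
        op_state = state_of(hand_state)                  # operation-point state derived from the hand state
        respond(op_pos, op_state)                        # execute the response operation

For example, find_region could be a head detector whose output box is scaled into an operation area, and respond could inject a synthetic touch event at op_pos when op_state indicates a touch.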
2. The method of claim 1, wherein the determining the operation region in the image comprises:
identifying a first area in the image corresponding to at least part of a limb of the target object; the at least part of the limb comprises at least a head;
determining the operating region based on the first region.
3. The method of claim 2, wherein the determining the operating region based on the first region comprises:
based on the proportion of the first area in the area where the image is located, carrying out reduction or enlargement processing on the first area to obtain the operation area; or, the first region is taken as the operation region.
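By way of illustration and not limitation, the following sketch shows one way the first area could be reduced or enlarged about its center according to its proportion of the image area; the target_ratio value and the clamping policy are illustrative assumptions rather than requirements of the disclosure.

    def region_to_operation_area(first_region, image_size, target_ratio=0.25):
        # Scale the first region about its center so that its share of the image
        # area approaches target_ratio, then clamp the result to the image bounds.
        x, y, w, h = first_region
        img_w, img_h = image_size
        ratio = (w * h) / float(img_w * img_h)
        scale = (target_ratio / ratio) ** 0.5 if ratio > 0 else 1.0
        new_w, new_h = min(w * scale, img_w), min(h * scale, img_h)
        cx, cy = x + w / 2.0, y + h / 2.0
        nx = max(0.0, min(cx - new_w / 2.0, img_w - new_w))
        ny = max(0.0, min(cy - new_h / 2.0, img_h - new_h))
        return (nx, ny, new_w, new_h)

With a scale of 1.0 the first region is used as the operation area unchanged, which corresponds to the alternative recited in claim 3.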
4. The method according to any one of claims 1 to 3, wherein the determining the operation position of the operation point corresponding to the hand in the display interface based on the position of the hand in the operation area comprises:
determining a first relative positional relationship of the hand in the operation region; the first relative positional relationship represents a relative position of the hand in the operation region in a first coordinate system;
and determining the operation position of the operation point corresponding to the hand in the display interface based on the first relative position relation.
5. The method according to claim 4, wherein a relative positional relationship of the operation position in the area where the display interface is located has a set matching relationship with the first relative positional relationship.
6. The method of claim 4, wherein the determining a first relative positional relationship of the hand in the operation region comprises:
determining a first coordinate position of a specific key point of the hand in the first coordinate system corresponding to the operation area;
the determining the operation position of the operation point corresponding to the hand in the display interface based on the first relative position relationship comprises:
determining a second coordinate position corresponding to the first coordinate position in a second coordinate system corresponding to the display interface based on the transformation relation, and taking the second coordinate position as the operation position of the operation point; wherein the transformation relationship is a transformation relationship between the first coordinate system and the second coordinate system.
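By way of illustration and not limitation, the sketch below assumes a linear transformation between the first coordinate system of the operation area and the second coordinate system of the display interface; the linear form and the example numbers are assumptions made for clarity only.

    def to_display_position(key_point, op_region, display_size):
        # Map a key point of the hand from the operation-area coordinate system
        # (first coordinate system) to the display-interface coordinate system
        # (second coordinate system).
        px, py = key_point
        x, y, w, h = op_region
        u, v = (px - x) / w, (py - y) / h        # first relative positional relationship, in [0, 1]
        disp_w, disp_h = display_size
        return (u * disp_w, v * disp_h)          # second coordinate position, i.e. the operation position

For example, a key point at (320, 180) inside a 400x300 operation area whose origin is (200, 100) has a relative position of about (0.3, 0.27) and therefore maps to roughly (576, 288) on a 1920x1080 display interface.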
7. The method of any of claims 1 to 6, wherein determining the state of the operation point from the state of the hand comprises:
determining the state of the operation point corresponding to the state of the hand part based on a mapping set obtained in advance;
the mapping set comprises mapping relations between states of a plurality of groups of hands and states of the operation points respectively.
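By way of illustration and not limitation, the mapping set can be represented as a simple lookup table; the particular pairing of hand states and operation-point states below is an assumption chosen to be consistent with claims 10 and 11.

    # Mapping set: hand state -> operation-point state.
    STATE_MAPPING = {
        "palm": "non-touch",   # an open palm keeps the operation point in the non-touch-screen state
        "fist": "touch",       # a clenched fist switches the operation point to the touch-screen state
    }

    def operation_point_state(hand_state: str) -> str:
        # Unrecognized hand states fall back to the non-touch state.
        return STATE_MAPPING.get(hand_state, "non-touch")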
8. The method of any of claims 1 to 7, further comprising:
and displaying an operation identifier corresponding to the state of the operation point at the operation position in the display interface.
9. The method according to any one of claims 1 to 8, further comprising:
allocating a first identifier to the hand part and a second identifier to the operation point, and establishing a mapping relation between the first identifier and the second identifier, wherein the mapping relation is used for controlling the operation point to move in the display interface along with the movement of the hand part.
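By way of illustration and not limitation, the sketch below keeps the first identifier of each hand and the second identifier of its operation point in a dictionary so that the operation point can follow the movement of that hand across frames; all names are hypothetical.

    import itertools

    _hand_ids = itertools.count(1)     # source of first identifiers (hands)
    _point_ids = itertools.count(1)    # source of second identifiers (operation points)
    hand_to_point = {}                 # mapping relationship between the two identifiers

    def register_hand():
        # Allocate a first identifier to a newly detected hand and a second
        # identifier to its operation point, and record the mapping between them.
        hand_id, point_id = next(_hand_ids), next(_point_ids)
        hand_to_point[hand_id] = point_id
        return hand_id, point_id

    def move_operation_point(hand_id, new_position, operation_points):
        # When the hand moves, move the operation point mapped to it in the display interface.
        operation_points[hand_to_point[hand_id]] = new_position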
10. The method of any one of claims 1 to 9, wherein the state of the hand comprises a palm state and/or a fist state.
11. The method according to any of claims 1 to 10, wherein the state of the operating point comprises: a non-touch screen state and/or a touch screen state.
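By way of illustration and not limitation, the following sketch shows how a response operation might be derived from a change of the operation-point state at the operation position; the event vocabulary (touch_down, touch_move, touch_up, hover) is an illustrative assumption and is not recited in the claims.

    def dispatch_response(previous_state: str, current_state: str, position) -> str:
        # Translate a state transition of the operation point into a touch-style event.
        if previous_state == "non-touch" and current_state == "touch":
            return f"touch_down at {position}"
        if previous_state == "touch" and current_state == "non-touch":
            return f"touch_up at {position}"
        if current_state == "touch":
            return f"touch_move to {position}"
        return f"hover at {position}"

For example, a hand changing from the palm state to the fist state while moving would typically produce a touch_down followed by touch_move events, which an application could treat like a drag on a touch screen.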
12. An interaction apparatus, characterized in that the apparatus comprises: a first determining unit, an identifying unit, a second determining unit and an executing unit; wherein:
the first determining unit is used for obtaining an image containing a target object and determining an operation area in the image;
the identification unit is used for identifying the position of a hand of the target object in the image in the operation area and the state of the hand;
the second determining unit is used for determining the operation position of an operation point corresponding to the hand in a display interface based on the position of the hand in the operation area, and determining the state of the operation point according to the state of the hand;
the execution unit is used for executing response operation based on the state of the operation point at the operation position.
13. The apparatus according to claim 12, wherein the first determining unit is configured to identify a first area corresponding to at least a part of a limb of the target object in the image; the at least part of the limb comprises at least a head; determining the operating region based on the first region.
14. The apparatus according to claim 13, wherein the first determining unit is configured to perform reduction or enlargement processing on the first area based on a ratio of the first area to an area where the image is located, so as to obtain the operation area; or, the first region is taken as the operation region.
15. The apparatus according to any one of claims 12 to 14, wherein the second determination unit is configured to determine a first relative positional relationship of the hand in the operation area; the first relative positional relationship represents a relative position of the hand in the operation region in a first coordinate system; and determining the operation position of the operation point corresponding to the hand in the display interface based on the first relative position relation.
16. The apparatus according to claim 15, wherein a relative positional relationship of the operation position in the area where the display interface is located has a set matching relationship with the first relative positional relationship.
17. The apparatus according to claim 15, wherein the second determining unit is configured to determine a first coordinate position of a specific key point of the hand in the first coordinate system corresponding to the operation area; and determine, based on a transformation relationship, a second coordinate position corresponding to the first coordinate position in a second coordinate system corresponding to the display interface, and take the second coordinate position as the operation position of the operation point; wherein the transformation relationship is a transformation relationship between the first coordinate system and the second coordinate system.
18. The apparatus according to any one of claims 12 to 17, wherein the second determining unit is configured to determine a state of the operation point corresponding to the state of the hand based on a mapping set obtained in advance; the mapping set comprises mapping relations between states of a plurality of groups of hands and states of the operation points respectively.
19. The apparatus according to any one of claims 12 to 18, further comprising a display unit configured to display an operation identifier corresponding to the state of the operation point at the operation position in the display interface.
20. The apparatus according to any one of claims 12 to 19, wherein the apparatus further comprises an allocation unit and a mapping unit; wherein:
the distribution unit is used for distributing a first identifier for the hand part and distributing a second identifier for the operation point;
the mapping unit is used for establishing a mapping relation between the first identifier and the second identifier, and the mapping relation is used for controlling the operation point to move along with the movement of the hand in the display interface.
21. The apparatus of any one of claims 12 to 20, wherein the state of the hand comprises a palm state and/or a fist state.
22. The apparatus according to any one of claims 12 to 21, wherein the state of the operating point comprises: a non-touch screen state and/or a touch screen state.
23. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
24. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any of claims 1 to 11 are implemented when the program is executed by the processor.
CN202010899228.4A 2020-08-31 2020-08-31 Interaction method and device, electronic equipment and computer storage medium Pending CN112068698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010899228.4A CN112068698A (en) 2020-08-31 2020-08-31 Interaction method and device, electronic equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010899228.4A CN112068698A (en) 2020-08-31 2020-08-31 Interaction method and device, electronic equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN112068698A true CN112068698A (en) 2020-12-11

Family

ID=73666352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010899228.4A Pending CN112068698A (en) 2020-08-31 2020-08-31 Interaction method and device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112068698A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102193621A (en) * 2010-03-17 2011-09-21 三星电子(中国)研发中心 Vision-based interactive electronic equipment control system and control method thereof
CN107493495A (en) * 2017-08-14 2017-12-19 深圳市国华识别科技开发有限公司 Interaction locations determine method, system, storage medium and intelligent terminal
CN109358750A (en) * 2018-10-17 2019-02-19 Oppo广东移动通信有限公司 A kind of control method, mobile terminal, electronic equipment and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112486394A (en) * 2020-12-17 2021-03-12 南京维沃软件技术有限公司 Information processing method and device, electronic equipment and readable storage medium
CN113778217A (en) * 2021-09-13 2021-12-10 海信视像科技股份有限公司 Display apparatus and display apparatus control method
CN114253452A (en) * 2021-11-16 2022-03-29 深圳市普渡科技有限公司 Robot, man-machine interaction method, device and storage medium
CN114327058A (en) * 2021-12-24 2022-04-12 海信集团控股股份有限公司 Display device
CN114327058B (en) * 2021-12-24 2023-11-10 海信集团控股股份有限公司 Display apparatus
CN114384848A (en) * 2022-01-14 2022-04-22 北京市商汤科技开发有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN114911384A (en) * 2022-05-07 2022-08-16 青岛海信智慧生活科技股份有限公司 Mirror display and remote control method thereof
WO2024021857A1 (en) * 2022-07-27 2024-02-01 腾讯科技(深圳)有限公司 Image collection method and apparatus, computer device, and storage medium

Similar Documents

Publication Publication Date Title
CN112068698A (en) Interaction method and device, electronic equipment and computer storage medium
CN110471596B (en) Split screen switching method and device, storage medium and electronic equipment
US10021319B2 (en) Electronic device and method for controlling image display
EP2905679B1 (en) Electronic device and method of controlling electronic device
KR100783552B1 (en) Input control method and device for mobile phone
US20140300542A1 (en) Portable device and method for providing non-contact interface
CN109062464B (en) Touch operation method and device, storage medium and electronic equipment
JP2014211858A (en) System, method and program for providing user interface based on gesture
CN111527468A (en) Air-to-air interaction method, device and equipment
CN112714253A (en) Video recording method and device, electronic equipment and readable storage medium
CN113873151A (en) Video recording method and device and electronic equipment
CN112911147A (en) Display control method, display control device and electronic equipment
KR20150082032A (en) Electronic Device And Method For Controlling Thereof
CN109873980B (en) Video monitoring method and device and terminal equipment
CN112578967B (en) Chart information reading method and mobile terminal
WO2024012268A1 (en) Virtual operation method and apparatus, electronic device, and readable storage medium
CN110069126B (en) Virtual object control method and device
CN112068699A (en) Interaction method, interaction device, electronic equipment and storage medium
WO2023273071A1 (en) Image processing method and apparatus and electronic device
GB2590207A (en) Scenario control method and device, and electronic device
CN116301551A (en) Touch identification method, touch identification device, electronic equipment and medium
CN115480639A (en) Human-computer interaction system, human-computer interaction method, wearable device and head display device
CN113192127A (en) Image processing method and device, electronic equipment and storage medium
CN111766947A (en) Display method, display device, wearable device and medium
WO2019100547A1 (en) Projection control method, apparatus, projection interaction system, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201211

RJ01 Rejection of invention patent application after publication