CN115113722A - Man-machine interaction method and device and related electronic equipment

Info

Publication number
CN115113722A
Authority
CN
China
Prior art keywords
camera
instruction
motion track
video
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110307467.0A
Other languages
Chinese (zh)
Inventor
胡宏伟
王耀园
卢曰万
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110307467.0A priority Critical patent/CN115113722A/en
Publication of CN115113722A publication Critical patent/CN115113722A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a man-machine interaction method and device and related electronic equipment. The method is applied to a first electronic device including a camera and includes the following steps: acquiring a first instruction, where the first instruction is used to determine the start time of real-time recognition of the camera's motion trajectory from the video captured by the camera; recognizing the motion trajectory of the camera in real time using the video captured by the camera; acquiring a second instruction, where the second instruction is used to determine the end time of recognizing the camera's motion trajectory from the video captured by the camera; and when the recognized motion trajectory of the camera is a preset motion trajectory, determining a third instruction that matches the recognized motion trajectory. With the technical solution provided by the embodiment of the application, human-computer interaction can be carried out more conveniently.

Description

Man-machine interaction method and device and related electronic equipment
Technical Field
The present application relates to the field of human-computer interaction technologies, and in particular, to a human-computer interaction method and apparatus, and a related electronic device.
Background
With the development of human-computer interaction technology, terminals with cameras have become increasingly common, and a user's interactive instructions can be obtained to control such a terminal by recognizing the video captured by its camera.
For example, a terminal with a camera, such as a mobile phone, a tablet, or Augmented Reality (AR) glasses, may use the camera to capture a user's gesture and its movement track, and obtain the user's interactive instruction by recognizing the gesture, thereby achieving the purpose of interaction. For another example, after the camera of a smart watch recognizes a gesture, functions such as screen capture and page turning can be performed.
However, in actual use, the relative position of the camera and the gesture usually changes constantly, which affects the accuracy of the interaction. Moreover, gesture recognition requires the user's hand to stay within the camera's field of view, and the user must raise an arm to make the various gestures, which easily causes fatigue and degrades the experience.
Therefore, it is desirable to provide a human-computer interaction method, device, electronic device and computer-readable storage medium capable of solving the above problems.
Disclosure of Invention
The embodiments of the application provide a human-computer interaction method and device and related electronic equipment, with which human-computer interaction can be performed more conveniently.
In a first aspect, an embodiment of the present application provides a human-computer interaction method applied to a first electronic device including a camera. The method includes: acquiring a first instruction, where the first instruction is used to determine the start time of real-time recognition of the camera's motion trajectory from the video captured by the camera; recognizing the motion trajectory of the camera in real time using the video captured by the camera; acquiring a second instruction, where the second instruction is used to determine the end time of recognizing the camera's motion trajectory from the video captured by the camera; and when the recognized motion trajectory of the camera is a preset motion trajectory, determining a third instruction that matches the recognized motion trajectory.
The human-computer interaction method adopted in this embodiment of the application can recognize the video captured by the camera in real time, determine the motion trajectory of the camera from the recognition result, and, when the motion trajectory of the camera is a preset motion trajectory, obtain an instruction that matches it.
According to the first aspect, in a possible implementation of the human-computer interaction method, after determining the third instruction matching the recognized motion trajectory of the camera, the method further includes: executing the third instruction.
According to the first aspect, in a possible implementation of the human-computer interaction method, after determining the third instruction matching the recognized motion trajectory of the camera, the method further includes: sending the third instruction to a second electronic device. The second electronic device can be controlled by sending it instructions; for example, a television can be instructed to perform operations such as channel switching and volume adjustment.
According to the first aspect, in a possible implementation of the human-computer interaction method, recognizing the motion trajectory of the camera in real time using the video captured by the camera includes: identifying feature points from image frames in the video; and tracking the positions of the feature points to determine the motion trajectory of the camera.
In a second aspect, an embodiment of the present application provides a human-computer interaction device, including a camera and further including: a first acquisition unit, configured to acquire a first instruction, where the first instruction is used to determine the start time of real-time recognition of the camera's motion trajectory from the video captured by the camera; a recognition unit, configured to recognize the motion trajectory of the camera in real time using the video captured by the camera; a second acquisition unit, configured to acquire a second instruction, where the second instruction is used to determine the end time of recognizing the camera's motion trajectory from the video captured by the camera; and a determining unit, configured to determine, when the recognized motion trajectory of the camera is a preset motion trajectory, a third instruction that matches the recognized motion trajectory.
The human-computer interaction device adopted in this embodiment of the application can recognize the video captured by the camera in real time, determine the motion trajectory of the camera from the recognition result, and, when the motion trajectory is a preset motion trajectory, obtain an instruction that matches it. A control instruction can thus be determined simply by moving the camera along a preset motion trajectory; the operation is simple, and human-computer interaction can be carried out more conveniently.
According to the second aspect, in a possible implementation of the human-computer interaction device, the device further includes: an execution unit, configured to execute the third instruction after the determining unit determines the third instruction matching the recognized motion trajectory of the camera.
According to the second aspect, in a possible implementation of the human-computer interaction device, the device further includes: a sending unit, configured to send the third instruction to a second electronic device after the determining unit determines the third instruction matching the recognized motion trajectory of the camera.
According to the second aspect, in a possible implementation of the human-computer interaction device, the recognition unit is specifically configured to identify feature points from image frames in the video, track the positions of the feature points, and determine the motion trajectory of the camera.
In a third aspect, an embodiment of the present application provides an electronic device, including a camera, a memory, and a processor, where the memory is configured to store a preset motion trajectory and computer program code, and the computer program code includes instructions; when the instructions are executed by the processor, the electronic device performs the human-computer interaction method of the first aspect or of one or more of its possible implementations.
In a fourth aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform the human-computer interaction method of the first aspect or of one or more of its possible implementations.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code for execution by a device, where the program code, when executed, causes the device to perform the human-computer interaction method of the first aspect or of one or more of its possible implementations.
Any of the above possible designs may be combined with one another, provided that the combination does not violate natural laws.
With the technical solution provided by the embodiments of the present application, the video captured by the camera can be recognized in real time, the motion trajectory of the camera can be determined from the recognition result, and when the motion trajectory is a preset motion trajectory, an instruction matching it can be obtained.
Drawings
Fig. 1 is a flowchart illustrating a human-computer interaction method according to an embodiment of the present application.
Fig. 2 is a flowchart illustrating a human-computer interaction method according to another embodiment of the present application.
Fig. 3A is a schematic diagram of a corner point in an embodiment of the present application.
Fig. 3B is a schematic diagram of a corner point in an embodiment of the present application.
Fig. 3C is a schematic diagram of a corner point in an embodiment of the present application.
Fig. 3D is a schematic diagram of a corner point in an embodiment of the present application.
Fig. 3E is a schematic diagram of a corner point in an embodiment of the present application.
Fig. 4A is a schematic diagram of one frame of image of a video captured by a camera in an embodiment of the present application.
Fig. 4B is a schematic diagram of another frame of image of a video captured by a camera in an embodiment of the present application.
Fig. 4C is a schematic diagram of identifying corner points of the image shown in fig. 4A according to an embodiment of the present application.
Fig. 4D is a schematic diagram of identifying corner points of the image shown in fig. 4B in an embodiment of the present application.
Fig. 5 is a flowchart illustrating a human-computer interaction method according to another embodiment of the present application.
Fig. 6 is a flowchart illustrating a human-computer interaction method according to another embodiment of the present application.
Fig. 7 is a schematic structural diagram of a human-computer interaction device according to another embodiment of the present application.
Fig. 8 is a schematic structural diagram of a human-computer interaction device according to another embodiment of the present application.
Fig. 9 is a schematic structural diagram of a human-computer interaction device according to another embodiment of the present application.
Fig. 10 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
In this embodiment, the electronic device may be a terminal having a camera, such as augmented reality glasses with a camera, a smartphone, a portable wearable device (such as a smart watch), or a tablet computer. In actual use, the user sometimes needs to operate the electronic device. In the prior art, the user generally operates it directly by hand; for example, the user turns the pages displayed on a mobile phone by operating a control key. For another example, when augmented reality glasses communicate with a television, the television channel can be switched by operating the display interface or the keys of the glasses. Some prior art adopts gesture recognition and operates the controlled object by recognizing the user's operation gestures. These existing solutions are either cumbersome to operate or require the electronic device to recognize the user's gesture; during gesture recognition the gesture must stay within the shooting range of the camera, and the user must raise an arm to make the various gestures, which easily causes fatigue. The present application provides a human-computer interaction method that recognizes the video captured by the camera and determines the camera's motion trajectory from it; when the motion trajectory is a preset motion trajectory, an instruction matching that trajectory is determined. With this solution, the user only needs to move the camera along the trajectory matching the desired instruction. The operation is simple and convenient and facilitates better human-computer interaction. The technical solution of the present application is described below through specific embodiments.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a human-computer interaction method according to an embodiment of the present application, where the method may include the following steps.
Step 101, acquiring a first instruction, where the first instruction is used to determine the start time of real-time recognition of the camera's motion trajectory from the video captured by the camera.
In some possible implementations, the first instruction may be triggered by the user tapping a virtual key displayed on the display screen of the electronic device, by pressing a physical key of the electronic device, or by a predetermined action (such as raising a hand or drawing a circle) being recognized. After the first instruction is triggered, the video captured by the camera is recognized in real time to determine the motion trajectory of the camera.
In some possible implementations, the camera is not activated before the first instruction is acquired; in this case, acquiring the first instruction may trigger the camera to activate and start capturing video.
In some possible implementations, the start time of real-time recognition of the camera's motion trajectory from the video may be the moment at which the first instruction is acquired. For example, if the first instruction is acquired at 8:30, 8:30 may be used as the start time.
In some possible implementations, the start time may be a moment at which a preset duration has elapsed after the first instruction was acquired. For example, if the first instruction is acquired at 9:00 and the preset duration is 5 seconds, 9:00:05 may be used as the start time.
In some possible implementations, the start time may be a moment after the user performs a preset action following acquisition of the first instruction. For example, if the first instruction is acquired at 9:00 and the preset action is raising the arm, the moment at which the arm-raising action is recognized may be used as the start time.
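As a minimal sketch (not taken from the patent itself), the three start-time policies above can be expressed as a small resolver; the enum, the function name, and the 5-second delay are illustrative assumptions:

```python
import time
from enum import Enum, auto

class StartPolicy(Enum):
    IMMEDIATE = auto()     # start at the moment the first instruction arrives
    FIXED_DELAY = auto()   # start after a preset delay
    AFTER_ACTION = auto()  # start once a preset action (e.g. arm raise) is seen

def resolve_start_time(policy, instruction_time, delay_s=5.0, action_seen_time=None):
    """Return the moment at which trajectory recognition should begin.

    All names here are illustrative; the text only describes the policies.
    """
    if policy is StartPolicy.IMMEDIATE:
        return instruction_time
    if policy is StartPolicy.FIXED_DELAY:
        return instruction_time + delay_s
    if policy is StartPolicy.AFTER_ACTION:
        # caller supplies the time at which the preset action was recognized
        return action_seen_time
    raise ValueError(policy)

# e.g. first instruction at 9:00:00 with a 5 s preset delay -> start at 9:00:05
t0 = time.time()
print(resolve_start_time(StartPolicy.FIXED_DELAY, t0))  # t0 + 5.0
```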
Step 102, recognizing the motion trajectory of the camera in real time using the video captured by the camera.
In some possible implementations, when the video captured by the camera is recognized in real time, every frame in the video may be recognized, and the motion trajectory of the camera is determined by recognizing each frame.
In some possible implementations, the images to be recognized may be selected from the video in chronological order, for example every other frame; specifically, the 1st, 3rd, 5th, 7th, 9th, 11th frames, and so on, may be recognized to determine the motion trajectory of the camera.
In some possible implementations, the images to be recognized may be sampled irregularly from the video, in chronological order; for example, the 1st, 2nd, 5th, 11th, 13th, and 16th frames of the captured video may be recognized to determine the motion trajectory of the camera.
In some possible implementations, the images to be recognized may be selected in chronological order according to the moving speed of the camera, which may be obtained in various ways, for example from an acceleration sensor. When the camera moves quickly, more images are taken from the video for recognition (for example, every frame); when it moves slowly, fewer images are taken (for example, every other frame). For instance, if the moving speed of the camera exceeds a first threshold V1 during the first 3 seconds, every frame in those 3 seconds is recognized; if the speed then falls below a threshold V2, recognition may proceed every other frame. The motion trajectory of the camera is determined from the selected images.
In some possible implementations, the images to be recognized may be selected in chronological order according to both the moving speed of the camera and the current performance of the electronic device, where the speed may again be obtained from an acceleration sensor or similar device, and the performance may be the hardware capability, remaining battery, or memory occupancy of the device at the time. When the camera moves quickly and/or performance is high, more images are taken for recognition (for example, every frame); when the speed is slow and/or performance is low, fewer images are taken (for example, every other frame).
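A minimal sketch of the speed-adaptive sampling described above; the thresholds V1 and V2 come from the text, while the function and parameter names (and their values) are illustrative assumptions:

```python
def select_frame_indices(frame_speeds, v1=0.5, v2=0.2):
    """Pick which frame indices to recognize, given a per-frame camera speed
    estimate (e.g. from an acceleration sensor).

    Fast motion -> recognize every frame; slow motion -> every other frame.
    v1/v2 stand in for the first/second thresholds mentioned in the text;
    their values here are placeholders.
    """
    selected = []
    skip_next = False
    for i, speed in enumerate(frame_speeds):
        if speed >= v1:
            selected.append(i)          # fast: keep every frame
            skip_next = False
        elif speed < v2:
            if not skip_next:           # slow: keep every other frame
                selected.append(i)
            skip_next = not skip_next
        else:
            selected.append(i)          # intermediate speed: keep the frame
            skip_next = False
    return selected

print(select_frame_indices([0.8, 0.8, 0.1, 0.1, 0.1, 0.1]))  # [0, 1, 2, 4]
```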
Step 103, acquiring a second instruction, where the second instruction is used to determine the end time of recognizing the camera's motion trajectory from the video captured by the camera.
In some possible implementations, the second instruction may be generated automatically a predetermined time after the first instruction is acquired, for example 10 seconds later; for instance, if the first instruction is acquired at 21:31:10, the second instruction is generated automatically at 21:31:20.
In some possible implementations, the second instruction may be generated automatically a predetermined time after the start time determined by the first instruction, for example 10 seconds later; for instance, if the start time determined by the first instruction is 22:11:10, the second instruction is generated automatically at 22:11:20.
In some possible implementations, the second instruction may be generated by operating a virtual key. For example, a preset virtual key for ending recognition of the camera's motion trajectory may be displayed on the touch screen of a mobile phone; when the user taps it, the second instruction is triggered.
In some possible implementations, the second instruction may be generated by operating a physical key. For example, a key for ending recognition of the camera's motion trajectory may be provided on VR glasses; when the user presses it, the second instruction is generated.
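The timeout-based variants above can be sketched with a standard-library timer; a minimal illustration, assuming a `stop_recognition` callback and the 10-second duration used in the examples:

```python
import threading

def arm_stop_timer(stop_recognition, timeout_s=10.0):
    """Automatically generate the 'second instruction' timeout_s seconds
    after the first instruction (or after the determined start time)."""
    timer = threading.Timer(timeout_s, stop_recognition)
    timer.daemon = True
    timer.start()
    return timer  # can be cancelled if a key press ends recognition first

# usage: timer = arm_stop_timer(lambda: print("second instruction: stop"))
```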
Step 104, when the recognized motion trajectory of the camera is a preset motion trajectory, determining a third instruction that matches the recognized motion trajectory.
In some possible implementations, the preset trajectory may be a leftward movement, a rightward movement, an upward movement, a downward movement, a check-mark shape, a circle, a closed shape, a Chinese character, a letter, a number, another specific shape, or a combination or variation of at least one of these elements.
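As an illustration of matching a recognized trajectory against the simplest presets (the four straight-line movements), a net-displacement classifier could look like the sketch below; the threshold and names are assumptions, and the shape-based presets (check marks, characters, circles) would require a real shape matcher:

```python
def classify_trajectory(points, min_dist=50.0):
    """Classify a camera trajectory (list of (x, y) positions) as one of the
    four preset straight-line movements, or None if nothing matches."""
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if max(abs(dx), abs(dy)) < min_dist:    # too small to count as a gesture
        return None
    if abs(dx) >= abs(dy):
        return "move_right" if dx > 0 else "move_left"
    return "move_down" if dy > 0 else "move_up"

print(classify_trajectory([(0, 0), (40, 5), (120, 8)]))  # move_right
```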
The human-computer interaction method adopted in this embodiment of the application can recognize the video captured by the camera in real time, determine the motion trajectory of the camera from the recognition result, and, when the motion trajectory of the camera is a preset motion trajectory, obtain an instruction that matches it.
Example two
Referring to fig. 2, fig. 2 is a flowchart illustrating a human-computer interaction method according to an embodiment of the present application, where the method includes the following steps.
Step 201, acquiring a first instruction, where the first instruction is used to determine the start time of real-time recognition of the camera's motion trajectory from the video captured by the camera.
In some possible implementations, the first instruction may be triggered by the user tapping a virtual key displayed on the display screen of the electronic device, by pressing a physical key of the electronic device, or by a predetermined action (such as raising a hand or drawing a circle) being recognized. After the first instruction is triggered, the video captured by the camera is recognized in real time to determine the motion trajectory of the camera.
In some possible implementations, the camera is not activated before the first instruction is acquired; in this case, acquiring the first instruction may trigger the camera to activate and start capturing video.
In some possible implementations, the start time of real-time recognition of the camera's motion trajectory from the video may be the moment at which the first instruction is acquired. For example, if the first instruction is acquired at 8:30, 8:30 may be used as the start time.
In some possible implementations, the start time may be a moment at which a preset duration has elapsed after the first instruction was acquired. For example, if the first instruction is acquired at 9:00 and the preset duration is 5 seconds, 9:00:05 may be used as the start time.
In some possible implementations, the start time may be a moment after the user performs a preset action following acquisition of the first instruction. For example, if the first instruction is acquired at 9:00 and the preset action is raising the arm, the moment at which the arm-raising action is recognized may be used as the start time.
Step 2021, identifying feature points from image frames in the video.
Step 2022, tracking the positions of the feature points to determine the motion trajectory of the camera.
In some possible implementations, when the video captured by the camera is recognized in real time, every frame in the video may be recognized, and the motion trajectory of the camera is determined by recognizing each frame.
In some possible implementations, the images to be recognized may be selected from the video in chronological order, for example every other frame; specifically, the 1st, 3rd, 5th, 7th, 9th, 11th frames, and so on, may be recognized to determine the motion trajectory of the camera.
In some possible implementations, the images to be recognized may be sampled irregularly from the video, in chronological order; for example, the 1st, 2nd, 5th, 11th, 13th, and 16th frames of the captured video may be recognized to determine the motion trajectory of the camera.
In some possible implementations, the images to be recognized may be selected in chronological order according to the moving speed of the camera, which may be obtained in various ways, for example from an acceleration sensor. When the camera moves quickly, more images are taken from the video for recognition (for example, every frame); when it moves slowly, fewer images are taken (for example, every other frame). For instance, if the moving speed of the camera exceeds a first threshold V1 during the first 3 seconds, every frame in those 3 seconds is recognized; if the speed then falls below a threshold V2, recognition may proceed every other frame. The motion trajectory of the camera is determined from the selected images.
In some possible implementations, the images to be recognized may be selected in chronological order according to both the moving speed of the camera and the current performance of the electronic device, where the speed may again be obtained from an acceleration sensor or similar device, and the performance may be the hardware capability, remaining battery, or memory occupancy of the device at the time. When the camera moves quickly and/or performance is high, more images are taken for recognition (for example, every frame); when the speed is slow and/or performance is low, fewer images are taken (for example, every other frame).
In some possible implementations, a given image may be recognized by identifying feature points in it. For example, the feature points may be corner points, and corner points may be of various types, as shown in fig. 3A to 3E. The corner point in fig. 3A is the common point of two line segments meeting at a right angle. The corner point in fig. 3B is the unique common point of three line segments. The corner point in fig. 3C is the common point of two line segments forming a T shape. The corner point in fig. 3D is the common point of three line segments forming an arrow shape. The corner point in fig. 3E is the common point of two line segments forming an X shape.
For example, fig. 4A and 4B show two frames used to identify the motion trajectory of the camera; each contains an N-shaped pattern composed of four line segments. In fig. 4A the N-shaped pattern is in the upper left corner, and in fig. 4B it is in the lower left corner. Since the movement of the camera made the N-shaped pattern move downward in the image, it can be determined that the camera moved upward. Specifically, the corner points of the N-shaped pattern in fig. 4A and 4B may be identified, as shown in fig. 4C and 4D; the movement trajectory of the corner points is determined from their identified positions, and the movement trajectory of the camera is then derived from it. It should be understood that this is only one example of recognizing image frames to determine the camera's motion trajectory; other recognition methods may also be used in actual practice and are not enumerated here.
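A minimal OpenCV sketch of the corner-tracking idea in figs. 4A to 4D, assuming two consecutive grayscale frames are available; note that the camera's motion is opposite to the apparent motion of the tracked corners (the two OpenCV calls are standard, the rest is an illustrative assumption):

```python
import cv2
import numpy as np

def camera_motion_between(prev_gray, curr_gray):
    """Estimate the camera's 2-D motion between two grayscale frames by
    detecting corners in the first frame and tracking them into the second."""
    # detect corner feature points (e.g. the corners of the N-shaped pattern)
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                      qualityLevel=0.3, minDistance=7)
    if corners is None:
        return None
    # track the corners into the next frame with pyramidal Lucas-Kanade flow
    moved, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                corners, None)
    good_old = corners[status.flatten() == 1]
    good_new = moved[status.flatten() == 1]
    if len(good_new) == 0:
        return None
    # average displacement of the scene in the image ...
    scene_shift = np.mean(good_new - good_old, axis=0).ravel()
    # ... and the camera moves opposite to it
    # (pattern moves down in the image -> camera moved up)
    return -scene_shift  # (dx, dy) of the camera in image coordinates
```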
Step 203, acquiring a second instruction, where the second instruction is used to determine the end time of recognizing the camera's motion trajectory from the video captured by the camera.
In some possible implementations, the second instruction may be generated automatically a predetermined time after the first instruction is acquired, for example 10 seconds later; for instance, if the first instruction is acquired at 21:31:10, the second instruction is generated automatically at 21:31:20.
In some possible implementations, the second instruction may be generated automatically a predetermined time after the start time determined by the first instruction, for example 10 seconds later; for instance, if the start time determined by the first instruction is 22:11:10, the second instruction is generated automatically at 22:11:20.
In some possible implementations, the second instruction may be generated by operating a virtual key. For example, a preset virtual key for ending recognition of the camera's motion trajectory may be displayed on the touch screen of a mobile phone; when the user taps it, the second instruction is triggered.
In some possible implementations, the second instruction may be generated by operating a physical key. For example, a key for ending recognition of the camera's motion trajectory may be provided on VR glasses; when the user presses it, the second instruction is generated.
Step 204, when the recognized motion trajectory of the camera is a preset motion trajectory, determining a third instruction that matches the recognized motion trajectory.
In some possible implementations, the preset trajectory may be a leftward movement, a rightward movement, an upward movement, a downward movement, a check-mark shape, a circle, a closed shape, a Chinese character, a letter, a number, another specific shape, or a combination or variation of at least one of these elements.
The human-computer interaction method of this embodiment recognizes feature points in the images of the video captured by the camera in real time, determines the motion trajectory of the camera from the recognition result, and, when the motion trajectory is a preset motion trajectory, obtains an instruction matching it.
Example three
Referring to fig. 5, fig. 5 is a schematic flowchart of a human-computer interaction method according to an embodiment of the present application, in which the method may include the following steps:
Step 501, acquiring a first instruction, where the first instruction is used to determine the start time of real-time recognition of the camera's motion trajectory from the video captured by the camera.
In some possible implementations, the first instruction may be triggered by the user tapping a virtual key displayed on the display screen of the electronic device, by pressing a physical key of the electronic device, or by a predetermined action (such as raising a hand or drawing a circle) being recognized. After the first instruction is triggered, the video captured by the camera is recognized in real time to determine the motion trajectory of the camera.
In some possible implementations, the camera is not activated before the first instruction is acquired; in this case, acquiring the first instruction may trigger the camera to activate and start capturing video.
In some possible implementations, the start time of real-time recognition of the camera's motion trajectory from the video may be the moment at which the first instruction is acquired. For example, if the first instruction is acquired at 8:30, 8:30 may be used as the start time.
In some possible implementations, the start time may be a moment at which a preset duration has elapsed after the first instruction was acquired. For example, if the first instruction is acquired at 9:00 and the preset duration is 5 seconds, 9:00:05 may be used as the start time.
In some possible implementations, the start time may be a moment after the user performs a preset action following acquisition of the first instruction. For example, if the first instruction is acquired at 9:00 and the preset action is raising the arm, the moment at which the arm-raising action is recognized may be used as the start time.
Step 502, recognizing the motion trajectory of the camera in real time using the video captured by the camera.
In some possible implementations, when the video captured by the camera is recognized in real time, every frame in the video may be recognized, and the motion trajectory of the camera is determined by recognizing each frame.
In some possible implementations, the images to be recognized may be selected from the video in chronological order, for example every other frame; specifically, the 1st, 3rd, 5th, 7th, 9th, 11th frames, and so on, may be recognized to determine the motion trajectory of the camera.
In some possible implementations, the images to be recognized may be sampled irregularly from the video, in chronological order; for example, the 1st, 2nd, 5th, 11th, 13th, and 16th frames of the captured video may be recognized to determine the motion trajectory of the camera.
In some possible implementations, the images to be recognized may be selected in chronological order according to the moving speed of the camera, which may be obtained in various ways, for example from an acceleration sensor. When the camera moves quickly, more images are taken from the video for recognition (for example, every frame); when it moves slowly, fewer images are taken (for example, every other frame). For instance, if the moving speed of the camera exceeds a first threshold V1 during the first 3 seconds, every frame in those 3 seconds is recognized; if the speed then falls below a threshold V2, recognition may proceed every other frame. The motion trajectory of the camera is determined from the selected images.
In some possible implementations, the images to be recognized may be selected in chronological order according to both the moving speed of the camera and the current performance of the electronic device, where the speed may again be obtained from an acceleration sensor or similar device, and the performance may be the hardware capability, remaining battery, or memory occupancy of the device at the time. When the camera moves quickly and/or performance is high, more images are taken for recognition (for example, every frame); when the speed is slow and/or performance is low, fewer images are taken (for example, every other frame).
Step 503, acquiring a second instruction, where the second instruction is used to determine the end time of recognizing the camera's motion trajectory from the video captured by the camera.
In some possible implementations, the second instruction may be generated automatically a predetermined time after the first instruction is acquired, for example 10 seconds later; for instance, if the first instruction is acquired at 21:31:10, the second instruction is generated automatically at 21:31:20.
In some possible implementations, the second instruction may be generated automatically a predetermined time after the start time determined by the first instruction, for example 10 seconds later; for instance, if the start time determined by the first instruction is 22:11:10, the second instruction is generated automatically at 22:11:20.
In some possible implementations, the second instruction may be generated by operating a virtual key. For example, a preset virtual key for ending recognition of the camera's motion trajectory may be displayed on the touch screen of a mobile phone; when the user taps it, the second instruction is triggered.
In some possible implementations, the second instruction may be generated by operating a physical key. For example, a key for ending recognition of the camera's motion trajectory may be provided on VR glasses; when the user presses it, the second instruction is generated.
Step 504, when the recognized motion trajectory of the camera is a preset motion trajectory, determining a third instruction that matches the recognized motion trajectory.
In some possible implementations, the preset trajectory may be a leftward movement, a rightward movement, an upward movement, a downward movement, a check-mark shape, a circle, a closed shape, a Chinese character, a letter, a number, another specific shape, or a combination or variation of at least one of these elements.
For example, if the current application scenario performs page-turning operations by recognizing the camera's motion trajectory, a rightward motion trajectory and a leftward motion trajectory can be preset, where the instruction corresponding to the rightward trajectory may be set to turning to the previous page and the instruction corresponding to the leftward trajectory may be set to turning to the next page.
Step 505, executing the third instruction.
Taking the page-turning scenario of step 504 as an example: if the obtained third instruction is to turn forward one page, the content shown on the display screen is switched to the previous page; if the obtained third instruction is to turn back one page, the content shown on the display screen is switched to the next page.
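A sketch of how the page-turning scenario above could bind preset trajectories to instructions and execute them locally; the mapping keys, the instruction strings, and the `Display` stub are illustrative assumptions:

```python
class Display:
    """Hypothetical display stub standing in for the real screen controller."""
    def __init__(self):
        self.current_page = 1

    def show_page(self, n):
        self.current_page = max(1, n)
        print(f"showing page {self.current_page}")

# preset trajectory -> third instruction, as in the page-turning scenario
TRAJECTORY_TO_INSTRUCTION = {
    "move_right": "page_previous",   # rightward trajectory -> previous page
    "move_left": "page_next",        # leftward trajectory -> next page
}

def execute_instruction(instruction, display):
    # first-aspect behaviour: the device executes the third instruction itself
    if instruction == "page_previous":
        display.show_page(display.current_page - 1)
    elif instruction == "page_next":
        display.show_page(display.current_page + 1)

display = Display()
execute_instruction(TRAJECTORY_TO_INSTRUCTION.get("move_left"), display)  # page 2
```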
Example four
Referring to fig. 6, fig. 6 is a schematic flowchart of a human-computer interaction method according to an embodiment of the present application; in this embodiment, the method may include the following steps:
Step 601, acquiring a first instruction, where the first instruction is used to determine the start time of real-time recognition of the camera's motion trajectory from the video captured by the camera.
In some possible implementations, the first instruction may be triggered by the user tapping a virtual key displayed on the display screen of the electronic device, by pressing a physical key of the electronic device, or by a predetermined action (such as raising a hand or drawing a circle) being recognized. After the first instruction is triggered, the video captured by the camera is recognized in real time to determine the motion trajectory of the camera.
In some possible implementations, the camera is not activated before the first instruction is acquired; in this case, acquiring the first instruction may trigger the camera to activate and start capturing video.
In some possible implementations, the start time of real-time recognition of the camera's motion trajectory from the video may be the moment at which the first instruction is acquired. For example, if the first instruction is acquired at 8:30, 8:30 may be used as the start time.
In some possible implementations, the start time may be a moment at which a preset duration has elapsed after the first instruction was acquired. For example, if the first instruction is acquired at 9:00 and the preset duration is 5 seconds, 9:00:05 may be used as the start time.
In some possible implementations, the start time may be a moment after the user performs a preset action following acquisition of the first instruction. For example, if the first instruction is acquired at 9:00 and the preset action is raising the arm, the moment at which the arm-raising action is recognized may be used as the start time.
Step 602, recognizing the motion trajectory of the camera in real time using the video captured by the camera.
In some possible implementations, when the video captured by the camera is recognized in real time, every frame in the video may be recognized, and the motion trajectory of the camera is determined by recognizing each frame.
In some possible implementations, the images to be recognized may be selected from the video in chronological order, for example every other frame; specifically, the 1st, 3rd, 5th, 7th, 9th, 11th frames, and so on, may be recognized to determine the motion trajectory of the camera.
In some possible implementations, the images to be recognized may be sampled irregularly from the video, in chronological order; for example, the 1st, 2nd, 5th, 11th, 13th, and 16th frames of the captured video may be recognized to determine the motion trajectory of the camera.
In some possible implementations, the images to be recognized may be selected in chronological order according to the moving speed of the camera, which may be obtained in various ways, for example from an acceleration sensor. When the camera moves quickly, more images are taken from the video for recognition (for example, every frame); when it moves slowly, fewer images are taken (for example, every other frame). For instance, if the moving speed of the camera exceeds a first threshold V1 during the first 3 seconds, every frame in those 3 seconds is recognized; if the speed then falls below a threshold V2, recognition may proceed every other frame. The motion trajectory of the camera is determined from the selected images.
In some possible implementations, the images to be recognized may be selected in chronological order according to both the moving speed of the camera and the current performance of the electronic device, where the speed may again be obtained from an acceleration sensor or similar device, and the performance may be the hardware capability, remaining battery, or memory occupancy of the device at the time. When the camera moves quickly and/or performance is high, more images are taken for recognition (for example, every frame); when the speed is slow and/or performance is low, fewer images are taken (for example, every other frame).
Step 603, acquiring a second instruction, where the second instruction is used to determine the end time of recognizing the camera's motion trajectory from the video captured by the camera.
In some possible implementations, the second instruction may be generated automatically a predetermined time after the first instruction is acquired, for example 10 seconds later; for instance, if the first instruction is acquired at 21:31:10, the second instruction is generated automatically at 21:31:20.
In some possible implementations, the second instruction may be generated automatically a predetermined time after the start time determined by the first instruction, for example 10 seconds later; for instance, if the start time determined by the first instruction is 22:11:10, the second instruction is generated automatically at 22:11:20.
In some possible implementations, the second instruction may be generated by operating a virtual key. For example, a preset virtual key for ending recognition of the camera's motion trajectory may be displayed on the touch screen of a mobile phone; when the user taps it, the second instruction is triggered.
In some possible implementations, the second instruction may be generated by operating a physical key. For example, a key for ending recognition of the camera's motion trajectory may be provided on VR glasses; when the user presses it, the second instruction is generated.
Step 604, when the recognized motion trajectory of the camera is a preset motion trajectory, determining a third instruction that matches the recognized motion trajectory.
In some possible implementations, the preset trajectory may be a leftward movement, a rightward movement, an upward movement, a downward movement, a check-mark shape, a circle, a closed shape, a Chinese character, a letter, a number, another specific shape, or a combination or variation of at least one of these elements.
The instruction corresponding to a preset motion trajectory may be set in advance. For example, if the current application scenario is Augmented Reality (AR) glasses controlling a television to switch channels, and the channel-switching operation is performed by recognizing the camera's motion trajectory, the preset motion trajectories may include an upward movement and a downward movement. The instruction corresponding to the upward trajectory may be set to switching the television to the previous channel, and the instruction corresponding to the downward trajectory may be set to switching the television to the next channel.
Step 605, sending the third instruction to the second electronic device.
Taking the scenario of step 604, in which the AR glasses switch the channel of the program played on the television, as an example: if the obtained third instruction is to switch to the previous channel, the television is controlled to switch to the previous channel; if the obtained third instruction is to switch to the next channel, the television is controlled to switch to the next channel.
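A sketch of this dispatch step, sending the matched instruction from the glasses to the television over a socket; the patent does not specify a transport, so the JSON wire format, address, and port are illustrative assumptions:

```python
import json
import socket

def send_instruction(instruction, host="192.168.1.50", port=9000):
    """Send the third instruction (e.g. 'channel_up' / 'channel_down')
    to the second electronic device, such as a television."""
    payload = json.dumps({"instruction": instruction}).encode("utf-8")
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall(payload)

# e.g. upward camera trajectory matched -> switch the TV to the previous channel
# send_instruction("channel_up")
```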
The human-computer interaction method adopted in this embodiment of the application can recognize the video captured by the camera in real time, determine the motion trajectory of the camera from the recognition result, obtain the instruction matching the trajectory when it is a preset motion trajectory, and then control the second electronic device through the obtained instruction. With this method, a control instruction for the second electronic device can be determined simply by moving the camera along a preset trajectory; the operation is simple, and human-computer interaction can be carried out more conveniently.
Example five
Referring to fig. 7, fig. 7 is a schematic structural diagram of a human-computer interaction device 700 according to an embodiment of the present application. The human-computer interaction device 700 includes: a first acquisition unit 701, a recognition unit 702, a second acquisition unit 703, and a determining unit 704. The first acquisition unit 701 is configured to acquire a first instruction, where the first instruction is used to determine the start time of real-time recognition of the camera's motion trajectory from the video captured by the camera. The recognition unit 702 is configured to recognize the motion trajectory of the camera in real time using the video captured by the camera. The second acquisition unit 703 is configured to acquire a second instruction, where the second instruction is used to determine the end time of recognizing the camera's motion trajectory from the video captured by the camera. The determining unit 704 is configured to determine, when the recognized motion trajectory of the camera is a preset motion trajectory, a third instruction matching the recognized motion trajectory. The human-computer interaction device 700 can implement the methods of the first and second embodiments; for the implementation of each unit, refer to the descriptions of the first and second embodiments, which are not repeated here.
Example six
Referring to fig. 8, fig. 8 is a schematic structural diagram of a human-computer interaction device 800 according to an embodiment of the present application. The human-computer interaction device 800 includes: a first acquisition unit 801, a recognition unit 802, a second acquisition unit 803, a determining unit 804, and an execution unit 805. The first acquisition unit 801 is configured to acquire a first instruction, where the first instruction is used to determine the start time of real-time recognition of the camera's motion trajectory from the video captured by the camera. The recognition unit 802 is configured to recognize the motion trajectory of the camera in real time using the video captured by the camera. The second acquisition unit 803 is configured to acquire a second instruction, where the second instruction is used to determine the end time of recognizing the camera's motion trajectory from the video captured by the camera. The determining unit 804 is configured to determine, when the recognized motion trajectory of the camera is a preset motion trajectory, a third instruction matching the recognized motion trajectory. The execution unit 805 is configured to execute the third instruction after the determining unit 804 determines it. The human-computer interaction device 800 can implement the method of the third embodiment; for the implementation of each unit, refer to the description of the third embodiment, which is not repeated here.
EXAMPLE seven
Referring to fig. 9, fig. 9 is a schematic structural diagram of a human-computer interaction device 900 according to an embodiment of the present application. The human-computer interaction device 900 includes a first acquisition unit 901, a recognition unit 902, a second acquisition unit 903, a determination unit 904, and a sending unit 905. The first acquisition unit 901 is configured to acquire a first instruction, where the first instruction is used to determine the start moment of recognizing the motion track of the camera in real time by using the video shot by the camera. The recognition unit 902 is configured to recognize the motion track of the camera in real time by using the video shot by the camera. The second acquisition unit 903 is configured to acquire a second instruction, where the second instruction is used to determine the end moment of recognizing the motion track of the camera by using the video shot by the camera. The determination unit 904 is configured to determine, when the recognized motion track of the camera is a preset motion track, a third instruction matching the recognized motion track. The sending unit 905 is configured to send the third instruction to a second electronic device after the determination unit 904 determines it. The human-computer interaction device 900 can implement the method of the fourth embodiment; for the implementation of each unit, refer to the description of that embodiment, which is not repeated here.
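One way the sending unit 905 might forward the third instruction is over a plain TCP connection, as sketched below; the address, port, and JSON wire format are assumptions, since the embodiment does not specify a transport.

# Minimal sketch of sending unit 905: forward the third instruction to a
# second electronic device. Host, port, and wire format are assumed.
import json
import socket

def send_third_instruction(instruction: str,
                           host: str = "192.168.1.50", port: int = 9000) -> None:
    payload = json.dumps({"instruction": instruction}).encode("utf-8")
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall(payload)    # the second device executes on receipt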
EXAMPLE eight
Referring to fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 1000 includes: a radio frequency unit 1010, a memory 1020, an input unit 1030, a camera 1040, an audio circuit 1050, a processor 1060, an external interface 1070, and a power supply 1080. The input unit 1030 includes a touch screen 1031 and other input devices 1032, and the audio circuit 1050 includes a speaker 1051, a microphone 1052, and an earphone jack 1053. The touch screen 1031 may be a display screen with a touch function. In this embodiment, a user may trigger the first instruction by tapping a shooting key displayed on the touch screen 1031, thereby determining the start moment of recognizing the motion track of the camera 1040 in real time by using the video shot by the camera 1040. After acquiring the first instruction, the processor 1060 recognizes the motion track of the camera 1040 in real time by using the video shot by the camera 1040. The user may trigger the second instruction by tapping a stop-shooting key displayed on the touch screen 1031; after acquiring the second instruction, the processor 1060 determines the end moment of recognizing the motion track of the camera 1040 by using the video shot by the camera 1040. When the recognized motion track of the camera 1040 is a preset motion track, the processor 1060 determines a third instruction matching the recognized motion track.
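The touch-triggered flow of this embodiment can be pictured as the loop below, a sketch only: the key callbacks stand in for taps on the touch screen 1031, cv2.VideoCapture stands in for camera 1040, and the estimate_camera_step helper is the per-frame tracking routine sketched after the next paragraph.

# Minimal sketch of Example eight's control flow: a tap starts recognition
# (first instruction), frames update the track while it runs, and a second
# tap stops it (second instruction).
import cv2

recognizing = False
track = []            # accumulated per-frame camera displacements
prev_gray = None

def on_shoot_key():   # first instruction: start moment
    global recognizing, track
    recognizing, track = True, []

def on_stop_key():    # second instruction: end moment
    global recognizing
    recognizing = False

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if recognizing and prev_gray is not None:
        track.append(estimate_camera_step(prev_gray, gray))
    prev_gray = gray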
With the technical solution provided by this embodiment of the application, the video shot by the camera can be recognized in real time, the motion track of the camera can be determined from the recognition result, and, when that motion track is a preset motion track, the instruction matched with it can be obtained.
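How the motion track is recovered from the video is not fixed by the embodiment; one conventional realization, sketched here under that assumption, tracks feature points between consecutive frames with OpenCV's pyramidal Lucas-Kanade optical flow and takes the camera's motion to be the opposite of the scene's apparent motion.

# Minimal sketch: estimate the camera's per-frame displacement by tracking
# feature points across consecutive frames. Algorithm choice and parameter
# values are assumptions.
import cv2
import numpy as np

def estimate_camera_step(prev_gray, cur_gray):
    """Return the camera's (dx, dy) between two consecutive gray frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return (0.0, 0.0)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.reshape(-1) == 1
    if not good.any():
        return (0.0, 0.0)
    flow = (nxt - pts).reshape(-1, 2)[good]
    dx, dy = np.median(flow, axis=0)    # median is robust to moving objects
    return (float(-dx), float(-dy))     # scene moves opposite to the camera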
An embodiment of the present application further provides an electronic device, including: a camera, a memory, and a processor, where the memory is configured to store a preset motion track and computer program code, and the computer program code includes instructions; when executed by the processor, the instructions cause the electronic device to perform some or all of the steps of the human-computer interaction method of any one of the above method embodiments.
An embodiment of the present application further provides a computer program product which, when run on a computer, causes the computer to execute some or all of the steps of the human-computer interaction method described in any one of the above method embodiments.
The explanations of the technical features in the above method embodiments, and the extensions to their various implementation forms, also apply to the execution of the method in the apparatus, and are not repeated in the apparatus embodiments.
It should be understood that the division of the modules in the above apparatus is only a logical division; in an actual implementation, the modules may be wholly or partially integrated into one physical entity, or may be physically separate. For example, each of the above modules may be a separately arranged processing element, may be integrated in a chip of the terminal, or may be stored in a storage element of the controller in the form of program code, with a processing element of the processor calling and executing the functions of the modules. In addition, the modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in the processor element or by instructions in the form of software. The processing element may be a general-purpose processor, such as a central processing unit (CPU), or may be one or more integrated circuits configured to implement the above method, for example: one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs).
It is to be understood that the terms "first", "second", and the like in the description, the claims, and the drawings are used to distinguish between similar elements and are not necessarily for describing a particular sequential or chronological order. It will be appreciated that data so termed may be interchanged under appropriate circumstances, so that the embodiments described here may be practiced in orders other than those illustrated or described. Moreover, the terms "comprises", "comprising", and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to the steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to such a process, method, system, article, or apparatus.
While the invention has been described with reference to a number of illustrative embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A human-computer interaction method, applied to a first electronic device comprising a camera, the method comprising the following steps:
acquiring a first instruction, wherein the first instruction is used for determining the starting moment of the motion track of the camera which is identified in real time by using the video shot by the camera;
identifying the motion track of the camera in real time by using the video shot by the camera;
acquiring a second instruction, wherein the second instruction is used for determining the end time of identifying the motion track of the camera by using the video shot by the camera;
and under the condition that the recognized motion track of the camera is a preset motion track, determining a third instruction matched with the recognized motion track of the camera.
2. The method of claim 1, wherein after determining the third instruction that matches the identified motion trajectory of the camera, the method further comprises:
executing the third instruction.
3. The method of claim 1, wherein after determining the third instruction that matches the identified motion trajectory of the camera, the method further comprises:
and sending the third instruction to a second electronic device.
4. The method according to any one of claims 1 to 3, wherein the identifying the motion track of the camera in real time by using the video shot by the camera comprises:
identifying feature points from image frames in the video;
and tracking the positions of the feature points to determine the motion track of the camera.
5. A human-computer interaction device comprising a camera, the device further comprising:
the first acquisition unit is used for acquiring a first instruction, wherein the first instruction is used for determining the starting moment of the motion track of the camera which is identified in real time by using the video shot by the camera;
the identification unit is used for identifying the motion track of the camera in real time by utilizing the video shot by the camera;
the second acquisition unit is used for acquiring a second instruction, wherein the second instruction is used for determining the end time of identifying the motion track of the camera by using the video shot by the camera;
the determining unit is used for determining a third instruction matched with the recognized motion track of the camera under the condition that the recognized motion track of the camera is a preset motion track.
6. The apparatus of claim 5, further comprising:
and the execution unit is used for executing the third instruction after the determining unit determines the third instruction matched with the recognized motion track of the camera.
7. The apparatus of claim 5, further comprising:
and the sending unit is used for sending the third instruction to the second electronic device after the determining unit determines the third instruction matched with the recognized motion track of the camera.
8. The device according to any one of claims 5 to 7, wherein
the identification unit is specifically configured to identify feature points from image frames in the video, track the positions of the feature points, and determine the motion track of the camera.
9. An electronic device, comprising: the device comprises a camera, a memory and a processor, wherein the memory is used for storing a preset motion track and computer program codes, and the computer program codes comprise instructions; the instructions, when executed by the processor, cause the electronic device to perform the method of any of claims 1-4.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program code for execution by a device, which when executed performs the method of any of claims 1-4.
CN202110307467.0A 2021-03-23 2021-03-23 Man-machine interaction method and device and related electronic equipment Pending CN115113722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110307467.0A 2021-03-23 2021-03-23 Man-machine interaction method and device and related electronic equipment

Publications (1)

Publication Number Publication Date
CN115113722A 2022-09-27

Family

ID=83323327



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination