CN114879898A - Control method, device, equipment and storage medium


Info

Publication number: CN114879898A
Authority: CN (China)
Prior art keywords: motion information, mounted terminal, information, gesture motion, distance
Legal status: Pending
Application number: CN202210625674.5A
Other languages: Chinese (zh)
Inventors: 代黎明, 张伟玮
Current Assignee: Hubei Xingji Shidai Technology Co., Ltd.
Original Assignee: Hubei Xingji Shidai Technology Co., Ltd.
Application filed by Hubei Xingji Shidai Technology Co., Ltd.
Priority to CN202210625674.5A

Classifications

    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F9/451 Execution arrangements for user interfaces

Abstract

The invention discloses a control method, a control apparatus, a control device, and a storage medium. The method includes the following steps: acquiring distance information collected by each distance sensor and the time information corresponding to the distance information; determining gesture motion information according to the distance information collected by each distance sensor and the corresponding time information; and generating a control instruction according to the gesture motion information and controlling the current interface of the intelligent head-mounted terminal based on the control instruction. This technical scheme solves the problem that, because the intelligent head-mounted terminal is light, interacting through the touch area of a touch temple makes the imaging of the optical-mechanical system shake; it enables interaction without touching the intelligent head-mounted terminal and improves the user's viewing experience.

Description

Control method, device, equipment and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a control method, an apparatus, a device, and a storage medium.
Background
With the development of science and technology, AR (Augmented Reality) intelligent head-mounted terminals, VR (Virtual Reality) intelligent head-mounted terminals, MR (Mixed Reality) intelligent head-mounted terminals, and future virtual-real fusion intelligent head-mounted terminals have attracted increasing attention.
In the prior art, interaction with an intelligent head-mounted terminal is usually realized through a touch area on a touch temple. To improve wearing comfort, intelligent head-mounted terminals are kept light (generally under 100 g), so when the user touches the temple the terminal easily shakes; the imaging of the optical-mechanical system then shakes with it, degrading the user's viewing experience.
Disclosure of Invention
Embodiments of the present invention provide a control method, an apparatus, a device, and a storage medium to solve the problem that, because the intelligent head-mounted terminal is light, interacting through the touch area of a touch temple makes the optical-mechanical imaging shake, and to enable interaction without touching the intelligent head-mounted terminal, thereby improving the user's viewing experience.
According to an aspect of the present invention, there is provided a control method applied to an intelligent head-mounted terminal, the intelligent head-mounted terminal including an intelligent head-mounted terminal body, at least one temple connected to the body, and at least two distance sensors arranged on the temple. The method includes:
acquiring distance information acquired by each distance sensor and time information corresponding to the distance information;
determining gesture motion information according to the distance information acquired by each distance sensor and the time information corresponding to the distance information;
and generating a control instruction according to the gesture motion information, and controlling the current interface of the intelligent head-mounted terminal based on the control instruction.
According to another aspect of the present invention, there is provided a control apparatus including:
the information acquisition module is used for acquiring distance information acquired by each distance sensor and time information corresponding to the distance information;
the gesture motion information determining module is used for determining gesture motion information according to the distance information acquired by each distance sensor and the time information corresponding to the distance information;
and the control module is used for generating a control instruction according to the gesture motion information and controlling the current interface of the intelligent head-mounted terminal based on the control instruction.
According to another aspect of the present invention, there is provided an intelligent headset terminal including:
the intelligent head-wearing terminal comprises an intelligent head-wearing terminal body, at least one glasses leg connected with the intelligent head-wearing terminal body, at least two distance sensors arranged on the glasses leg and at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor to enable the at least one processor to perform the control method according to any embodiment of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement the control method according to any one of the embodiments of the present invention when the computer instructions are executed.
According to the embodiments of the present invention, gesture motion information is determined from the distance information collected by at least two distance sensors on the same temple of the intelligent head-mounted terminal and the time information corresponding to that distance information; a control instruction is generated according to the gesture motion information; and the current interface of the intelligent head-mounted terminal is controlled based on the control instruction. This solves the problem of optical-mechanical imaging shake caused by interacting through the touch area of a touch temple on a lightweight terminal, enables interaction without touching the intelligent head-mounted terminal, and improves the user's viewing experience.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a flow chart of a control method in an embodiment of the invention;
fig. 2 is a schematic diagram of an intelligent head-mounted terminal in an embodiment of the present invention;
FIG. 3 is an interaction diagram of gesture motion information from the first distance sensor to the second distance sensor in an embodiment of the invention;
FIG. 4 is a schematic structural diagram of a control device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an intelligent head-mounted terminal in an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the following embodiments, a smart headset refers to glasses that can interact with the user through their own data processing capability, or through data communication with a mobile phone, tablet, computer, and the like; smart headsets include, but are not limited to, AR glasses, VR glasses, MR glasses, Bluetooth glasses, and the like.
These smart headsets have one or more temples, with one or more lenses supported on the smart headset body.
The smart headset may be display-capable, such as AR glasses, VR glasses, or MR glasses; or non-display-capable, such as Bluetooth glasses.
Example one
Fig. 1 is a flowchart of a control method provided in an embodiment of the present invention. This embodiment is applicable to controlling an intelligent head-mounted terminal. The method may be executed by the control apparatus in the embodiments of the present invention, which may be implemented in software and/or hardware. As shown in fig. 1, the method specifically includes the following steps:
and S110, acquiring distance information acquired by each distance sensor and time information corresponding to the distance information.
The control method provided by the embodiment of the present invention is applied to an intelligent head-mounted terminal that includes an intelligent head-mounted terminal body, at least one temple connected to the body, and at least two distance sensors arranged on the temple. For example, as shown in fig. 2, the intelligent head-mounted terminal may include a body, two temples, a first distance sensor, and a second distance sensor. The body includes a frame, a display module, and an optical-mechanical module. The two temples are arranged at the left and right ends of the body; the first and second distance sensors are both arranged on the outside of the right temple; the first distance sensor is closer to the body than the second distance sensor; and the two sensors are spaced a preset distance apart.
A distance sensor, also called a displacement sensor, senses the distance between itself and an object in order to perform a predetermined function. The distance sensors may be arranged on the outside of a temple of the intelligent head-mounted terminal; for example, two distance sensors may be arranged on the outside of the right temple. By working principle, distance sensors can be classified into optical distance sensors, infrared distance sensors, ultrasonic distance sensors, and the like. The distance sensor in the embodiment of the present invention may be an infrared distance sensor, which has an infrared emitting tube and an infrared receiving tube: when infrared light emitted by the emitting tube is received by the receiving tube, the detected distance to an obstruction is short and an obstruction is present; when the receiving tube cannot receive the emitted infrared light, the distance is long and it can be judged that nothing obstructs the sensor. Other types of distance sensors work on the same principle, determining distance through the emission and reception of some signal, such as ultrasonic waves or light pulses; this is not limited in the embodiments of the present invention.
The distance information may be determined from the infrared light emitted by the emitting tube and received by the receiving tube of the distance sensor; it may also be determined from transmitted and received ultrasonic waves. This is not limited in the embodiments of the present invention.
The time information corresponding to the distance information may be a timestamp recorded by the distance sensor while collecting the distance information.
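As an illustration of step S110 only, not the patented implementation, the following minimal Python sketch pairs each distance reading with a timestamp. The read_distance callback is a hypothetical stand-in for the real sensor driver (the description later mentions an I2C interface), and the sampling period is an assumed value.

```python
import time
from typing import Callable, Dict, List, Tuple

def sample_sensors(
    read_distance: Callable[[int], float],  # hypothetical driver callback
    sensor_ids: List[int],
    n_samples: int,
    period_s: float = 0.01,  # assumed sampling period
) -> Dict[int, List[Tuple[float, float]]]:
    """Collect (distance, timestamp) pairs per sensor, as in step S110."""
    samples: Dict[int, List[Tuple[float, float]]] = {sid: [] for sid in sensor_ids}
    for _ in range(n_samples):
        for sid in sensor_ids:
            distance = read_distance(sid)  # distance information
            timestamp = time.monotonic()   # corresponding time information
            samples[sid].append((distance, timestamp))
        time.sleep(period_s)
    return samples
```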
And S120, determining gesture motion information according to the distance information acquired by each distance sensor and the time information corresponding to the distance information.
The gesture motion information includes: a hand wave in the direction from the side close to the intelligent head-mounted terminal body to the side far from it; a hand wave in the direction from the side far from the body to the side close to it; a hand pause (the motion information for when the time the hand obstructs a distance sensor exceeds a time threshold); and the like.
Specifically, the manner of determining the gesture motion information according to the distance information acquired by each distance sensor and the time information corresponding to the distance information may be: determining a time stamp of the shelter detected by each distance sensor according to the distance information acquired by each distance sensor and the time information corresponding to the distance information; if the difference value between the time stamp of the shelter detected by the distance sensor close to the intelligent head-mounted terminal body and the time stamp of the shelter detected by the distance sensor far away from the intelligent head-mounted terminal body is negative, determining that the gesture motion information is first motion information; and if the difference value between the time stamp of the shelter detected by the distance sensor close to the intelligent head-mounted terminal body and the time stamp of the shelter detected by the distance sensor far away from the intelligent head-mounted terminal body in the adjacent distance sensors is positive, determining that the gesture motion information is second motion information. For example, the smart headset terminal may include: first distance sensor and second distance sensor, if the difference of the time stamp that first distance sensor detected the shelter and the time stamp that second distance sensor detected the shelter is negative, then confirm gesture motion information and wave the direction for the hand and be: first distance sensor → second distance sensor (hand first obscures first distance sensor, then second distance sensor). If the difference value between the time stamp of the first distance sensor detecting the shielding object and the time stamp of the second distance sensor detecting the shielding object is positive, determining that the gesture motion information is that the hand waving direction is: second distance sensor → first distance sensor (the second distance sensor is occluded first by the hand, then the first distance sensor). And if the shielding time corresponding to the first distance sensor is greater than the time threshold, determining that the gesture motion information is the hand stop. And if the shielding time corresponding to the second distance sensor is greater than the time threshold, determining that the gesture motion information is the hand stop.
And S130, generating a control instruction according to the gesture motion information, and controlling the current interface of the intelligent head-mounted terminal based on the control instruction.
The control instruction may be any one of: next page, previous page, next track, previous track, next episode, previous episode, pause, swipe left, and swipe right.
Specifically, the control instruction may be generated from the gesture motion information by pre-establishing a correspondence table between gesture motion information and control instructions and querying the table with the gesture motion information to obtain the corresponding control instruction. Alternatively, the control instruction may be generated by acquiring the type information of the current interface of the intelligent head-mounted terminal, generating a control instruction according to the type information and the gesture motion information, and controlling the current interface based on that instruction. For example: if the current interface is a reading interface, the first motion information generates a next-page instruction and the second motion information generates a previous-page instruction. If the current interface is an audio playing interface, the first motion information generates a next-track instruction and the second motion information generates a previous-track instruction. If the current interface is a video playing interface, the first motion information generates a next-episode instruction and the second motion information generates a previous-episode instruction. If the current interface is the main interface, the second motion information generates a swipe-right instruction and the first motion information generates a swipe-left instruction. If the current interface is a folder interface, the first motion information generates a next-item instruction and the second motion information generates a previous-item instruction. If the current interface is the main interface and contains a target icon in the selected state, the second motion information generates an instruction to select the icon to the right of the target icon, and the first motion information generates an instruction to select the icon to the left of the target icon.
Specifically, the current interface of the intelligent head-mounted terminal may be controlled based on the control instruction as follows: if the instruction is previous page, display the content of the previous page; if next page, display the content of the next page; if next episode, display the content of the next video episode; if previous episode, display the content of the previous video episode; if swipe right, display the interface obtained after swiping right; and if swipe left, display the interface obtained after swiping left.
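The per-interface mapping described above is a plain lookup from (interface type, gesture motion information) to an instruction. A sketch with assumed string names for the interface types and instructions, since the patent describes the mapping only in prose:

```python
from typing import Optional

# Assumed names; each entry mirrors one case from the description.
COMMAND_TABLE = {
    ("reading", "first_motion"): "next_page",
    ("reading", "second_motion"): "previous_page",
    ("audio", "first_motion"): "next_track",
    ("audio", "second_motion"): "previous_track",
    ("video", "first_motion"): "next_episode",
    ("video", "second_motion"): "previous_episode",
    ("main", "first_motion"): "swipe_left",
    ("main", "second_motion"): "swipe_right",
    ("folder", "first_motion"): "next_item",
    ("folder", "second_motion"): "previous_item",
}

def generate_instruction(interface_type: str, gesture: str) -> Optional[str]:
    """Step S130: derive a control instruction from interface type and gesture."""
    return COMMAND_TABLE.get((interface_type, gesture))
```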
Optionally, the at least two distance sensors are arranged on the outer side of the right temple, and adjacent distance sensors are spaced a preset distance apart.
The preset distance is a distance set in advance to facilitate determining gesture motion information. For example, the intelligent head-mounted terminal may include a body, two temples, a first distance sensor, and a second distance sensor; the body includes a frame, a display module, and an optical-mechanical module; the two temples are arranged at the left and right ends of the body; both distance sensors are arranged on the outside of the right temple; the first distance sensor is closer to the body than the second distance sensor; and the two sensors are spaced the preset distance apart.
Optionally, determining gesture motion information according to the distance information collected by each distance sensor and the time information corresponding to the distance information includes:
determining, for each distance sensor, the timestamp at which it detects an obstruction, according to the distance information collected by the sensor and the corresponding time information;
if the difference between the timestamp at which the sensor (of a pair of adjacent distance sensors) closer to the intelligent head-mounted terminal body detects an obstruction and the timestamp at which the sensor farther from the body detects it is negative, determining that the gesture motion information is first motion information;
and if that difference is positive, determining that the gesture motion information is second motion information.
Wherein the timestamp is a point in time.
Among adjacent distance sensors, the sensor close to the intelligent head-mounted terminal body is the one whose distance to the body is smaller than that of the sensor far from the body.
Specifically, the timestamp at which each distance sensor detects an obstruction may be determined as follows: when infrared light emitted by the sensor's emitting tube is received by its receiving tube, the sensor is determined to have detected an obstruction, and the time at which the receiving tube receives the emitted infrared light is taken as the timestamp at which that sensor detected the obstruction.
The first motion information is a hand wave in the direction: side close to the intelligent head-mounted terminal body → side far from the body, i.e. first distance sensor → second distance sensor.
The second motion information is a hand wave in the direction: side far from the intelligent head-mounted terminal body → side close to the body, i.e. second distance sensor → first distance sensor.
Specifically, if the difference between the timestamp at which the closer sensor detects the obstruction and the timestamp at which the farther sensor detects it is negative, the gesture motion information is determined to be the first motion information. For example, the intelligent head-mounted terminal may include a first and a second distance sensor, with the first sensor closer to the terminal body than the second, so that the first sensor is the one close to the body and the second the one far from it. If the difference between the first sensor's obstruction timestamp and the second sensor's obstruction timestamp is negative, the gesture motion information is the first motion information, i.e. first distance sensor → second distance sensor.
Specifically, if that difference is positive, the gesture motion information is determined to be the second motion information. Under the same sensor arrangement, if the difference between the first sensor's obstruction timestamp and the second sensor's obstruction timestamp is positive, the gesture motion information is the second motion information, i.e. second distance sensor → first distance sensor.
It should be noted that the opposite convention may also be used: if the difference between the closer sensor's obstruction timestamp and the farther sensor's obstruction timestamp is positive, the gesture motion information is determined to be the first motion information, and if it is negative, the second motion information. The embodiments of the present invention are not limited in this regard.
Optionally, generating a control instruction according to the gesture motion information, and controlling a current interface of the smart headset terminal based on the control instruction includes:
acquiring the type information of the current interface of the intelligent head-mounted terminal;
generating a control instruction according to the type information and the gesture motion information;
and controlling the current interface of the intelligent head-mounted terminal based on the control instruction.
The type information may be any one of a reading interface, an audio playing interface, a video playing interface, a main interface and a folder interface.
Specifically, the control instruction may be generated from the type information and the gesture motion information as follows. If the current interface of the intelligent head-mounted terminal is a reading interface: the first motion information generates a next-page instruction, and the second motion information generates a previous-page instruction. If the current interface is an audio playing interface: the first motion information generates a next-track instruction, and the second motion information generates a previous-track instruction. If the current interface is a video playing interface: the first motion information generates a next-episode instruction, and the second motion information generates a previous-episode instruction. If the current interface is the main interface: the second motion information generates a swipe-right instruction, and the first motion information generates a swipe-left instruction. If the current interface is a folder interface: the first motion information generates a next-item instruction, and the second motion information generates a previous-item instruction. If the current interface is the main interface and contains a target icon in the selected state: the second motion information generates an instruction to select the icon to the right of the target icon, and the first motion information generates an instruction to select the icon to the left of the target icon.
Specifically, the current interface of the intelligent head-mounted terminal may be controlled based on the control instruction as follows: if the instruction is previous page, display the content of the previous page; if next page, display the content of the next page; if next episode, display the content of the next video episode; if previous episode, display the content of the previous video episode; if swipe right, display the interface obtained after swiping right; and if swipe left, display the interface obtained after swiping left.
Optionally, generating a control instruction according to the type information and the gesture motion information includes:
if the current interface of the intelligent head-mounted terminal is a first-type interface and the gesture motion information is the first motion information, generating a next-page instruction, wherein the first-type interface includes: a reading interface;
and if the current interface of the intelligent head-mounted terminal is a first-type interface and the gesture motion information is the second motion information, generating a previous-page instruction.
The first-type interface includes: a reading interface.
Specifically, if the current interface of the intelligent head-mounted terminal is a first-type interface and the gesture motion information is the first motion information, a next-page instruction is generated. For example, if the current interface is a reading interface and the gesture motion information is first distance sensor → second distance sensor, a next-page instruction is generated.
Specifically, if the current interface of the intelligent head-mounted terminal is a first-type interface and the gesture motion information is the second motion information, a previous-page instruction is generated. For example, if the current interface is a reading interface and the gesture motion information is second distance sensor → first distance sensor, a previous-page instruction is generated.
Optionally, generating a control instruction according to the type information and the gesture motion information includes:
if the current interface of the intelligent head-mounted terminal is a second-type interface and the gesture motion information is the first motion information, generating a next-track instruction, wherein the second-type interface includes: an audio playing interface;
and if the current interface of the intelligent head-mounted terminal is a second-type interface and the gesture motion information is the second motion information, generating a previous-track instruction.
Specifically, if the current interface of the intelligent head-mounted terminal is a second-type interface and the gesture motion information is the first motion information, a next-track instruction is generated. For example, if the current interface is an audio playing interface and the gesture motion information is first distance sensor → second distance sensor, a next-track instruction is generated.
Specifically, if the current interface of the intelligent head-mounted terminal is a second-type interface and the gesture motion information is the second motion information, a previous-track instruction is generated. For example, if the current interface is an audio playing interface and the gesture motion information is second distance sensor → first distance sensor, a previous-track instruction is generated.
It should be noted that after the next-track instruction is generated, the intelligent head-mounted terminal is controlled to play the next track, and after the previous-track instruction is generated, it is controlled to play the previous track.
Optionally, generating a control instruction according to the type information and the gesture motion information includes:
if the current interface of the intelligent head-mounted terminal is a third-type interface and the gesture motion information is the first motion information, generating a next-episode instruction, wherein the third-type interface includes: a video playing interface;
and if the current interface of the intelligent head-mounted terminal is a third-type interface and the gesture motion information is the second motion information, generating a previous-episode instruction.
Specifically, if the current interface of the intelligent head-mounted terminal is a third-type interface and the gesture motion information is the first motion information, a next-episode instruction is generated. For example, if the current interface is a video playing interface and the gesture motion information is first distance sensor → second distance sensor, a next-episode instruction (or a next-video instruction) is generated.
Specifically, if the current interface of the intelligent head-mounted terminal is a third-type interface and the gesture motion information is the second motion information, a previous-episode instruction is generated. For example, if the current interface is a video playing interface and the gesture motion information is second distance sensor → first distance sensor, a previous-episode instruction (or a previous-video instruction) is generated.
Optionally, generating a control instruction according to the type information and the gesture motion information includes:
if the current interface of the intelligent head-mounted terminal is a main interface and the gesture motion information is second motion information, generating a right sliding instruction;
and if the current interface of the intelligent head-mounted terminal is a main interface and the gesture motion information is first motion information, generating a leftward sliding instruction.
Optionally, if the current interface of the intelligent head-mounted terminal is the main interface and the main interface contains a target icon in the selected state: if the gesture motion information is the second motion information, generate an instruction to select the icon to the right of the target icon; and if the gesture motion information is the first motion information, generate an instruction to select the icon to the left of the target icon.
Optionally, if the current interface of the intelligent head-mounted terminal is a commodity display interface and the gesture motion information is the second motion information, generate a downward swipe instruction; and if the current interface is a commodity display interface and the gesture motion information is the first motion information, generate an upward swipe instruction.
Optionally, generating a control instruction according to the gesture motion information includes:
pre-establishing a corresponding relation table of gesture motion information and control instructions;
and inquiring the corresponding relation table according to the gesture motion information to obtain a control instruction corresponding to the gesture motion information.
The correspondence table contains correspondences between gesture motion information and control instructions. For example, the table may specify that gesture motion information 1 corresponds to control instruction W, gesture motion information 2 corresponds to control instruction M, and so on.
Specifically, the correspondence table is queried with the gesture motion information to obtain the corresponding control instruction. For example, if the table specifies that gesture motion information 1 corresponds to control instruction W and gesture motion information 2 corresponds to control instruction M, and the detected gesture motion information is gesture motion information 1, querying the table yields control instruction W.
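A sketch of this simple table variant, mirroring the W/M example in the text; the key names are assumptions:

```python
# Mirrors the example above: gesture motion information 1 -> control
# instruction W, gesture motion information 2 -> control instruction M.
CORRESPONDENCE_TABLE = {"gesture_1": "W", "gesture_2": "M"}

def lookup_instruction(gesture: str):
    """Query the pre-established correspondence table."""
    return CORRESPONDENCE_TABLE.get(gesture)

assert lookup_instruction("gesture_1") == "W"
```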
Optionally, determining gesture motion information according to the distance information acquired by each distance sensor and the time information corresponding to the distance information, includes:
determining the obstruction time for each distance sensor according to the distance information collected by the sensor and the time information corresponding to the distance information;
and if the obstruction time for any distance sensor exceeds the time threshold, determining that the gesture motion information is third motion information.
The time threshold may be set by the system; for example, it may be 2 s.
The obstruction time for a distance sensor is the duration for which it continuously detects an obstruction; for example, if the first distance sensor continuously detects an obstruction for 2 s, its obstruction time is determined to be 2 s.
The third motion information may be a hand pause, i.e. the hand continuously obstructing a distance sensor.
In one specific example, the intelligent head-mounted terminal includes a body, two temples, a first distance sensor, and a second distance sensor; the body includes a frame, a display module, and an optical-mechanical module; the two temples are arranged at the left and right ends of the body; both sensors are arranged on the outside of the right temple; the first sensor is closer to the body than the second; and the sensors are spaced the preset distance apart. If the obstruction time for the first distance sensor is 4 s, the gesture motion information is determined to be a hand pause over the first distance sensor, i.e. third motion information.
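A sketch of this dwell check, reusing the sample format of the earlier listings; the 2 s threshold is the example value from the description, and the occlusion threshold is an assumption:

```python
PAUSE_THRESHOLD_S = 2.0  # example threshold from the description

def occlusion_duration(samples, threshold_m=0.10):
    """Longest continuous time for which the sensor saw an obstruction."""
    longest = 0.0
    run_start = None
    for distance, timestamp in samples:
        if distance < threshold_m:
            if run_start is None:
                run_start = timestamp
            longest = max(longest, timestamp - run_start)
        else:
            run_start = None
    return longest

def is_pause_gesture(samples) -> bool:
    """Third motion information: hand held over one sensor past the threshold."""
    return occlusion_duration(samples) > PAUSE_THRESHOLD_S
```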
Optionally, generating a control instruction according to the gesture motion information includes:
and generating a pause instruction according to the third motion information.
In a specific example, as shown in fig. 3, when the user wears the intelligent head-mounted terminal, the distance sensors are located on the outside of its right temple. When the user's hand moves from the position of the first distance sensor to the position of the second distance sensor within about 10 cm of the temple, the first and then the second distance sensor detect an obstruction in sequence, and a hand-wave motion is recognized. The data of the first and second distance sensors are communicated over an I2C interface and read by the CPU, which, according to the configured parameters, judges the direction of the wave from the order in which the two sensors' data are sampled. The wave function is then matched to the corresponding function in the particular scene. For example, in a music playing interface the CPU applies a predetermined decision scheme to the sampled sensor data: when the data reporting order is first distance sensor → second distance sensor, a switch-to-next-track action is recognized; when the order is second distance sensor → first distance sensor, a switch-to-previous-track action is recognized. A single sensor can also be configured so that an obstruction lasting more than 2 s is treated as a pause action. The main interface, where the menu needs to be switched left and right, is handled in the same way.
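Tying the steps together, a hedged end-to-end sketch of the Fig. 3 interaction, reusing the helpers from the earlier listings; read_distance, get_interface_type, and dispatch are hypothetical callbacks standing in for the real I2C driver, the query for the current interface type, and the command handler:

```python
def control_cycle(read_distance, get_interface_type, dispatch, n_samples=50):
    """One sampling window: S110 sample, S120 classify, S130 dispatch."""
    samples = sample_sensors(read_distance, sensor_ids=[1, 2], n_samples=n_samples)
    gesture = classify_swipe(samples[1], samples[2])
    if gesture is None and any(is_pause_gesture(samples[sid]) for sid in (1, 2)):
        dispatch("pause")  # single sensor obstructed for more than 2 s
        return
    if gesture is not None:
        # get_interface_type() returns e.g. "audio" for the music interface
        command = generate_instruction(get_interface_type(), gesture)
        if command is not None:
            dispatch(command)  # e.g. "next_track"
```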
According to this technical scheme, gesture motion information is determined from the distance information collected by at least two distance sensors on the same temple of the intelligent head-mounted terminal and the time information corresponding to that distance information; a control instruction is generated according to the gesture motion information; and the current interface of the intelligent head-mounted terminal is controlled based on the control instruction. This solves the problem that, because the intelligent head-mounted terminal is light, interacting through the touch area of a touch temple makes the optical-mechanical imaging shake; it enables interaction without touching the intelligent head-mounted terminal and improves the user's viewing experience.
Example two
Fig. 4 is a schematic structural diagram of a control apparatus according to an embodiment of the present invention. This embodiment is applicable to controlling an intelligent head-mounted terminal. The apparatus may be implemented in software and/or hardware and may be integrated in any device that provides a control function. As shown in fig. 4, the control apparatus specifically includes: an information acquisition module 210, a gesture motion information determination module 220, and a control module 230.
The information acquisition module is used for acquiring distance information acquired by each distance sensor and time information corresponding to the distance information;
the gesture motion information determining module is used for determining gesture motion information according to the distance information acquired by each distance sensor and the time information corresponding to the distance information;
and the control module is used for generating a control instruction according to the gesture motion information and controlling the current interface of the intelligent head-mounted terminal based on the control instruction.
Optionally, the at least two distance sensors are arranged on the outer side of the right temple, and adjacent distance sensors are spaced a preset distance apart.
Optionally, the gesture motion information determining module is specifically configured to:
determining, for each distance sensor, the timestamp at which it detects an obstruction, according to the distance information collected by the sensor and the corresponding time information;
if the difference between the timestamp at which the sensor (of a pair of adjacent distance sensors) closer to the intelligent head-mounted terminal body detects an obstruction and the timestamp at which the sensor farther from the body detects it is negative, determining that the gesture motion information is first motion information;
and if that difference is positive, determining that the gesture motion information is second motion information.
Optionally, the control module is specifically configured to:
acquiring the type information of the current interface of the intelligent head-mounted terminal;
generating a control instruction according to the type information and the gesture motion information;
and controlling the current interface of the intelligent head-mounted terminal based on the control instruction.
Optionally, the control module is specifically configured to:
if the current interface of the intelligent head-mounted terminal is a first-type interface and the gesture motion information is the first motion information, generating a next-page instruction, wherein the first-type interface includes: a reading interface;
and if the current interface of the intelligent head-mounted terminal is a first-type interface and the gesture motion information is the second motion information, generating a previous-page instruction.
Optionally, the control module is specifically configured to:
if the current interface of the intelligent head-mounted terminal is a second-type interface and the gesture motion information is the first motion information, generating a next-track instruction, wherein the second-type interface includes: an audio playing interface;
and if the current interface of the intelligent head-mounted terminal is a second-type interface and the gesture motion information is the second motion information, generating a previous-track instruction.
Optionally, the control module is specifically configured to:
if the current interface of the intelligent head-mounted terminal is a third-type interface and the gesture motion information is the first motion information, generating a next-episode instruction, wherein the third-type interface includes: a video playing interface;
and if the current interface of the intelligent head-mounted terminal is a third-type interface and the gesture motion information is the second motion information, generating a previous-episode instruction.
Optionally, the control module is specifically configured to:
if the current interface of the intelligent head-mounted terminal is the main interface and the gesture motion information is the second motion information, generating a slide-right instruction;
and if the current interface of the intelligent head-mounted terminal is the main interface and the gesture motion information is the first motion information, generating a slide-left instruction.
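As an illustration of the type-aware instruction generation described above, the branching might look like the sketch below; the interface-type and instruction identifiers are placeholders invented for this example, not names used by the embodiment.

```python
# Hedged sketch: all string identifiers here are hypothetical.
def generate_instruction(interface_type, gesture):
    if interface_type == "reading":            # first-type interface
        return {"first_motion": "jump_next_page",
                "second_motion": "jump_previous_page"}.get(gesture)
    if interface_type == "audio_playing":      # second-type interface
        return {"first_motion": "jump_next_track",
                "second_motion": "jump_previous_track"}.get(gesture)
    if interface_type == "video_playing":      # third-type interface
        return {"first_motion": "jump_next_episode",
                "second_motion": "jump_previous_episode"}.get(gesture)
    if interface_type == "main":               # main interface
        return {"first_motion": "slide_left",
                "second_motion": "slide_right"}.get(gesture)
    return None  # unknown interface type: no instruction generated
```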
Optionally, the control module is specifically configured to:
pre-establishing a correspondence table between gesture motion information and control instructions;
and querying the correspondence table according to the gesture motion information to obtain the control instruction corresponding to the gesture motion information.
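A pre-built correspondence table of this kind might, under the same hypothetical identifiers as above, be as simple as a dictionary lookup:

```python
# Hedged sketch: the table contents are illustrative, not prescribed
# by this embodiment.
GESTURE_TO_INSTRUCTION = {
    "first_motion": "jump_next_page",
    "second_motion": "jump_previous_page",
}

def lookup_instruction(gesture):
    # Query the correspondence table; gestures without an entry
    # yield no instruction.
    return GESTURE_TO_INSTRUCTION.get(gesture)
```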
Optionally, the gesture motion information determining module is specifically configured to:
determining the shielding duration corresponding to each distance sensor according to the distance information acquired by each distance sensor and the time information corresponding to the distance information;
and if the shielding duration corresponding to any distance sensor is greater than a time threshold, determining that the gesture motion information is third motion information.
Optionally, the control module is specifically configured to:
and generating a pause instruction according to the third motion information.
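For illustration, the dwell detection behind the third motion information might be sketched as follows, again under an assumed sample format and an assumed one-second threshold:

```python
# Hedged sketch: TIME_THRESHOLD_S and the sample format are assumptions.
TIME_THRESHOLD_S = 1.0
OCCLUSION_MAX_MM = 40

def is_dwell_gesture(samples):
    """samples: time-ordered (timestamp_s, distance_mm) pairs from one sensor.
    Returns True if the sensor stays shielded longer than the time threshold,
    in which case the gesture motion information is the third motion
    information and a pause instruction is generated."""
    start = None
    for ts, dist in samples:
        if dist < OCCLUSION_MAX_MM:
            if start is None:
                start = ts  # shielding run begins
            elif ts - start > TIME_THRESHOLD_S:
                return True
        else:
            start = None  # shielding ended before the threshold was reached
    return False
```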
The above product can execute the method provided by any embodiment of the present invention, and has the functional modules corresponding to that method together with its beneficial effects.
According to the technical scheme of the above embodiment, gesture motion information is determined from the distance information acquired by at least two distance sensors on the same temple of the intelligent head-mounted terminal and the time information corresponding to that distance information; a control instruction is then generated according to the gesture motion information, and the current interface of the intelligent head-mounted terminal is controlled based on the control instruction. This solves the problem that, because the intelligent head-mounted terminal is light, interacting through a touch area on the temple shakes the optical system: the user can interact without touching the intelligent head-mounted terminal, which improves the viewing experience.
EXAMPLE III
Fig. 5 shows a schematic structural diagram of an intelligent head-mounted terminal 10 that can be used to implement an embodiment of the invention. The components shown here, their connections and relationships, and their functions are exemplary only and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in fig. 5, the intelligent head-mounted terminal 10 includes: an intelligent head-mounted terminal body, at least one temple connected to the body, and at least two distance sensors provided on the temple (none of which are shown in fig. 5), as well as at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12 and a Random Access Memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 can also store various programs and data necessary for the operation of the intelligent head-mounted terminal 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A plurality of components in the intelligent head-mounted terminal 10 are connected to the I/O interface 15, including: an input unit 16, such as a keyboard or a mouse; an output unit 17, such as various types of displays and speakers; a storage unit 18, such as a magnetic disk or an optical disk; and a communication unit 19, such as a network card, a modem, or a wireless communication transceiver. The communication unit 19 allows the intelligent head-mounted terminal 10 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 11 executes the respective methods and processes described above, such as the control method.
In some embodiments, the control method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed on the intelligent head-mounted terminal 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the control method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the control method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, a special purpose computer, or another programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine; partly on a machine; as a stand-alone software package, partly on a machine and partly on a remote machine; or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described herein may be implemented on an intelligent head-mounted terminal having: a display device for displaying information to the user (for example, a diffraction waveguide chip combined with a DLP (Digital Light Processing) display module, an LCOS (Liquid Crystal On Silicon) display module, or the like); and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the intelligent head-mounted terminal. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (14)

1. A control method applied to an intelligent head-mounted terminal, the intelligent head-mounted terminal comprising: an intelligent head-mounted terminal body, at least one temple connected to the intelligent head-mounted terminal body, and at least two distance sensors arranged on the temple, wherein the control method comprises:
acquiring distance information acquired by each distance sensor and time information corresponding to the distance information;
determining gesture motion information according to the distance information acquired by each distance sensor and the time information corresponding to the distance information;
and generating a control instruction according to the gesture motion information, and controlling the current interface of the intelligent head-mounted terminal based on the control instruction.
2. The method of claim 1, wherein the at least two distance sensors are disposed on the outer side of the temple, and adjacent distance sensors are spaced apart by a predetermined distance.
3. The method of claim 1, wherein determining gesture motion information according to the distance information acquired by each distance sensor and the time information corresponding to the distance information comprises:
determining the timestamp at which each distance sensor detects a shielding object according to the distance information acquired by each distance sensor and the time information corresponding to the distance information;
if, for adjacent distance sensors, the difference between the timestamp at which the distance sensor closer to the intelligent head-mounted terminal body detects the shielding object and the timestamp at which the distance sensor farther from the intelligent head-mounted terminal body detects the shielding object is negative, determining that the gesture motion information is first motion information;
and if, for adjacent distance sensors, the difference between the timestamp at which the distance sensor closer to the intelligent head-mounted terminal body detects the shielding object and the timestamp at which the distance sensor farther from the intelligent head-mounted terminal body detects the shielding object is positive, determining that the gesture motion information is second motion information.
4. The method according to claim 3, wherein generating a control instruction according to the gesture motion information, and controlling a current interface of the smart headset terminal based on the control instruction comprises:
acquiring type information of a current interface of the intelligent head-mounted terminal;
generating a control instruction according to the type information and the gesture motion information;
and controlling the current interface of the intelligent head-mounted terminal based on the control instruction.
5. The method of claim 4, wherein generating a control instruction according to the type information and the gesture motion information comprises:
if the current interface of the intelligent head-mounted terminal is a first-type interface and the gesture motion information is the first motion information, generating a jump-to-next-page instruction, wherein the first-type interface comprises: a reading interface;
and if the current interface of the intelligent head-mounted terminal is a first-type interface and the gesture motion information is the second motion information, generating a jump-to-previous-page instruction.
6. The method of claim 4, wherein generating a control instruction according to the type information and the gesture motion information comprises:
if the current interface of the intelligent head-mounted terminal is a second-type interface and the gesture motion information is the first motion information, generating a jump-to-next-track instruction, wherein the second-type interface comprises: an audio playing interface;
and if the current interface of the intelligent head-mounted terminal is a second-type interface and the gesture motion information is the second motion information, generating a jump-to-previous-track instruction.
7. The method of claim 4, wherein generating a control instruction according to the type information and the gesture motion information comprises:
if the current interface of the intelligent head-mounted terminal is a third-type interface and the gesture motion information is the first motion information, generating a jump-to-next-episode instruction, wherein the third-type interface comprises: a video playing interface;
and if the current interface of the intelligent head-mounted terminal is a third-type interface and the gesture motion information is the second motion information, generating a jump-to-previous-episode instruction.
8. The method of claim 4, wherein generating a control instruction according to the type information and the gesture motion information comprises:
if the current interface of the intelligent head-mounted terminal is the main interface and the gesture motion information is the second motion information, generating a slide-right instruction;
and if the current interface of the intelligent head-mounted terminal is the main interface and the gesture motion information is the first motion information, generating a slide-left instruction.
9. The method of claim 3, wherein generating a control instruction according to the gesture motion information comprises:
pre-establishing a correspondence table between gesture motion information and control instructions;
and querying the correspondence table according to the gesture motion information to obtain the control instruction corresponding to the gesture motion information.
10. The method of claim 1, wherein determining gesture motion information according to the distance information collected by each distance sensor and the time information corresponding to the distance information comprises:
determining the shielding duration corresponding to each distance sensor according to the distance information acquired by each distance sensor and the time information corresponding to the distance information;
and if the shielding duration corresponding to any distance sensor is greater than a time threshold, determining that the gesture motion information is third motion information.
11. The method of claim 10, wherein generating a control instruction according to the gesture motion information comprises:
and generating a pause instruction according to the third motion information.
12. A control device, comprising:
the information acquisition module is used for acquiring distance information acquired by each distance sensor and time information corresponding to the distance information;
the gesture motion information determining module is used for determining gesture motion information according to the distance information acquired by each distance sensor and the time information corresponding to the distance information;
and the control module is used for generating a control instruction according to the gesture motion information and controlling the current interface of the intelligent head-mounted terminal based on the control instruction.
13. An intelligent head-mounted terminal, wherein the intelligent head-mounted terminal comprises:
an intelligent head-mounted terminal body, at least one temple connected to the intelligent head-mounted terminal body, at least two distance sensors arranged on the temple, and at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the control method of any one of claims 1-11.
14. A computer-readable storage medium storing computer instructions which, when executed, cause a processor to implement the control method of any one of claims 1-11.
CN202210625674.5A 2022-06-02 2022-06-02 Control method, device, equipment and storage medium Pending CN114879898A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210625674.5A CN114879898A (en) 2022-06-02 2022-06-02 Control method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114879898A true CN114879898A (en) 2022-08-09

Family

ID=82678698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210625674.5A Pending CN114879898A (en) 2022-06-02 2022-06-02 Control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114879898A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103167155A (en) * 2012-09-25 2013-06-19 深圳市金立通信设备有限公司 System and method for achieving control of music player of mobile phone based on range sensor
US20140152539A1 (en) * 2012-12-03 2014-06-05 Qualcomm Incorporated Apparatus and method for an infrared contactless gesture system
CN106020462A (en) * 2016-05-16 2016-10-12 北京奇虎科技有限公司 Light-sensing operation method and apparatus for intelligent device
CN106527711A (en) * 2016-11-07 2017-03-22 珠海市魅族科技有限公司 Virtual reality equipment control method and virtual reality equipment
CN111651034A (en) * 2019-12-05 2020-09-11 武汉美讯半导体有限公司 Intelligent glasses and control method and control chip of intelligent glasses
CN113377199A (en) * 2021-06-16 2021-09-10 广东艾檬电子科技有限公司 Gesture recognition method, terminal device and storage medium

Similar Documents

Publication Publication Date Title
JP5962403B2 (en) Information processing apparatus, display control method, and program
US20180173326A1 (en) Automatic configuration of an input device based on contextual usage
WO2016045579A1 (en) Application interaction control method and apparatus, and terminal
CN105760102B (en) Terminal interaction control method and device and application program interaction control method
CN109643241A (en) Display processing method, device, storage medium and electric terminal
CN109086366B (en) Recommended news display method, device and equipment in browser and storage medium
CN110618780A (en) Interaction device and interaction method for interacting multiple signal sources
CN111432245B (en) Multimedia information playing control method, device, equipment and storage medium
US10474324B2 (en) Uninterruptable overlay on a display
US11720814B2 (en) Method and system for classifying time-series data
CN112148160B (en) Floating window display method and device, electronic equipment and computer readable storage medium
CN112835484B (en) Dynamic display method and device based on operation body, storage medium and electronic equipment
CN103440033A (en) Method and device for achieving man-machine interaction based on bare hand and monocular camera
WO2019204772A1 (en) Display interface systems and methods
US20190056845A1 (en) Page Sliding Method And Apparatus, And User Terminal
CN112396676B (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
CN109302563B (en) Anti-shake processing method and device, storage medium and mobile terminal
KR20160016574A (en) Method and device for providing image
EP2888716A1 (en) Target object angle determination using multiple cameras
CN111638787B (en) Method and device for displaying information
CN104077784A (en) Method for extracting target object and electronic device
CN108829329B (en) Operation object display method and device and readable medium
CN114879898A (en) Control method, device, equipment and storage medium
CN107479692B (en) Virtual reality scene control method and device and virtual reality device
CN114659450B (en) Robot following method, device, robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 430050 No. b1337, chuanggu startup area, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Hubei Province

Applicant after: Hubei Xingji Meizu Technology Co.,Ltd.

Address before: 430050 No. b1337, chuanggu startup area, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Hubei Province

Applicant before: Hubei Xingji times Technology Co.,Ltd.