CN113010008A - Tooth movement interaction method and device based on TWS earphone and computer equipment - Google Patents

Tooth movement interaction method and device based on TWS earphone and computer equipment Download PDF

Info

Publication number
CN113010008A
CN113010008A CN202110138076.0A
Authority
CN
China
Prior art keywords
user
tooth
interactive information
morse code
tws
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110138076.0A
Other languages
Chinese (zh)
Inventor
蒋壮
郑勇
段瑾
邬志强
戴志涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Waterward Information Co Ltd
Original Assignee
Shenzhen Waterward Information Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Waterward Information Co Ltd filed Critical Shenzhen Waterward Information Co Ltd
Priority to CN202110138076.0A priority Critical patent/CN113010008A/en
Publication of CN113010008A publication Critical patent/CN113010008A/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Abstract

The application provides a tooth movement interaction method and device based on a TWS headset, and computer equipment. When a first user wears the TWS headset, the headset acquires the first user's tooth actions. The tooth actions are then analyzed to obtain corresponding interactive information, which is sent to a preset terminal for display or playback, so as to realize interaction between the first user and a second user of the preset terminal. By monitoring and parsing the user's tooth actions, the TWS headset obtains the interactive information the user wants to express. This is particularly suitable for scenarios where the user is deaf-mute: whenever the user needs to express interactive information, a few simple tooth actions suffice, which is convenient and fast, and effectively improves the degree of intelligence and the functional diversity of the TWS headset.

Description

Tooth movement interaction method and device based on TWS earphone and computer equipment
Technical Field
The application relates to the technical field of earphones, and in particular to a tooth movement interaction method and device based on a TWS headset, and to computer equipment.
Background
With the rapid growth of the TWS (True Wireless Stereo) headset market, TWS headsets can now carry various types of sensors, such as pressure sensors and acceleration sensors, and therefore have the potential to realize more functions. In the existing market, however, TWS headsets remain limited in intelligence and function: they are basically only used for playing music and cannot meet users' diversified demands.
Disclosure of Invention
The application mainly aims to provide a tooth movement interaction method and device based on a TWS (True Wireless Stereo) headset, as well as computer equipment, so as to overcome the defects of low intelligence and single function in existing TWS headsets.
In order to achieve the above object, the present application provides a tooth movement interaction method based on a TWS headset, comprising:
acquiring, by the TWS headset, tooth actions of a first user while the first user is wearing the TWS headset;
analyzing according to the tooth action to obtain corresponding interactive information;
and sending the interactive information to a preset terminal for displaying or playing to realize the interactive action of the first user and a second user, wherein the second user is the user of the preset terminal.
Further, the tooth action includes a plurality of tooth occlusion durations, and the step of obtaining corresponding interaction information according to the tooth action analysis includes:
matching and obtaining Morse code symbols corresponding to the tooth occlusion durations respectively from a pre-constructed tooth occlusion duration and Morse code symbol mapping relation table, wherein the tooth occlusion duration and Morse code symbol mapping relation table comprises a plurality of groups of tooth occlusion durations and Morse code symbols, and a single tooth occlusion duration corresponds to a single Morse code symbol;
matching characters corresponding to the Morse code symbols from a Morse code table;
and arranging the characters according to the acquisition time of the tooth occlusion duration corresponding to each character in sequence to obtain the interactive information.
Further, the Morse code symbols include "·" and "-", and the step of acquiring a tooth occlusion duration of the first user through the acceleration sensor while the first user wears the TWS headset is preceded by:
receiving a first tooth occlusion time length and a second tooth occlusion time length input by a user;
and associating the first tooth occlusion duration with the Morse code symbol "·" and the second tooth occlusion duration with the Morse code symbol "-", so as to generate the tooth occlusion duration and Morse code symbol mapping relation table.
Further, before the step of sending the interactive information to a preset terminal for displaying or playing, the method includes:
acquiring the used language type of the preset terminal;
judging whether the current language type of the interactive information is the same as the used language type;
and if the current language type of the interactive information is different from the used language type, converting the interactive information into character information corresponding to the used language type.
Further, the TWS headset is deployed with an acceleration sensor, and the step of acquiring the tooth movements of the first user by the TWS headset comprises:
acquiring acceleration generated when the face of the first user moves through the acceleration sensor;
calculating the face activity strength according to the acceleration;
judging whether the face activity strength is greater than a strength threshold value;
if the face activity strength is greater than a strength threshold value, judging that the teeth of the first user are occluded, and recording that the current moment is a first moment;
when the face activity strength is detected to be larger than the strength threshold value again, the tooth of the first user is judged to be loosened, and the current moment is recorded as a second moment;
and calculating the tooth occlusion duration according to the first time and the second time.
Further, the step of capturing the dental actions of the first user by the TWS headset is preceded by:
identifying whether the TWS headset currently starts an interactive mode;
and if the TWS earphone starts the interactive mode currently, generating an acquisition instruction, wherein the acquisition instruction is used for controlling the TWS earphone to acquire the tooth action of the first user.
Further, the step of sending the interactive information to a preset terminal for displaying or playing includes:
determining whether the second user is hearing impaired;
if the hearing of the second user is damaged, the interactive information is sent to the preset terminal in a text form to be displayed;
and if the second user has no hearing damage, sending the interactive information to the preset terminal in an audio form for playing.
The application also provides a dental motion interaction device based on the TWS headset, comprising:
the first acquisition module is used for acquiring tooth actions of a first user through a TWS earphone when the first user wears the TWS earphone;
the analysis module is used for obtaining corresponding interactive information according to the tooth action analysis;
and the interaction module is used for sending the interaction information to a preset terminal for displaying or playing.
Further, the tooth action includes a plurality of tooth occlusion durations, and the analyzing module includes:
the first matching unit is used for matching and obtaining Morse code symbols corresponding to the tooth occlusion durations from a pre-constructed tooth occlusion duration and Morse code symbol mapping relation table, the tooth occlusion duration and Morse code symbol mapping relation table comprises a plurality of groups of tooth occlusion durations and Morse code symbols, and a single tooth occlusion duration corresponds to a single Morse code symbol;
the second matching unit is used for matching characters corresponding to the Morse code symbols from the Morse code table;
and the arranging unit is used for sequentially arranging the characters according to the acquisition time of the tooth occlusion duration corresponding to each character to obtain the interactive information.
Further, the Morse code symbols include "·" and "-", and the tooth movement interaction device further includes:
the receiving module is used for receiving a first tooth occlusion duration and a second tooth occlusion duration which are input by a user;
the first generation module is used for associating the first tooth occlusion time length with the Morse code symbol "·", and associating the second tooth occlusion time length with the Morse code symbol "-", so as to generate the tooth occlusion time length and Morse code symbol mapping relation table.
Further, the dental movement interaction device further comprises:
the second acquisition module is used for acquiring the language type of the preset terminal;
the judging module is used for judging whether the current language type of the interactive information is the same as the used language type;
and the conversion module is used for converting the interactive information into character information corresponding to the used language type if the current language type of the interactive information is different from the used language type.
Further, the TWS headset is disposed with an acceleration sensor, and the first obtaining module includes:
the acquisition unit is used for acquiring acceleration generated when the face of the first user moves through the acceleration sensor;
the first calculation unit is used for calculating the face activity strength according to the acceleration;
the first judging unit is used for judging whether the face activity strength is greater than a strength threshold value;
the first determining unit is used for determining that the teeth of the first user are occluded if the face activity strength is greater than a strength threshold value, and recording the current moment as a first moment;
the second judging unit is used for judging that the teeth of the first user are loosened when the face activity strength is detected to be larger than the strength threshold value again, and recording the current moment as a second moment;
and the second calculating unit is used for calculating the tooth occlusion time length according to the first time and the second time.
Further, the dental movement interaction device further comprises:
the identification module is used for identifying whether the TWS headset starts an interaction mode currently;
and the second generating module is used for generating an acquisition instruction if the TWS headset currently starts the interactive mode, wherein the acquisition instruction is used for controlling the TWS headset to acquire the tooth action of the first user.
Further, the interaction module includes:
a second judging unit, configured to judge whether the second user is hearing-impaired;
the first generation unit is used for sending the interactive information to the preset terminal in a text form for displaying if the hearing of the second user is damaged;
and the second sending unit is used for sending the interactive information to the preset terminal in an audio form for playing if the second user has no hearing damage.
The present application further provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of any one of the above methods when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of any of the above.
According to the tooth movement interaction method and device based on the TWS headset, and the computer equipment, provided by the application, when a first user wears the TWS headset, the TWS headset acquires the first user's tooth actions. The tooth actions are then analyzed to obtain corresponding interactive information, which is sent to a preset terminal for display or playback, so as to realize interaction between the first user and a second user of the preset terminal. By monitoring and parsing the user's tooth actions, the TWS headset obtains the interactive information the user wants to express. This is particularly suitable for scenarios where the user is deaf-mute: whenever the user needs to express interactive information, a few simple tooth actions suffice, which is convenient and fast, and effectively improves the degree of intelligence and the functional diversity of the TWS headset.
Drawings
FIG. 1 is a schematic diagram illustrating the steps of a tooth movement interaction method based on a TWS headset according to an embodiment of the present application;
FIG. 2 is a block diagram of the overall structure of a tooth movement interaction device based on a TWS headset according to an embodiment of the present application;
fig. 3 is a block diagram schematically illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, in an embodiment of the present application, a method for dental interaction based on a TWS headset is provided, including:
s1, collecting tooth actions of the first user through the TWS headset when the first user wears the TWS headset;
s2, analyzing according to the tooth action to obtain corresponding interactive information;
and S3, sending the interactive information to a preset terminal for displaying or playing.
In this embodiment, the tooth action may be a tooth occlusion duration or a tooth occlusion force; here it is described as a tooth occlusion duration. An acceleration sensor is disposed in the TWS headset. When the first user wears the TWS headset and starts the interactive mode, the system acquires, through the acceleration sensor, the acceleration generated by the first user's facial movement. The system then calculates the facial activity strength from the acceleration and judges whether it exceeds a strength threshold. The strength threshold is set according to the strength of the facial activity when the first user bites or releases the teeth, and can be entered by the first user during initialization of the TWS headset. For example, during initialization the system prompts the first user, while wearing the headset, to bite and then release the teeth, records the strength data detected by the acceleration sensor for each action, and sets the strength threshold accordingly. If the detected facial activity strength of the first user is greater than the strength threshold, the system judges that the first user's teeth are occluded and records the current moment as the first moment. When the facial activity strength is later detected to exceed the strength threshold again, the system judges that the first user's teeth have been released and records the current moment as the second moment. The tooth occlusion duration is then the interval between the first moment and the second moment. Following this rule, the system collects a plurality of tooth occlusion durations from the first user.
A single tooth occlusion duration corresponds to a single piece of character information: for example, an occlusion duration of 1 s may correspond to the character A, and a duration of 1.5 s to the character B. The correspondence between occlusion duration and character information may be a factory default or user-defined, and is not limited here. Based on this correspondence, the system obtains the character corresponding to each occlusion duration and arranges the characters in order of the acquisition time of the durations, thereby combining them into the interactive information the user wants to express. The system can send this interactive information to the preset terminal in text form for display, or in audio form for playback, so that a second user of the preset terminal learns what the first user wants to express, realizing interaction between the two users. The system of this embodiment may be the processing system of a mobile terminal that is in signal connection with the first user's TWS headset, obtaining the occlusion durations and processing them into interactive information; it may equally be a cloud processing system or the processing system of the TWS headset itself, which is not limited here.
In this embodiment, by monitoring the tooth occlusion durations while the user wears the TWS headset and parsing them accordingly, the system obtains the interactive information the user wants to express. This is particularly suitable for scenarios where the user is deaf-mute: whenever the user needs to express interactive information, simple tooth occlusion actions suffice, the operation is convenient and fast, and the degree of intelligence and functional diversity of the TWS headset are effectively improved.
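The duration-to-character correspondence described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the duration values, the tolerance window, and the character assignments are all assumptions chosen for demonstration.

```python
# Hypothetical duration-to-character table (values are illustrative,
# not taken from the patent): 1.0 s -> "A", 1.5 s -> "B", 2.0 s -> "C".
DURATION_TO_CHAR = {1.0: "A", 1.5: "B", 2.0: "C"}

def decode_durations(durations, tolerance=0.2):
    """Map each recorded occlusion duration to its character and
    concatenate the characters in acquisition order."""
    chars = []
    for d in durations:
        # Pick the table entry closest to the measured duration,
        # accepting it only when it falls inside the tolerance window.
        nearest = min(DURATION_TO_CHAR, key=lambda ref: abs(ref - d))
        if abs(nearest - d) <= tolerance:
            chars.append(DURATION_TO_CHAR[nearest])
    return "".join(chars)

print(decode_durations([1.05, 1.48, 0.97]))  # -> "ABA"
```

The tolerance window models the fact that a user cannot reproduce a bite duration exactly; durations far from every calibrated value are simply discarded.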
In another embodiment, when the tooth action is a tooth occlusion force, that force can also be derived from the acceleration sensor on the TWS headset. As described above, the system obtains the facial activity strength caused by the first user's occlusion action; the harder the bite, the more intense the facial activity, so the occlusion force and the facial activity strength are positively correlated. The system therefore multiplies the facial activity strength by a preset force coefficient, set by the developer, to obtain the tooth occlusion force. After collecting a plurality of tooth occlusion forces from the first user, the system parses the interactive information the first user wants to express according to a preset correspondence between occlusion force and character information.
Further, the tooth action includes a plurality of tooth occlusion durations, and the step of obtaining corresponding interaction information according to the tooth action analysis includes:
S201, Morse code symbols respectively corresponding to the tooth occlusion durations are obtained by matching against a pre-constructed mapping relation table of tooth occlusion duration and Morse code symbol, wherein the table comprises a plurality of groups of tooth occlusion durations and Morse code symbols, and a single tooth occlusion duration corresponds to a single Morse code symbol;
s202, matching characters corresponding to the Morse code symbols from a Morse code table;
and S203, arranging the characters according to the acquisition time of the tooth occlusion duration corresponding to each character in sequence to obtain the interactive information.
In this embodiment, a mapping relation table of tooth occlusion duration and Morse code symbol is stored in the system's internal database; the table comprises a plurality of groups of tooth occlusion durations and Morse code symbols, with a single duration corresponding to a single symbol. Because Morse code uses only two symbols, "·" and "-", the correspondence between occlusion duration and symbol is simple, and the corresponding interactive information can be obtained conveniently and quickly. Specifically, the system first matches each occlusion duration against the pre-constructed mapping relation table to obtain its Morse code symbol. Then the system looks up the Morse code table (which is common knowledge and not repeated here) to obtain the character corresponding to each group of symbols, for example "-·" corresponding to the character "N". Finally, the system arranges the characters in order of the acquisition time of the corresponding occlusion durations, thereby obtaining the interactive information the user wants to express.
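The two-stage lookup of steps S201 to S203 can be sketched as below. The short/long bite threshold and the grouping of bites into letters are assumptions for illustration; the Morse table entries shown are the standard international assignments.

```python
# Assumed calibration: a bite shorter than 0.5 s encodes "·", otherwise "-".
SHORT_BITE_MAX = 0.5

# Excerpt of the standard International Morse code table.
MORSE_TO_CHAR = {
    "·-": "A", "-···": "B", "-·": "N", "···": "S", "-": "T", "---": "O",
}

def bites_to_text(letter_groups):
    """letter_groups: list of lists of occlusion durations, one inner
    list per letter, ordered by acquisition time (step S203)."""
    text = []
    for group in letter_groups:
        # Step S201: map each duration to its Morse symbol.
        symbols = "".join("·" if d < SHORT_BITE_MAX else "-" for d in group)
        # Step S202: look the symbol group up in the Morse table.
        text.append(MORSE_TO_CHAR.get(symbols, "?"))
    return "".join(text)

# "SOS": three short bites, three long bites, three short bites.
print(bites_to_text([[0.2, 0.2, 0.2], [0.9, 0.8, 1.0], [0.2, 0.3, 0.2]]))
# -> "SOS"
```

How consecutive bites are segmented into letters (e.g. by a pause longer than some gap threshold) is not specified in this passage, so the sketch takes pre-grouped letters as input.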
Further, the morse code symbols include "·" and "-", and the step of obtaining a tooth engagement duration of the first user through the acceleration sensor when the first user wears the TWS headset is preceded by:
s4, receiving a first tooth biting duration and a second tooth biting duration input by a user;
and S5, associating the first tooth occlusion duration with the Morse code symbol "·" and the second tooth occlusion duration with the Morse code symbol "-", so as to generate the mapping relation table of tooth occlusion duration and Morse code symbol.
In this embodiment, the user may customize the correspondence between tooth occlusion duration and Morse code symbol when initializing the TWS headset, thereby constructing the corresponding mapping relation table. Specifically, the Morse code symbols comprise the two symbols "·" and "-". During initialization, the system prompts the user to perform two tooth occlusion actions, explaining that the duration of the first occlusion will correspond to the symbol "·" and the duration of the second to the symbol "-". After receiving the first and second occlusion durations produced by the user's actions as prompted, the system associates the first duration with "·" and the second with "-", generating the mapping relation table of tooth occlusion duration and Morse code symbol, so that the interactive information expressed by the user's occlusion actions can subsequently be parsed according to this table.
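Steps S4 and S5 amount to a small calibration routine. The sketch below assumes the two user-entered durations arrive as floats and classifies later bites by nearest calibrated duration; all names and values are illustrative.

```python
def build_mapping_table(first_duration, second_duration):
    """Associate the user's first calibrated duration with "·" and the
    second with "-", as in steps S4-S5."""
    return {first_duration: "·", second_duration: "-"}

def classify_bite(duration, table):
    """Map a measured occlusion duration to the Morse symbol whose
    calibrated duration is closest."""
    nearest = min(table, key=lambda ref: abs(ref - duration))
    return table[nearest]

table = build_mapping_table(0.3, 1.0)   # user calibrates: short 0.3 s, long 1.0 s
print(classify_bite(0.35, table))       # -> "·"
print(classify_bite(0.9, table))        # -> "-"
```

Nearest-neighbour classification is one plausible reading of "matching" a duration against the table; the patent does not specify the matching rule.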
Further, before the step of sending the interactive information to a preset terminal for displaying or playing, the method includes:
s6, acquiring the language type of the preset terminal;
s7, judging whether the current language type of the interactive information is the same as the used language type;
and S8, if the current language type of the interactive information is different from the used language type, converting the interactive information into character information corresponding to the used language type.
In this embodiment, the system obtains the used language type of the preset terminal, which may be the terminal's default language type (for example, Chinese or Japanese) or the language type the second user uses daily. The system judges whether the language type of the parsed interactive information is the same as the language type used by the preset terminal. If they differ, the interactive information is converted into character information of the used language type (for example, English converted into Chinese), and the converted information is then sent to the preset terminal, so that the second user can understand it without manual conversion.
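The conditional conversion of steps S6 to S8 can be sketched as follows. The lookup-table translator is purely illustrative; a real system would call a translation service, which the patent does not name.

```python
# Purely illustrative translation stub; a real system would invoke a
# translation service rather than a hand-written lookup table.
TRANSLATIONS = {("en", "zh"): {"HELLO": "你好"}}

def convert_if_needed(text, current_lang, terminal_lang):
    """Convert the interactive information to the terminal's language
    type only when the two language types differ (steps S6-S8)."""
    if current_lang == terminal_lang:
        return text
    table = TRANSLATIONS.get((current_lang, terminal_lang), {})
    return table.get(text, text)  # fall back to the original text

print(convert_if_needed("HELLO", "en", "zh"))  # -> "你好"
print(convert_if_needed("HELLO", "en", "en"))  # -> "HELLO"
```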
Further, the TWS headset is deployed with an acceleration sensor, and the step of acquiring the tooth movements of the first user by the TWS headset comprises:
s101, acquiring acceleration generated during the facial activity of the first user through the acceleration sensor;
s102, calculating to obtain the face activity strength according to the acceleration;
s103, judging whether the face activity strength is greater than a strength threshold value;
s104, if the face activity strength is greater than a strength threshold value, judging that the teeth of the first user are occluded, and recording that the current moment is a first moment;
s105, when the fact that the face activity strength is larger than the strength threshold value is detected again, determining that the teeth of the first user are loosened, and recording the current moment as a second moment;
and S106, calculating the tooth occlusion time length according to the first time and the second time.
In this embodiment, the acceleration sensor is preferably a three-axis acceleration sensor, which senses the differential capacitance generated by motion and is mainly used for detecting information such as the state and shaking of the headset. Because occluding the teeth causes a force to act between the TWS headset and the ear cavity, the three-axis acceleration sensor detects the resulting shaking of the headset and outputs an acceleration value, from which the system calculates the strength of one tooth occlusion through Newton's second law, F = ma. Specifically, the system acquires, through the acceleration sensor, the acceleration generated by the first user's facial activity, calculates the facial activity strength from it, and compares that strength with the strength threshold. If the facial activity strength is smaller than the threshold, the facial activity was not produced by the first user occluding or releasing the teeth, and the parsing of interactive information is not triggered. If the strength is greater than the threshold, the system judges that the first user's teeth are occluded and records the current moment as the first moment. The system then continues to monitor the user's facial activity in real time through the acceleration sensor; when the facial activity strength is detected to exceed the threshold again, the first user's teeth are judged to have been released and the current moment is recorded as the second moment. The tooth occlusion duration is the difference between the second moment and the first moment.
During the period the TWS headset is worn, the system monitors the user's tooth occlusion and release actions according to the above rule, thereby obtaining a plurality of tooth occlusion durations representing interactive information.
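The detection loop of steps S101 to S106 can be sketched as a small state machine over acceleration samples. The strength computation (simply F = m·a with an assumed effective mass) and the threshold value are illustrative; the patent leaves both to calibration.

```python
EFFECTIVE_MASS = 0.01       # kg, assumed; the patent derives strength via F = ma
STRENGTH_THRESHOLD = 0.05   # newtons, assumed calibration value

def occlusion_durations(samples):
    """samples: list of (timestamp_s, acceleration_m_s2) pairs.
    Each threshold crossing alternately marks a bite (first moment)
    and a release (second moment); their difference is one duration."""
    durations = []
    bite_start = None
    for t, a in samples:
        strength = EFFECTIVE_MASS * a            # Newton's second law, F = ma
        if strength > STRENGTH_THRESHOLD:
            if bite_start is None:
                bite_start = t                   # first moment: teeth occlude
            else:
                durations.append(t - bite_start)  # second moment: teeth release
                bite_start = None
    return durations

samples = [(0.0, 1.0), (0.5, 9.0), (0.7, 1.0), (1.5, 8.5), (1.7, 1.0)]
print(occlusion_durations(samples))  # -> [1.0]
```

Sub-threshold samples (ordinary facial activity) are ignored, matching the passage's rule that weak activity does not trigger parsing.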
Further, the step of capturing the dental actions of the first user by the TWS headset is preceded by:
s9, identifying whether the TWS headset starts an interactive mode currently;
and S10, if the interaction mode of the TWS earphone is started currently, generating an acquisition instruction, wherein the acquisition instruction is used for controlling the TWS earphone to acquire the tooth action of the first user.
In this embodiment, the TWS headset has a standard mode and an interactive mode. The standard mode offers only conventional functions such as audio playback and calls, while the interactive mode adds the function of monitoring and parsing the user's tooth actions to obtain interactive information, so the standard mode consumes less energy than the interactive mode. When the user wears and turns on the TWS headset, the system identifies whether the interactive mode is currently enabled. If not, there is no need to parse interactive information or to power the corresponding components (such as the acceleration sensor), saving energy. If the interactive mode is enabled, the system generates an acquisition instruction, which controls the TWS headset to collect the first user's tooth actions. The system also needs to establish a signal connection channel with the preset terminal to facilitate transmission of the interactive information.
Further, the step of sending the interactive information to a preset terminal for displaying or playing includes:
s301, judging whether the second user is hearing-impaired;
s302, if the second user is hearing-impaired, sending the interactive information to the preset terminal in text form for display;
and S303, if the second user is not hearing-impaired, sending the interactive information to the preset terminal in audio form for playing.
In this embodiment, the system may identify whether the second user is hearing-impaired from the personal case information entered by the second user; alternatively, after obtaining operation authority on the preset terminal, the system collects the second user's daily operations on the terminal and judges hearing impairment from them. If the second user's personal case information contains a record of hearing impairment, or the second user's daily operations on the preset terminal contain no history of audio playback, the system judges that the second user is hearing-impaired and sends the interactive information to the preset terminal in text form for display, so that the second user can clearly read the interactive information. Conversely, if the personal case information contains no record of hearing impairment, or there is a history of audio playback in the daily operations on the preset terminal, the system judges that the second user is not hearing-impaired and sends the interactive information to the preset terminal in audio form for playing, letting the user grasp the interactive information more directly.
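The two decision sources described above (case record, audio playback history) can be combined as in the following sketch; the field name and return values are assumptions, not from the patent.

```python
def delivery_form(case_info, has_audio_playback_history):
    """Choose how to deliver the interactive information to the second user.

    case_info: dict of personal case information entered by the second user.
    has_audio_playback_history: whether the preset terminal's daily-use
    operations include any audio playback.
    Returns "text" for a hearing-impaired user, "audio" otherwise."""
    hearing_impaired = (case_info.get("hearing_impaired", False)
                        or not has_audio_playback_history)
    return "text" if hearing_impaired else "audio"
```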
Referring to fig. 2, an embodiment of the present application further provides a dental interaction device based on a TWS headset, including:
the first acquisition module 1 is used for acquiring tooth actions of a first user through a TWS headset when the first user wears the TWS headset;
the analysis module 2 is used for obtaining corresponding interaction information according to the tooth action analysis;
and the interaction module 3 is used for sending the interaction information to a preset terminal for displaying or playing.
Further, the tooth action includes a plurality of tooth occlusion durations, and the analysis module 2 includes:
the first matching unit is used for matching and obtaining Morse code symbols corresponding to the tooth occlusion durations from a pre-constructed tooth occlusion duration and Morse code symbol mapping relation table, the tooth occlusion duration and Morse code symbol mapping relation table comprises a plurality of groups of tooth occlusion durations and Morse code symbols, and a single tooth occlusion duration corresponds to a single Morse code symbol;
the second matching unit is used for matching characters corresponding to the Morse code symbols from the Morse code table;
and the arranging unit is used for sequentially arranging the characters according to the acquisition time of the tooth occlusion duration corresponding to each character to obtain the interactive information.
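The three units above (match durations to Morse code symbols, look up characters, arrange by acquisition time) amount to a small decoding pipeline. The sketch below is illustrative only; the short/long boundary, the grouping of durations into letters, and the excerpted Morse table are assumptions rather than details from the patent.

```python
DURATION_TO_SYMBOL = {"short": ".", "long": "-"}  # the user-configured mapping table
MORSE_TABLE = {"...": "S", "---": "O", ".-": "A", "-...": "B"}  # excerpt only

def classify(duration, boundary=0.4):
    """Bucket a measured occlusion duration (seconds) as short or long."""
    return "short" if duration < boundary else "long"

def decode(letter_groups):
    """letter_groups: one list of occlusion durations per character,
    already ordered by acquisition time (the arranging unit's job)."""
    chars = []
    for durations in letter_groups:
        symbols = "".join(DURATION_TO_SYMBOL[classify(d)] for d in durations)
        chars.append(MORSE_TABLE.get(symbols, "?"))  # "?" for unknown patterns
    return "".join(chars)
```

With short bites around 0.2 s and long bites around 0.8 s, three groups of three bites decode to `SOS`.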
Further, the Morse code symbols include "·" and "-", and the dental interaction device further includes:
the receiving module 4 is used for receiving a first tooth occlusion duration and a second tooth occlusion duration input by a user;
the first generation module 5 is configured to associate the first tooth occlusion duration with the morse code symbol "·" and associate the second tooth occlusion duration with the morse code symbol "-", so as to generate the tooth occlusion duration and morse code symbol mapping relationship table.
Further, the dental movement interaction device further comprises:
the second obtaining module 6 is used for obtaining the language type of the preset terminal;
the judging module 7 is used for judging whether the current language type of the interactive information is the same as the used language type;
a conversion module 8, configured to convert the interactive information into character information corresponding to the used language type if the current language type of the interactive information is different from the used language type.
Further, the TWS headset is disposed with an acceleration sensor, and the first obtaining module 1 includes:
the acquisition unit is used for acquiring acceleration generated when the face of the first user moves through the acceleration sensor;
the first calculation unit is used for calculating the face activity strength according to the acceleration;
the first judging unit is used for judging whether the face activity strength is greater than a strength threshold value;
the first determination unit is used for determining that the first user's teeth are occluded and recording the current moment as the first moment if the facial activity strength is greater than the strength threshold;
the second determination unit is used for determining that the first user's teeth are released and recording the current moment as the second moment when the facial activity strength is detected to be greater than the strength threshold again;
and the second calculating unit is used for calculating the tooth occlusion time length according to the first time and the second time.
Further, the dental movement interaction device further comprises:
an identifying module 9, configured to identify whether the TWS headset currently starts an interactive mode;
a second generating module 10, configured to generate an obtaining instruction if the TWS headset currently starts an interactive mode, where the obtaining instruction is used to control the TWS headset to collect tooth motions of the first user.
Further, the interaction module 3 includes:
a second judging unit, configured to judge whether the second user is hearing-impaired;
the first sending unit is used for sending the interactive information to the preset terminal in text form for display if the second user is hearing-impaired;
and the second sending unit is used for sending the interactive information to the preset terminal in audio form for playing if the second user is not hearing-impaired.
In this embodiment, each module and unit of the dental interaction device are used to correspondingly execute each step in the above-mentioned dental interaction method based on the TWS headset, and the detailed implementation process thereof is not described in detail herein.
This embodiment provides a tooth movement interaction device based on a TWS earphone. When a first user wears the TWS earphone, the earphone collects the first user's tooth movements, parses them into the corresponding interactive information, and sends that information to a preset terminal for display or playing, thereby realizing interaction between the first user and a second user of the preset terminal. Because the TWS earphone obtains the information the user wants to express merely by monitoring and parsing the user's tooth movements, the scheme is particularly suitable for users with hearing or speech impairments: expressing interactive information requires only simple tooth movements, which is convenient and fast, and effectively improves the intelligence and functional diversity of the TWS earphone.
Referring to fig. 3, an embodiment of the present application further provides a computer device, which may be a server and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computation and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores data such as the strength threshold. The network interface of the computer device communicates with an external terminal through a network connection. The computer program, when executed by the processor, implements the TWS headset-based dental interaction method.
The processor executes the steps of the dental interaction method based on the TWS headset:
s1, collecting tooth actions of the first user through the TWS headset when the first user wears the TWS headset;
s2, analyzing according to the tooth action to obtain corresponding interactive information;
and S3, sending the interactive information to a preset terminal for displaying or playing.
Further, the tooth action includes a plurality of tooth occlusion durations, and the step of obtaining corresponding interaction information according to the tooth action analysis includes:
s201, matching and obtaining Morse code symbols corresponding to the tooth occlusion durations from a pre-constructed tooth occlusion duration and Morse code symbol mapping relation table, wherein the tooth occlusion duration and Morse code symbol mapping relation table comprises a plurality of groups of tooth occlusion durations and Morse code symbols, and a single tooth occlusion duration corresponds to a single Morse code symbol;
s202, matching characters corresponding to the Morse code symbols from a Morse code table;
and S203, arranging the characters according to the acquisition time of the tooth occlusion duration corresponding to each character in sequence to obtain the interactive information.
Further, the Morse code symbols include "·" and "-", and the step of acquiring the plurality of tooth occlusion durations of the first user through the acceleration sensor while the first user wears the TWS headset is preceded by:
s4, receiving a first tooth biting duration and a second tooth biting duration input by a user;
and S5, associating the first tooth occlusion time length with the Morse code symbol "·", and associating the second tooth occlusion time length with the Morse code symbol "-", so as to generate the tooth occlusion duration and Morse code symbol mapping relation table.
Further, before the step of sending the interactive information to a preset terminal for displaying or playing, the method includes:
s6, acquiring the language type of the preset terminal;
s7, judging whether the current language type of the interactive information is the same as the used language type;
and S8, if the current language type of the interactive information is different from the used language type, converting the interactive information into character information corresponding to the used language type.
Further, the TWS headset is deployed with an acceleration sensor, and the step of acquiring the tooth movements of the first user by the TWS headset comprises:
s101, acquiring acceleration generated during the facial activity of the first user through the acceleration sensor;
s102, calculating to obtain the face activity strength according to the acceleration;
s103, judging whether the face activity strength is greater than a strength threshold value;
s104, if the face activity strength is greater than a strength threshold value, judging that the teeth of the first user are occluded, and recording that the current moment is a first moment;
s105, when the fact that the face activity strength is larger than the strength threshold value is detected again, determining that the teeth of the first user are loosened, and recording the current moment as a second moment;
and S106, calculating the tooth occlusion time length according to the first time and the second time.
Further, the step of capturing the dental actions of the first user by the TWS headset is preceded by:
s9, identifying whether the TWS headset starts an interactive mode currently;
and S10, if the interaction mode of the TWS earphone is started currently, generating an acquisition instruction, wherein the acquisition instruction is used for controlling the TWS earphone to acquire the tooth action of the first user.
Further, the step of sending the interactive information to a preset terminal for displaying or playing includes:
s301, judging whether the second user is hearing-impaired;
s302, if the second user is hearing-impaired, sending the interactive information to the preset terminal in text form for display;
and S303, if the second user is not hearing-impaired, sending the interactive information to the preset terminal in audio form for playing.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a TWS-headset-based dental interaction method, where the TWS-headset-based dental interaction method specifically includes:
s1, collecting tooth actions of the first user through the TWS headset when the first user wears the TWS headset;
s2, analyzing according to the tooth action to obtain corresponding interactive information;
and S3, sending the interactive information to a preset terminal for displaying or playing.
Further, the tooth action includes a plurality of tooth occlusion durations, and the step of obtaining corresponding interaction information according to the tooth action analysis includes:
s201, matching and obtaining Morse code symbols corresponding to the tooth occlusion durations from a pre-constructed tooth occlusion duration and Morse code symbol mapping relation table, wherein the tooth occlusion duration and Morse code symbol mapping relation table comprises a plurality of groups of tooth occlusion durations and Morse code symbols, and a single tooth occlusion duration corresponds to a single Morse code symbol;
s202, matching characters corresponding to the Morse code symbols from a Morse code table;
and S203, arranging the characters according to the acquisition time of the tooth occlusion duration corresponding to each character in sequence to obtain the interactive information.
Further, the Morse code symbols include "·" and "-", and the step of acquiring the plurality of tooth occlusion durations of the first user through the acceleration sensor while the first user wears the TWS headset is preceded by:
s4, receiving a first tooth biting duration and a second tooth biting duration input by a user;
and S5, associating the first tooth occlusion time length with the Morse code symbol "·", and associating the second tooth occlusion time length with the Morse code symbol "-", so as to generate the tooth occlusion duration and Morse code symbol mapping relation table.
Further, before the step of sending the interactive information to a preset terminal for displaying or playing, the method includes:
s6, acquiring the language type of the preset terminal;
s7, judging whether the current language type of the interactive information is the same as the used language type;
and S8, if the current language type of the interactive information is different from the used language type, converting the interactive information into character information corresponding to the used language type.
Further, the TWS headset is deployed with an acceleration sensor, and the step of acquiring the tooth movements of the first user by the TWS headset comprises:
s101, acquiring acceleration generated during the facial activity of the first user through the acceleration sensor;
s102, calculating to obtain the face activity strength according to the acceleration;
s103, judging whether the face activity strength is greater than a strength threshold value;
s104, if the face activity strength is greater than a strength threshold value, judging that the teeth of the first user are occluded, and recording that the current moment is a first moment;
s105, when the fact that the face activity strength is larger than the strength threshold value is detected again, determining that the teeth of the first user are loosened, and recording the current moment as a second moment;
and S106, calculating the tooth occlusion time length according to the first time and the second time.
Further, the step of capturing the dental actions of the first user by the TWS headset is preceded by:
s9, identifying whether the TWS headset starts an interactive mode currently;
and S10, if the interaction mode of the TWS earphone is started currently, generating an acquisition instruction, wherein the acquisition instruction is used for controlling the TWS earphone to acquire the tooth action of the first user.
Further, the step of sending the interactive information to a preset terminal for displaying or playing includes:
s301, judging whether the second user is hearing-impaired;
s302, if the second user is hearing-impaired, sending the interactive information to the preset terminal in text form for display;
and S303, if the second user is not hearing-impaired, sending the interactive information to the preset terminal in audio form for playing.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by instructing the relevant hardware through a computer program, which may be stored on a non-volatile computer-readable storage medium and which, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, apparatus, article, or method. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, apparatus, article, or method that includes the element.
The above description is only for the preferred embodiment of the present application and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (10)

1. A dental movement interaction method based on TWS earphones is characterized by comprising the following steps:
acquiring, by the TWS headset, dental actions of a first user while the first user is wearing the TWS headset;
analyzing according to the tooth action to obtain corresponding interactive information;
and sending the interactive information to a preset terminal for displaying or playing.
2. A TWS headset-based dental movement interaction method according to claim 1, wherein the dental movement includes a plurality of dental occlusion durations, and the step of obtaining corresponding interaction information by parsing according to the dental movement includes:
matching and obtaining Morse code symbols corresponding to the tooth occlusion durations respectively from a pre-constructed tooth occlusion duration and Morse code symbol mapping relation table, wherein the tooth occlusion duration and Morse code symbol mapping relation table comprises a plurality of groups of tooth occlusion durations and Morse code symbols, and a single tooth occlusion duration corresponds to a single Morse code symbol;
matching characters corresponding to the Morse code symbols from a Morse code table;
and arranging the characters according to the acquisition time of the tooth occlusion duration corresponding to each character in sequence to obtain the interactive information.
3. The TWS headset-based dental interaction method according to claim 2, wherein the Morse code symbols comprise "·" and "-", and wherein the step of obtaining the plurality of tooth occlusion durations of the first user through the acceleration sensor while the first user wears the TWS headset is preceded by:
receiving a first tooth occlusion time length and a second tooth occlusion time length input by a user;
and associating the first tooth occlusion time length with the Morse code symbol "·", and associating the second tooth occlusion time length with the Morse code symbol "-", so as to generate the tooth occlusion duration and Morse code symbol mapping relation table.
4. A TWS headset-based dental interaction method according to claim 2, wherein the TWS headset is deployed with an acceleration sensor, and the step of capturing the dental actions of the first user by the TWS headset comprises:
acquiring acceleration generated when the face of the first user moves through the acceleration sensor;
calculating the face activity strength according to the acceleration;
judging whether the face activity strength is greater than a strength threshold value;
if the face activity strength is greater than a strength threshold value, judging that the teeth of the first user are occluded, and recording that the current moment is a first moment;
when the face activity strength is detected to be larger than the strength threshold value again, the tooth of the first user is judged to be loosened, and the current moment is recorded as a second moment;
and calculating the tooth occlusion duration according to the first time and the second time.
5. The TWS headset-based dental interaction method of claim 1, wherein the step of sending the interaction information to a preset terminal for display or playing is preceded by the steps of:
acquiring the language type of the preset terminal;
judging whether the current language type of the interactive information is the same as the used language type;
and if the current language type of the interactive information is different from the used language type, converting the interactive information into character information corresponding to the used language type.
6. A TWS headset-based dental interaction method according to claim 1, wherein the step of capturing the dental actions of the first user by the TWS headset is preceded by:
identifying whether the TWS headset currently starts an interactive mode;
and if the TWS earphone starts the interactive mode currently, generating an acquisition instruction, wherein the acquisition instruction is used for controlling the TWS earphone to acquire the tooth action of the first user.
7. The TWS headset-based dental interaction method of claim 1, wherein the step of sending the interaction information to a preset terminal for display or playing comprises:
determining whether the second user is hearing impaired;
if the second user is hearing-impaired, sending the interactive information to the preset terminal in text form for display;
and if the second user is not hearing-impaired, sending the interactive information to the preset terminal in audio form for playing.
8. A TWS headset-based dental interaction device, comprising:
the first acquisition module is used for acquiring tooth actions of a first user through a TWS earphone when the first user wears the TWS earphone;
the analysis module is used for obtaining corresponding interactive information according to the tooth action analysis;
and the interaction module is used for sending the interaction information to a preset terminal for displaying or playing.
9. A computer device comprising a memory and a processor, the memory having stored therein a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202110138076.0A 2021-02-01 2021-02-01 Tooth movement interaction method and device based on TWS earphone and computer equipment Pending CN113010008A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110138076.0A CN113010008A (en) 2021-02-01 2021-02-01 Tooth movement interaction method and device based on TWS earphone and computer equipment


Publications (1)

Publication Number Publication Date
CN113010008A true CN113010008A (en) 2021-06-22

Family

ID=76385314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110138076.0A Pending CN113010008A (en) 2021-02-01 2021-02-01 Tooth movement interaction method and device based on TWS earphone and computer equipment

Country Status (1)

Country Link
CN (1) CN113010008A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130083940A1 (en) * 2010-05-26 2013-04-04 Korea Advanced Institute Of Science And Technology Bone Conduction Earphone, Headphone and Operation Method of Media Device Using the Same
CN103412640A (en) * 2013-05-16 2013-11-27 胡三清 Device and method for character or command input controlled by teeth
CN104317388A (en) * 2014-09-15 2015-01-28 联想(北京)有限公司 Interaction method and wearable electronic equipment
CN108427962A (en) * 2018-03-01 2018-08-21 阿里巴巴集团控股有限公司 A kind of method, apparatus and equipment of identification
CN111050248A (en) * 2020-01-14 2020-04-21 Oppo广东移动通信有限公司 Wireless earphone and control method thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination