CN111179923B - Audio playing method based on wearable device and wearable device


Info

Publication number: CN111179923B
Application number: CN201911154106.6A
Authority: CN (China)
Prior art keywords: voice interaction instruction, playing, audio, user
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN111179923A (application publication)
Inventor: 王强 (Wang Qiang)
Current assignee: Guangdong Genius Technology Co Ltd
Original assignee: Guangdong Genius Technology Co Ltd
Application filed by Guangdong Genius Technology Co Ltd, with priority to CN201911154106.6A
Publication of CN111179923A (application), followed by grant and publication of CN111179923B

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command
    • G10L 17/00: Speaker identification or verification
    • G10L 17/22: Interactive procedures; man-machine interfaces
    • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/48: Speech or voice analysis techniques specially adapted for particular use
    • G10L 25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working
    • H04N 7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142: Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3141: Constructional details thereof
    • H04N 9/3173: Constructional details wherein the projection device is specially adapted for enhanced portability

Abstract

The smart host included in the wearable device can rotate automatically when erected perpendicular to the horizontal plane. A microphone on the smart host detects whether an externally input voice interaction instruction has been received. When such an instruction is received, the sound source direction corresponding to the instruction is identified, the smart host, while erected perpendicular to the horizontal plane, is controlled to rotate automatically until a directional speaker on the smart host faces that sound source direction, and response audio for the voice interaction instruction is played through the directional speaker. Implementing the embodiments of the present application can improve the user experience.

Description

Audio playing method based on wearable device and wearable device
Technical Field
The present application relates to the technical field of wearable devices, and in particular to an audio playing method based on a wearable device and to the wearable device itself.
Background
With the development of technology, smart watches can realize more and more functions: a user can make video calls, listen to music, and watch videos on one. In practice, when a user makes a video call or enjoys audio and video in a public place, the user often chooses to wear earphones, either to avoid disturbing people nearby or to obtain good immersion. However, wearing earphones for a long time is usually uncomfortable, which degrades the user experience.
Disclosure of Invention
The embodiments of the present application disclose an audio playing method based on a wearable device, and a corresponding wearable device, which help improve the user experience.
A first aspect of the embodiments of the present application discloses an audio playing method based on a wearable device, where a smart host included in the wearable device can rotate automatically when erected perpendicular to the horizontal plane, and the method includes:
detecting, through a microphone on the smart host, whether an externally input voice interaction instruction is received;
if the voice interaction instruction is received, identifying the sound source direction corresponding to the voice interaction instruction;
controlling the smart host, while erected perpendicular to the horizontal plane, to rotate automatically until a directional speaker on the smart host faces the sound source direction; and
playing response audio for the voice interaction instruction through the directional speaker.
As an optional implementation, in the first aspect of the embodiments of the present application, the method further includes:
if the voice interaction instruction is received, extracting the user voiceprint features from the voice interaction instruction;
judging whether the user voiceprint features match preset voiceprint features; and
when the user voiceprint features match the preset voiceprint features, performing the identification of the sound source direction corresponding to the voice interaction instruction.
As an optional implementation, in the first aspect of the embodiments of the present application, after controlling the smart host to rotate until the directional speaker on the smart host faces the sound source direction and before playing the response audio for the voice interaction instruction through the directional speaker, the method further includes:
acquiring the distance value between the sound source and the directional speaker; and
determining the playing volume according to the distance value;
and the playing, through the directional speaker, of the response audio for the voice interaction instruction includes:
playing the response audio for the voice interaction instruction at the playing volume through the directional speaker.
As an optional implementation, in the first aspect of the embodiments of the present application, if the voice interaction instruction is a music playing instruction, before the directional speaker plays the response audio for the voice interaction instruction at the playing volume, the method further includes:
recognizing a captured user image of the sound source to obtain the user's emotion category;
acquiring a music list matching the emotion category from a music library; and
using the audio corresponding to the music list as the response audio for the voice interaction instruction;
and the method further includes:
acquiring a synchronized video of the response audio when a projection request is received; and
performing a projection operation on the synchronized video while playing the response audio.
As an optional implementation, in the first aspect of the embodiments of the present application, if the voice interaction instruction is a word follow-read instruction, before the response audio for the voice interaction instruction is played at the playing volume through the directional speaker, the method includes:
acquiring the words to be followed in reading; and
using the standard pronunciation audio of the words to be followed in reading as the response audio for the voice interaction instruction;
and the method further includes:
capturing the user's follow-read audio while playing the response audio; and
when a follow-read termination instruction is received, comparing the follow-read audio with the response audio to obtain a follow-read score.
A second aspect of the embodiments of the present application discloses a wearable device, where a smart host included in the wearable device can rotate automatically when erected perpendicular to the horizontal plane, and the smart host includes:
a detection unit, configured to detect, through a microphone on the smart host, whether an externally input voice interaction instruction is received;
an acquisition unit, configured to identify the sound source direction corresponding to the voice interaction instruction when the voice interaction instruction is received;
a rotating unit, configured to control the smart host, while erected perpendicular to the horizontal plane, to rotate automatically until a directional speaker on the smart host faces the sound source direction; and
a playing unit, configured to play response audio for the voice interaction instruction through the directional speaker.
As an optional implementation, in the second aspect of the embodiments of the present application, the smart host further includes:
a judging unit, configured to extract the user voiceprint features from the voice interaction instruction when the voice interaction instruction is received, judge whether the user voiceprint features match preset voiceprint features, and, when they match, trigger the acquisition unit to perform the operation of identifying the sound source direction corresponding to the voice interaction instruction.
As an optional implementation, in the second aspect of the embodiments of the present application, the acquisition unit is further configured to acquire the distance value between the sound source and the directional speaker after the rotating unit has rotated the smart host so that the directional speaker faces the sound source, and before the playing unit plays the response audio for the voice interaction instruction through the directional speaker;
the smart host further includes:
a determining unit, configured to determine the playing volume according to the distance value;
and the playing unit plays the response audio for the voice interaction instruction through the directional speaker specifically by:
playing the response audio for the voice interaction instruction at the playing volume through the directional speaker.
As an optional implementation, in the second aspect of the embodiments of the present application, if the voice interaction instruction is a music playing instruction, the smart host further includes:
an image processing unit, configured to recognize the captured user image of the sound source, before the playing unit plays the response audio for the voice interaction instruction at the playing volume through the directional speaker, to obtain the user's emotion category;
the determining unit, further configured to acquire a music list matching the emotion category from a music library and to use the audio corresponding to the music list as the response audio for the voice interaction instruction; and
a projection unit, configured to acquire a synchronized video of the response audio when a projection request is received, and to perform a projection operation on the synchronized video while the response audio is played.
As an optional implementation, in the second aspect of the embodiments of the present application, if the voice interaction instruction is a word follow-read instruction, the acquisition unit is further configured to acquire the words to be followed in reading before the playing unit plays the response audio for the voice interaction instruction at the playing volume through the directional speaker;
the determining unit is further configured to use the standard pronunciation audio of the words to be followed in reading as the response audio for the voice interaction instruction;
and the smart host further includes:
a comparison unit, configured to capture the user's follow-read audio while the response audio is played, and, when a follow-read termination instruction is received, to compare the follow-read audio with the response audio to obtain a follow-read score.
A third aspect of the embodiments of the present application discloses a wearable device, where a smart host included in the wearable device can rotate automatically when erected perpendicular to the horizontal plane, and the smart host includes:
a memory storing executable program code; and
a processor coupled with the memory;
where the processor calls the executable program code stored in the memory to perform the steps of the wearable device-based audio playing method disclosed in the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application discloses a computer-readable storage medium storing computer instructions that, when executed, cause a computer to perform the steps of the wearable device-based audio playing method disclosed in the first aspect of the embodiments of the present application.
Compared with the prior art, the embodiments of the present application have the following beneficial effects:
In the embodiments of the present application, the smart host included in the wearable device can rotate automatically when erected perpendicular to the horizontal plane. A microphone on the smart host detects whether an externally input voice interaction instruction is received; when the instruction is received, the sound source direction corresponding to it is identified, the smart host is controlled to rotate automatically until the directional speaker on the smart host faces that direction, and response audio for the instruction is played through the directional speaker. Thus, when a user makes a video call or enjoys audio and video with the wearable device, the device automatically adjusts the orientation of the directional speaker and plays the audio only toward the designated user. This not only improves the user's sense of immersion but also avoids the ear discomfort caused by wearing earphones continuously for a long time, which helps improve the user experience.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. The drawings described here cover only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a wearable device according to an embodiment of the present invention when a smart host is erected to be perpendicular to a horizontal plane;
fig. 2 is a schematic structural diagram of the wearable device shown in fig. 1 after the smart host is rotated by 90 degrees;
fig. 3 is a schematic flowchart of an audio playing method based on a wearable device disclosed in an embodiment of the present application;
fig. 4 is a schematic flowchart of another wearable device-based audio playing method disclosed in an embodiment of the present application;
fig. 5 is a schematic flowchart of another wearable device-based audio playing method disclosed in the embodiment of the present application;
fig. 6 is a modular schematic diagram of a wearable device disclosed in an embodiment of the present application;
fig. 7 is a modular schematic diagram of another wearable device disclosed in embodiments of the present application;
fig. 8 is a modular schematic diagram of yet another wearable device disclosed in embodiments of the present application;
fig. 9 is a modular schematic diagram of yet another wearable device disclosed in embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
It should be noted that the terms "comprises" and "comprising", and any variations thereof, in the embodiments of the present application are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
The embodiments of the present application disclose an audio playing method based on a wearable device, and a wearable device, which help improve the user experience. Detailed descriptions follow with reference to the accompanying drawings.
To better understand the audio playing method disclosed in the embodiments of the present application, the wearable device itself is described first. Referring to fig. 1 and fig. 2, the wearable device may consist of a support 10, a smart host 20, and a rotating component 30, where fig. 1 is a schematic structural diagram of the wearable device with the smart host erected perpendicular to the horizontal plane, and fig. 2 is a schematic structural diagram of the wearable device shown in fig. 1 after the smart host has been rotated by 90 degrees.
The wearable device-based audio playing method disclosed in the embodiments of the present application is described in detail below.
Example one
Referring to fig. 3, fig. 3 is a schematic flowchart of an audio playing method based on a wearable device according to an embodiment of the present application. The smart host included in the wearable device can rotate automatically when erected perpendicular to the horizontal plane, as shown in fig. 1 and fig. 2. As shown in fig. 3, the method may include the following steps:
301. Detect, through a microphone on the smart host of the wearable device, whether an externally input voice interaction instruction is received; if so, perform steps 302 to 304; if not, end the flow.
302. Identify the sound source direction corresponding to the voice interaction instruction.
In the embodiments of the present application, the smart host of the wearable device may carry multiple microphones that form a microphone array, and the wearable device is considered to have received an externally input voice interaction instruction when every microphone in the array detects it. On this basis, identifying the sound source direction corresponding to the voice interaction instruction may include: acquiring the time-delay information with which the microphone array received the externally input voice interaction instruction, and calculating the sound source direction corresponding to the instruction from that time-delay information.
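The patent leaves the delay-based calculation unspecified. The following is a minimal far-field time-difference-of-arrival (TDOA) sketch in Python, given purely as an illustration under stated assumptions: a single pair of microphones at a known spacing, plain cross-correlation for the delay estimate, and function names invented for this example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature


def estimate_delay(sig_a, sig_b, sample_rate):
    """Estimate the relative arrival delay (seconds) between two
    microphone signals from the cross-correlation peak; the sign of
    the lag encodes which microphone heard the sound first."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_b) - 1)
    return lag_samples / sample_rate


def estimate_azimuth(sig_a, sig_b, mic_spacing_m, sample_rate):
    """Far-field direction of arrival, as the angle (degrees) between
    the source and the axis joining the two microphones."""
    tau = estimate_delay(sig_a, sig_b, sample_rate)
    # The physically possible delay is bounded by spacing / c;
    # clamp before arccos to stay numerically safe.
    cos_theta = np.clip(tau * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```

A two-microphone pair only resolves the angle up to a front-back ambiguity; a real array with three or more microphones, as this embodiment implies, would combine several pairwise delays to obtain an unambiguous bearing.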
In the embodiments of the present application, when the detection result in step 301 is yes, the following steps may also be performed:
extracting the user voiceprint features from the voice interaction instruction;
judging whether the user voiceprint features match preset voiceprint features; and
when the user voiceprint features match the preset voiceprint features, continuing with step 302.
In the embodiments of the present application, verifying the legality of the user identity through the user's voiceprint features helps keep the wearable device private and effectively prevents the wearable device from being used illegally.
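The patent does not say how the voiceprint match is computed. One common approach, shown below purely as an illustrative assumption, is to reduce each utterance to a fixed-length speaker embedding (the extraction model is not shown) and compare it with the enrolled template by cosine similarity against a tuned threshold.

```python
import numpy as np

MATCH_THRESHOLD = 0.75  # assumed value; tuned per embedding model


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def voiceprint_matches(instruction_embedding: np.ndarray,
                       enrolled_embedding: np.ndarray) -> bool:
    """Compare the speaker embedding of the received instruction
    against the owner's preset voiceprint template."""
    return cosine_similarity(instruction_embedding,
                             enrolled_embedding) >= MATCH_THRESHOLD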
Optionally, in the embodiments of the present application, if the user voiceprint features do not match the preset voiceprint features, the following steps may also be performed:
capturing a user image and sending the user image to a terminal device associated with the wearable device;
detecting whether an authorization instruction sent by the terminal device is received; and
if the authorization instruction is received, continuing with step 302.
In this way, when the user's voiceprint verification fails, the user's legality can be further verified by sending the user image to the terminal device associated with the wearable device. Legal users are therefore not limited to the user corresponding to the preset voiceprint features: the set of legal users of the wearable device can be expanded on the spot through terminal-device authorization, which improves the flexibility of the wearable device.
303. Control the smart host, while erected perpendicular to the horizontal plane, to rotate automatically until the directional speaker on the smart host faces the sound source direction.
304. Play response audio for the voice interaction instruction through the directional speaker.
Implementing the above method improves the user's sense of immersion and avoids the ear discomfort caused by wearing earphones continuously for a long time, which helps improve the user experience; it also effectively prevents the wearable device from being used illegally, allows its set of legal users to be expanded on the spot, and improves its flexibility.
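Putting steps 301 to 304 together, the control flow of this embodiment might look like the sketch below. The `host` object and every method on it are hypothetical stand-ins for the device's microphone array, voiceprint check, rotation control, and directional speaker; none of these names come from the patent.

```python
def handle_voice_interaction(host):
    """Steps 301-304: detect, verify, localize, rotate, then play."""
    instruction = host.microphone_array.listen()
    if instruction is None:
        return  # no externally input voice interaction instruction

    if not host.voiceprint_matches(instruction.audio):
        # Optional fallback: send a user image to the associated
        # terminal device and wait for an authorization instruction.
        if not host.request_terminal_authorization():
            return

    azimuth = host.locate_sound_source(instruction.audio)  # step 302
    host.rotate_speaker_toward(azimuth)                    # step 303
    host.directional_speaker.play(
        host.response_audio_for(instruction))              # step 304
```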
Example two
Referring to fig. 4, fig. 4 is a schematic flowchart of another wearable device-based audio playing method disclosed in the embodiments of the present application. As shown in fig. 4, the method may include the following steps:
401. Detect, through a microphone on the smart host of the wearable device, whether an externally input voice interaction instruction is received; if so, perform steps 402 to 406; if not, end the flow.
402. Identify the sound source direction corresponding to the voice interaction instruction.
403. Control the smart host, while erected perpendicular to the horizontal plane, to rotate automatically until the directional speaker on the smart host faces the sound source direction.
For detailed descriptions of steps 401 to 403, refer to the descriptions of steps 301 to 303 in the first embodiment; they are not repeated here.
404. Acquire the distance value between the sound source and the directional speaker.
405. Determine the playing volume according to the distance value.
406. Play the response audio for the voice interaction instruction at the playing volume through the directional speaker.
By performing steps 404 to 406, the playing volume of the response audio is determined from the distance between the sound source and the directional speaker, so the user does not need to adjust the volume manually, which further improves the user experience.
As an optional implementation, in this embodiment, the following steps may also be performed after step 404:
judging whether the current battery level of the wearable device is greater than a preset level;
when the current battery level of the wearable device is greater than the preset level, continuing with step 405; and
when the current battery level of the wearable device is less than or equal to the preset level, judging whether the distance value is greater than a preset distance value; if so, outputting prompt information asking the user to move closer to the wearable device, and, once the distance between the user and the wearable device is detected to be less than or equal to the preset distance value, determining the preset volume corresponding to the preset distance value as the playing volume.
In this way, the current battery level of the wearable device becomes one of the factors that influence the playing volume: when the battery level is less than or equal to the preset level, the device prompts the user to move closer, and once the user is within the preset distance, the corresponding preset volume is used. This effectively reduces the power consumption of the wearable device and extends its standby time.
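Steps 404 and 405 plus the battery-aware branch can be condensed into one decision function. The linear distance-to-volume mapping and every threshold below are assumptions made for illustration; the patent only states that the volume is determined from the distance and that a preset volume applies once a low-battery user moves close enough.

```python
def determine_play_volume(distance_m: float, battery_pct: float,
                          preset_battery_pct: float = 20.0,
                          preset_distance_m: float = 1.0,
                          preset_volume_pct: float = 30.0,
                          max_volume_pct: float = 100.0):
    """Return a volume percentage, or None when the caller should
    prompt the user to move closer to the wearable device."""
    if battery_pct <= preset_battery_pct:
        if distance_m > preset_distance_m:
            return None  # output the "please come closer" prompt
        return preset_volume_pct  # preset volume for the preset distance
    # Normal path: farther listener, louder playback, capped at maximum.
    scaled = preset_volume_pct * (distance_m / preset_distance_m)
    return min(scaled, max_volume_pct)
```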
Implementing the above method improves the user's sense of immersion, avoids the ear discomfort caused by wearing earphones continuously for a long time, and helps improve the user experience; it also effectively prevents the wearable device from being used illegally, allows its set of legal users to be expanded on the spot, improves its flexibility, effectively reduces its power consumption, and extends its standby time.
Example three
Referring to fig. 5, fig. 5 is a schematic flowchart of yet another wearable device-based audio playing method disclosed in the embodiments of the present application. As shown in fig. 5, the method may include the following steps:
501. Detect, through a microphone on the smart host of the wearable device, whether an externally input voice interaction instruction is received; if so, perform steps 502 to 511; if not, end the flow.
502. Identify the sound source direction corresponding to the voice interaction instruction.
503. Control the smart host, while erected perpendicular to the horizontal plane, to rotate automatically until the directional speaker on the smart host faces the sound source direction.
504. If the voice interaction instruction is a music playing instruction, recognize the captured user image of the sound source to obtain the user's emotion category.
505. Acquire a music list matching the user's emotion category from the music library.
506. Use the audio corresponding to the music list as the response audio for the voice interaction instruction.
507. Acquire the distance value between the sound source and the directional speaker.
508. Determine the playing volume according to the distance value.
For detailed descriptions of steps 501 to 503 and steps 507 to 508, refer to the descriptions of steps 401 to 405 in the second embodiment; they are not repeated here.
509. When a projection request is received, acquire the synchronized video of the response audio.
510. Play the response audio for the voice interaction instruction at the playing volume through the directional speaker.
511. Perform a projection operation on the synchronized video of the response audio while the response audio is played.
In the embodiments of the present application, performing steps 504 to 506 lets the device recognize the user's emotion when the voice interaction instruction is a music playing instruction, so that music matching that emotion is played, and performing steps 509 to 511 projects the synchronized video corresponding to the music while the music is played, which makes up for the small display screen of the wearable device and gives the user a better visual experience. Together, these steps further improve the user experience.
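A minimal sketch of steps 504 to 506 follows, assuming an emotion recognizer (not shown) has already mapped the user image to a category label; the category names and the library layout are invented for illustration.

```python
# Hypothetical music library keyed by emotion category.
MUSIC_LIBRARY = {
    "happy": ["upbeat_track_1.mp3", "upbeat_track_2.mp3"],
    "sad": ["gentle_track_1.mp3"],
    "calm": ["ambient_track_1.mp3"],
}


def response_audio_for_emotion(emotion_category: str) -> list:
    """Steps 505-506: fetch the music list matching the recognized
    emotion category and treat its audio as the response audio."""
    return MUSIC_LIBRARY.get(emotion_category, MUSIC_LIBRARY["calm"])
```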
Performing a projection operation on the synchronized video of the response audio while playing the response audio may include:
detecting, while playing the response audio, whether the wearable device has a communication connection to a large-screen display device; and
if the wearable device is connected to a large-screen display device, sending the synchronized video of the response audio to the large-screen display device so that it is displayed there.
Alternatively, it may include:
detecting, while the response audio is played, whether a gesture input by the user indicating a projection-surface position has been captured;
if such a gesture is captured, adjusting the projection angle of the projection apparatus provided on the smart host according to the gesture so that the projection surface corresponding to that angle matches the indicated position; and
projecting the synchronized video corresponding to the response audio to that position.
Further, when a zoom gesture by the user toward the projected picture is detected, the projected picture can be zoomed according to the gesture.
In this way, the synchronized video of the response audio can be output on a large-screen display device connected to the wearable device, or projected onto any plane the user chooses, realizing flexible projection of the synchronized video of the response audio.
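The two projection modes above amount to a routing decision. The sketch below captures it with hypothetical device APIs (connected display, gesture detector, projector); none of these interfaces are specified by the patent.

```python
def project_sync_video(host, video):
    """Prefer a connected large-screen display; otherwise project onto
    the surface indicated by the user's pointing gesture."""
    if host.connected_large_screen is not None:
        host.connected_large_screen.show(video)
        return
    gesture = host.camera.detect_pointing_gesture()
    if gesture is not None:
        # Steer the projector so its projection surface matches
        # the position the gesture indicates, then project.
        host.projector.aim_at(gesture.target_position)
        host.projector.project(video)
```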
Further optionally, after the projection angle of the projection apparatus on the smart host has been adjusted according to the gesture, the following steps may also be performed:
detecting whether the ambient brightness at the projection-surface position is greater than a preset brightness;
if it is less than or equal to the preset brightness, proceeding to project the synchronized video corresponding to the response audio to the projection position; and
if it is greater than the preset brightness, lowering the ambient brightness at the projection-surface position below the preset brightness by communicating with a curtain control apparatus and/or the lighting equipment of the environment where the projection surface is located, and then projecting the synchronized video corresponding to the response audio to the projection position.
In this way, when the ambient brightness at the projection-surface position is greater than the preset brightness, the device adjusts it, ensuring that the user sees a clear projected picture.
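The brightness branch can be summarized as the guard below. The lux threshold and the curtain/lighting controller interfaces are assumptions; the patent only requires lowering the ambient brightness below a preset value before projecting.

```python
def project_with_brightness_guard(host, video, position,
                                  preset_brightness_lux: float = 150.0):
    """Dim the environment at the projection surface when it is too
    bright, then project the synchronized video there."""
    if host.ambient_brightness_at(position) > preset_brightness_lux:
        host.curtain_controller.close()  # if such a device is paired
        host.lighting_controller.dim_below(preset_brightness_lux)
    host.projector.project(video, position)
```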
In this embodiment, if the voice interaction instruction is a word follow-read instruction, as an optional implementation, the following steps may also be performed after step 503:
acquiring the words to be followed in reading; and
using the standard pronunciation audio of the words to be followed in reading as the response audio for the voice interaction instruction.
Acquiring the words to be followed in reading may include:
sending indication information requesting the words to a parent terminal in communication connection with the wearable device, and receiving the words fed back by the parent terminal;
or
searching a word library for the target study period to which the current time belongs, where the word library stores several preset time periods and the word set corresponding to each of them, and using the words in the word set of the target study period as the words to be followed in reading;
or
performing character recognition on a captured book image, when such an image is detected, to obtain the words to be followed in reading.
In this way, the words to be followed in reading can be arranged on the spot by a parent, preset for the student, or captured immediately from a book, providing a flexible way to acquire them.
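Of the three acquisition modes, the preset-period lookup is the most mechanical; a minimal sketch follows, with the word library keyed by (start hour, end hour) study periods, a structure assumed purely for illustration.

```python
from datetime import datetime

# Hypothetical word library: study periods mapped to word sets.
WORD_LIBRARY = {
    (7, 9): ["apple", "banana", "orange"],
    (19, 21): ["river", "mountain", "forest"],
}


def words_for_current_period(now=None):
    """Find the target study period containing the current time and
    return its word set; an empty list means no period matches."""
    now = now or datetime.now()
    for (start_hour, end_hour), words in WORD_LIBRARY.items():
        if start_hour <= now.hour < end_hour:
            return words
    return []
```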
Further, the following steps may be performed after step 508:
capturing the user's follow-read audio while playing the response audio; and
when a follow-read termination instruction is received, comparing the follow-read audio with the response audio to obtain a follow-read score.
In this way, when the voice interaction instruction is a word follow-read instruction, the user's follow-read audio is scored, which helps motivate students.
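The patent does not define how the follow-read audio is compared with the response audio. One crude stand-in, assuming both recordings have already been transcribed to text by an upstream speech recognizer, is a similarity ratio mapped to a 0-100 score:

```python
from difflib import SequenceMatcher


def follow_read_score(follow_read_text: str, standard_text: str) -> int:
    """Score the user's follow-read against the standard pronunciation
    transcript on a 0-100 scale (higher means closer)."""
    ratio = SequenceMatcher(None,
                            follow_read_text.lower(),
                            standard_text.lower()).ratio()
    return round(ratio * 100)
```

A production system would more likely compare the audio acoustically (for example, per-phoneme alignment), but that choice is not specified by the source.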
Implementing the above method improves the user's sense of immersion, avoids the ear discomfort caused by wearing earphones continuously for a long time, and helps improve the user experience; it also effectively prevents the wearable device from being used illegally, allows its set of legal users to be expanded on the spot, improves its flexibility, effectively reduces its power consumption, and extends its standby time. In addition, intelligent music recommendation and synchronized video projection further improve the experience, the synchronized video of the response audio can be projected flexibly, the user is guaranteed a clear projected picture, the words to be followed in reading can be acquired flexibly, and students are motivated.
Example four
Referring to fig. 6, fig. 6 is a modular schematic diagram of a wearable device disclosed in an embodiment of the present application. The smart host included in the wearable device can rotate automatically when erected perpendicular to the horizontal plane, as shown in fig. 1 and fig. 2, and may include:
a detection unit 601, configured to detect, through a microphone on the smart host of the wearable device, whether an externally input voice interaction instruction is received;
an acquisition unit 602, configured to identify the sound source direction corresponding to the voice interaction instruction when the voice interaction instruction is received.
In the embodiments of the present application, refer to the first embodiment for the description of the microphones on the smart host; it is not repeated here. The acquisition unit 602 may identify the sound source direction specifically by acquiring the time-delay information with which the microphone array received the externally input voice interaction instruction, and calculating the sound source direction corresponding to the instruction from that time-delay information.
In this embodiment of the application, the smart host may further include:
a determining unit, configured to, when the voice interaction instruction is received, extract a user voiceprint feature of the voice interaction instruction, determine whether the user voiceprint feature matches a preset voiceprint feature, and, when the user voiceprint feature matches the preset voiceprint feature, trigger the obtaining unit 602 to perform the operation of identifying the sound source location corresponding to the voice interaction instruction.
In the embodiment of the application, the judging unit utilizes the voiceprint characteristics of the user to carry out validity check on the user identity, so that the privatization degree of the wearable equipment is favorably improved, and the wearable equipment can be effectively prevented from being illegally used.
Optionally, in this embodiment of the application, the determining unit may be further configured to acquire a user image when the voiceprint feature of the user is not matched with the preset voiceprint feature, and send the user image to the terminal device associated with the wearable device; detecting whether an authorization instruction sent by the terminal equipment is received or not; and when receiving the authorization instruction, triggering the obtaining unit 602 to execute the operation of recognizing the sound source direction corresponding to the voice interaction instruction.
By implementing the above mode, when the voiceprint feature verification of the user fails, the judgment unit can further verify the validity of the user in a mode of sending the user image to the terminal equipment associated with the wearable equipment, so that the legal user is not limited to the user corresponding to the preset voiceprint feature, the legal user of the wearable equipment can be immediately expanded in a mode of terminal equipment authorization, and the improvement of the flexibility of the wearable equipment is facilitated.
A rotating unit 603, configured to control the smart host, while erected perpendicular to the horizontal plane, to rotate automatically until the directional speaker on the smart host faces the sound source direction.
A playing unit 604, configured to play response audio for the voice interaction instruction through the directional speaker.
It should be noted that after the rotating unit 603 has rotated the smart host so that the directional speaker faces the sound source, it sends a start instruction to the playing unit 604 to trigger the playing of the response audio for the voice interaction instruction through the directional speaker.
Implementing the above wearable device improves the user's sense of immersion, avoids the ear discomfort caused by wearing earphones continuously for a long time, helps improve the user experience, effectively prevents the wearable device from being used illegally, allows its set of legal users to be expanded on the spot, and improves its flexibility.
Example five
Referring to fig. 7, fig. 7 is a modular schematic diagram of another wearable device disclosed in the embodiments of the present application. The wearable device shown in fig. 7 is an optimization of the one shown in fig. 6: the acquisition unit 602 in the smart host is further configured to acquire the distance value between the sound source and the directional speaker after the rotating unit 603 has rotated the smart host so that the directional speaker faces the sound source, and before the playing unit 604 plays the response audio for the voice interaction instruction through the directional speaker;
the smart host of the wearable device may further include:
a determining unit 605, configured to determine the playing volume according to the distance value.
The playing unit 604 then plays the response audio for the voice interaction instruction through the directional speaker specifically by:
playing the response audio for the voice interaction instruction at the playing volume through the directional speaker.
In this embodiment, the determining unit 605 determines the playing volume of the response audio from the distance between the sound source and the directional speaker, so the user does not need to adjust the volume manually, which further improves the user experience.
As an optional implementation, in this embodiment, the acquisition unit 602 may be further configured to judge, after acquiring the distance value, whether the current battery level of the wearable device is greater than a preset level, and, when it is, trigger the determining unit 605 to determine the playing volume according to the distance value; when the current battery level is less than or equal to the preset level, to judge whether the distance value is greater than a preset distance value, output prompt information asking the user to move closer to the wearable device when it is, and determine the preset volume corresponding to the preset distance value as the playing volume once the distance between the user and the wearable device is less than or equal to the preset distance value.
In this way, the current battery level of the wearable device becomes one of the factors that influence the playing volume; prompting the user to move closer when the battery is low and then playing at the corresponding preset volume effectively reduces the power consumption of the wearable device and extends its standby time.
Implementing the above wearable device improves the user's sense of immersion, avoids the ear discomfort caused by wearing earphones continuously for a long time, helps improve the user experience, effectively prevents the wearable device from being used illegally, allows its set of legal users to be expanded on the spot, improves its flexibility, effectively reduces its power consumption, and extends its standby time.
Example six
Referring to fig. 8, fig. 8 is a modular schematic diagram of yet another wearable device disclosed in the embodiments of the present application. The wearable device shown in fig. 8 is an optimization of the one shown in fig. 7. If the voice interaction instruction is a music playing instruction, the smart host of the wearable device may further include:
an image processing unit 606, configured to recognize the captured user image of the sound source, before the playing unit 604 plays the response audio for the voice interaction instruction at the playing volume through the directional speaker, to obtain the user's emotion category.
The determining unit 605 is further configured to acquire a music list matching the emotion category from the music library and to use the audio corresponding to the music list as the response audio for the voice interaction instruction.
A projection unit 607 is configured to acquire the synchronized video of the response audio when a projection request is received, and to perform a projection operation on the synchronized video while the response audio is played.
In the embodiments of the present application, when the voice interaction instruction is a music playing instruction, the image processing unit 606 recognizes the user's emotion so that music matching that emotion is played, and the projection unit 607 projects the synchronized video corresponding to the music while it plays, which makes up for the small display screen of the wearable device and gives the user a better visual experience. Together, these further improve the user experience.
The projection unit 607 may perform the projection operation on the synchronized video while the response audio is played specifically by:
detecting, while the response audio is played, whether the wearable device has a communication connection to a large-screen display device, and, if it does, sending the synchronized video of the response audio to the large-screen display device so that the synchronized video is displayed there;
or
detecting, while the response audio is played, whether a gesture input by the user indicating a projection-surface position has been captured; if such a gesture is captured, adjusting the projection angle of the projection apparatus provided on the smart host according to the gesture so that the projection surface corresponding to that angle matches the indicated position, and projecting the synchronized video corresponding to the response audio to that position.
Further, the projection unit 607 is also configured to zoom the projected picture according to a zoom gesture when such a gesture by the user toward the projected picture is detected.
In this way, the synchronized video of the response audio can be output on a large-screen display device connected to the wearable device, or projected onto any plane the user chooses, realizing flexible projection of the synchronized video of the response audio.
Further optionally, the projection unit 607 may also be configured to detect, after adjusting the projection angle according to the gesture, whether the ambient brightness at the projection-surface position is greater than a preset brightness; if it is less than or equal to the preset brightness, to project the synchronized video corresponding to the response audio to the projection position; and if it is greater, to lower the ambient brightness below the preset brightness by communicating with a curtain control apparatus and/or the lighting equipment of the environment where the projection surface is located, and then project the synchronized video to the projection position. In this way, when the ambient brightness at the projection-surface position is too high, it is adjusted, ensuring that the user sees a clear projected picture.
In the embodiments of the present application, if the voice interaction instruction is a word follow-read instruction, the acquisition unit 602 is optionally further configured to acquire the words to be followed in reading before the playing unit 604 plays the response audio for the voice interaction instruction at the playing volume through the directional speaker.
The acquisition unit 602 may acquire the words to be followed in reading specifically by:
sending indication information requesting the words to a parent terminal in communication connection with the wearable device and receiving the words fed back by the parent terminal; or searching a word library for the target study period to which the current time belongs, where the word library stores several preset time periods and the word set corresponding to each of them, and using the words in the word set of the target study period as the words to be followed in reading; or performing character recognition on a captured book image to obtain the words. In this way, the words can be arranged on the spot by a parent, preset for the student, or captured immediately from a book, providing flexible acquisition of the words to be followed in reading.
The determining unit 605 is further configured to use the standard pronunciation audio of the words to be followed in reading as the response audio for the voice interaction instruction.
The smart host may further include:
a comparison unit, configured to capture the user's follow-read audio while the response audio is played, and, when a follow-read termination instruction is received, to compare the follow-read audio with the response audio to obtain a follow-read score.
In the embodiments of the present application, when the voice interaction instruction is a word follow-read instruction, the comparison unit scores the user's follow-read audio, which helps motivate students.
Implementing the above wearable device improves the user's sense of immersion, avoids the ear discomfort caused by wearing earphones continuously for a long time, helps improve the user experience, effectively prevents the wearable device from being used illegally, allows its set of legal users to be expanded on the spot, improves its flexibility, effectively reduces its power consumption, and extends its standby time. In addition, intelligent music recommendation and synchronized video projection further improve the experience, the synchronized video of the response audio can be projected flexibly, the user is guaranteed a clear projected picture, and the flexible acquisition of the words to be followed in reading helps motivate students.
Referring to fig. 9, fig. 9 is a modular schematic diagram of yet another wearable device disclosed in the embodiments of the present application. As shown in fig. 9, the smart host of the wearable device can rotate automatically when erected perpendicular to the horizontal plane, as shown in fig. 1 and fig. 2, and may include:
a memory 901 storing executable program code; and
a processor 902 coupled with the memory;
where the processor 902 calls the executable program code stored in the memory 901 to perform the steps of the wearable device-based audio playing method described in any one of fig. 3 to fig. 5.
It should be noted that, in the embodiments of the present application, the smart host shown in fig. 9 may further include components that are not shown, such as a display screen, wireless communication modules (for example, a mobile communication module, a Wi-Fi module, and a Bluetooth module), sensor modules (for example, a proximity sensor), input modules (for example, keys), and user interface modules (for example, a charging interface, an external power supply interface, a card slot, and a wired earphone interface).
The embodiments of the present application also disclose a computer-readable storage medium storing computer instructions that, when executed, cause a computer to perform the steps of the wearable device-based audio playing method described in any one of fig. 3 to fig. 5.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be implemented by related hardware instructed by a program, and the program may be stored in a computer-readable storage medium, including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM), other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium capable of storing data.
The audio playing method based on a wearable device and the wearable device disclosed in the embodiments of the present application are described in detail above. Specific examples are used in this specification to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may vary the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A wearable device-based audio playing method, characterized in that a smart host included in the wearable device can rotate automatically when raised to be perpendicular to the horizontal plane, the method comprising:
detecting, through a microphone on the smart host, whether an externally input voice interaction instruction is received;
if the voice interaction instruction is received, extracting user voiceprint features from the voice interaction instruction; judging whether the user voiceprint features match preset voiceprint features; and when the user voiceprint features match the preset voiceprint features, recognizing the sound source direction corresponding to the voice interaction instruction;
controlling the smart host, when raised to be perpendicular to the horizontal plane, to rotate automatically until the directional speaker on the smart host faces the sound source direction;
playing, through the directional speaker, response audio for the voice interaction instruction;
wherein the method further comprises:
when the user voiceprint features do not match the preset voiceprint features, acquiring a user image and sending the user image to a terminal device associated with the wearable device; detecting whether an authorization instruction sent by the terminal device is received; and if the authorization instruction is received, performing the recognizing of the sound source direction corresponding to the voice interaction instruction.
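For illustration only, the flow of claim 1 can be sketched in a few lines of Python. This is not the patented implementation: the voiceprint embeddings, the similarity threshold, and the rotate_to/play primitives are all assumptions, and the direction-of-arrival estimate uses plain two-microphone cross-correlation (TDOA), which is just one possible way to realize the "recognizing a sound source direction" step.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at room temperature

def voiceprint_matches(user_emb: np.ndarray, preset_emb: np.ndarray,
                       threshold: float = 0.75) -> bool:
    """Cosine similarity between two voiceprint embedding vectors."""
    sim = float(np.dot(user_emb, preset_emb) /
                (np.linalg.norm(user_emb) * np.linalg.norm(preset_emb)))
    return sim >= threshold

def doa_degrees(left: np.ndarray, right: np.ndarray,
                mic_spacing_m: float = 0.10,
                sample_rate: int = 16000) -> float:
    """Estimate direction of arrival from a two-microphone frame via TDOA."""
    corr = np.correlate(left, right, mode="full")
    delay_s = (int(np.argmax(corr)) - (len(right) - 1)) / sample_rate
    # Clamp so arcsin stays in its domain even with a noisy delay estimate.
    sin_theta = np.clip(delay_s * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

def handle_voice_instruction(user_emb, preset_emb, left, right,
                             rotate_to, play, response_audio) -> bool:
    """Claim-1 flow: match the voiceprint, locate the speaker, rotate, play.

    rotate_to and play are hypothetical hardware callables supplied by the
    caller; they stand in for the motor and directional-speaker drivers.
    """
    if not voiceprint_matches(user_emb, preset_emb):
        return False  # claim 1 falls back to the image/authorization path
    rotate_to(doa_degrees(left, right))
    play(response_audio)
    return True
```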
2. The method of claim 1, wherein after controlling the smart host to rotate automatically until the directional speaker on the smart host faces the sound source direction, and before playing the response audio for the voice interaction instruction through the directional speaker, the method further comprises:
acquiring a distance value between the sound source and the directional speaker;
determining a playing volume according to the distance value;
wherein the playing, through the directional speaker, of the response audio for the voice interaction instruction comprises:
playing, through the directional speaker, the response audio for the voice interaction instruction at the playing volume.
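Claim 2 does not fix the rule mapping distance to volume. A minimal sketch, assuming a simple linear ramp between a near and a far cut-off distance; every constant here is an illustrative placeholder, not a value from the patent:

```python
def playing_volume(distance_m: float,
                   near_m: float = 0.5, far_m: float = 5.0,
                   min_vol: float = 0.2, max_vol: float = 1.0) -> float:
    """Map sound-source distance to a playback volume in [min_vol, max_vol].

    Closer listeners get a lower volume and farther listeners a higher one,
    clamped at both ends so the mapping stays well defined.
    """
    d = min(max(distance_m, near_m), far_m)
    frac = (d - near_m) / (far_m - near_m)
    return min_vol + frac * (max_vol - min_vol)

# e.g. playing_volume(1.0) is about 0.29, playing_volume(5.0) == 1.0
```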
3. The method of claim 2, wherein if the voice interaction instruction is a music playing instruction, before the playing, through the directional speaker, of the response audio for the voice interaction instruction at the playing volume, the method further comprises:
recognizing a user image collected at the sound source to obtain an emotion category of the user;
acquiring, from a music library, a music list conforming to the emotion category;
taking the audio corresponding to the music list as the response audio for the voice interaction instruction;
wherein the method further comprises:
when a projection request is received, acquiring a video synchronized with the response audio;
performing a projection operation on the synchronized video while playing the response audio.
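The patent does not enumerate emotion categories or describe the music library's layout, so both are placeholders in the following sketch of the emotion-to-playlist step of claim 3:

```python
# Hypothetical emotion categories and library contents, for illustration only.
MUSIC_LIBRARY = {
    "happy": ["upbeat_pop_01.mp3", "dance_track_02.mp3"],
    "sad": ["gentle_piano_01.mp3", "slow_strings_02.mp3"],
    "calm": ["ambient_01.mp3"],
}

def response_audio_for_emotion(emotion_category: str) -> list[str]:
    """Return the music list matching the recognized emotion category,
    falling back to calm music when the category is unknown."""
    return MUSIC_LIBRARY.get(emotion_category, MUSIC_LIBRARY["calm"])
```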
4. The method of claim 2, wherein if the voice interaction instruction is a word follow-up reading instruction, before the playing, through the directional speaker, of the response audio for the voice interaction instruction at the playing volume, the method further comprises:
acquiring the word to be read after;
taking the standard pronunciation audio of the word to be read after as the response audio for the voice interaction instruction;
wherein the method further comprises:
collecting the user's follow-up reading audio while playing the response audio;
when a follow-up reading termination instruction is received, comparing the follow-up reading audio with the response audio to obtain a follow-up reading score.
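Claim 4 only says the follow-up reading audio is compared with the response audio; it does not say how. As a stand-in for whatever acoustic comparison an implementation would use, the sketch below scores on transcripts, assuming both audios have already been run through speech recognition:

```python
import difflib

def follow_reading_score(reference_text: str, attempt_text: str) -> int:
    """Score a follow-up reading attempt on a 0-100 scale by comparing the
    transcript of the user's audio against the standard pronunciation's text.
    """
    ratio = difflib.SequenceMatcher(
        None, reference_text.lower().split(), attempt_text.lower().split()
    ).ratio()
    return round(100 * ratio)

# e.g. follow_reading_score("the quick brown fox", "the quick brown box") == 75
```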
5. A wearable device, comprising a smart host capable of rotating automatically when raised to be perpendicular to the horizontal plane, the smart host comprising:
a detection unit, configured to detect, through a microphone on the smart host, whether an externally input voice interaction instruction is received;
an acquisition unit, configured to recognize the sound source direction corresponding to the voice interaction instruction when the voice interaction instruction is received;
a rotation unit, configured to control the smart host, when raised to be perpendicular to the horizontal plane, to rotate automatically until the directional speaker on the smart host faces the sound source direction;
a playing unit, configured to play, through the directional speaker, response audio for the voice interaction instruction;
wherein the smart host further comprises:
a judging unit, configured to extract user voiceprint features from the voice interaction instruction when the voice interaction instruction is received, judge whether the user voiceprint features match preset voiceprint features, and trigger the acquisition unit to recognize the sound source direction corresponding to the voice interaction instruction when they match; the judging unit is further configured to, when the user voiceprint features do not match the preset voiceprint features, acquire a user image, send the user image to a terminal device associated with the wearable device, detect whether an authorization instruction sent by the terminal device is received, and trigger the acquisition unit to recognize the sound source direction corresponding to the voice interaction instruction when the authorization instruction is received.
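Claims 5 to 8 restate the method of claims 1 to 4 as functional units. Purely as a reading aid, that decomposition could be skeletonized as follows, with every hardware-facing call stubbed out; none of these method names come from the patent:

```python
class SmartHost:
    """Skeleton of the unit decomposition in claim 5; all hardware access
    (microphone, camera, motor, speaker, network) is left unimplemented."""

    def detect_instruction(self):            # detection unit
        """Poll the microphone for an externally input voice instruction."""
        raise NotImplementedError

    def judge_voiceprint(self, instruction):  # judging unit
        """Extract user voiceprint features and match them against presets."""
        raise NotImplementedError

    def locate_source(self, instruction):    # acquisition unit
        """Recognize the sound source direction of the instruction."""
        raise NotImplementedError

    def rotate_towards(self, direction):     # rotation unit
        """Rotate the raised host until the directional speaker faces the source."""
        raise NotImplementedError

    def play_response(self, audio):          # playing unit
        """Play response audio through the directional speaker."""
        raise NotImplementedError
```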
6. The wearable device of claim 5, wherein the acquisition unit is further configured to acquire a distance value between the sound source and the directional speaker after the rotation unit controls the smart host to rotate automatically until the directional speaker on the smart host faces the sound source direction, and before the playing unit plays the response audio for the voice interaction instruction through the directional speaker;
wherein the smart host further comprises:
a determining unit, configured to determine a playing volume according to the distance value;
wherein the playing unit plays the response audio for the voice interaction instruction through the directional speaker specifically by:
playing, through the directional speaker, the response audio for the voice interaction instruction at the playing volume.
7. The wearable device of claim 6, wherein if the voice interaction instruction is a music playing instruction, the smart host further comprises:
an image processing unit, configured to recognize the user image collected at the sound source to obtain the emotion category of the user before the playing unit plays, through the directional speaker, the response audio for the voice interaction instruction at the playing volume;
wherein the determining unit is further configured to acquire, from a music library, a music list conforming to the emotion category, and take the audio corresponding to the music list as the response audio for the voice interaction instruction;
and the smart host further comprises a projection unit, configured to acquire a video synchronized with the response audio when a projection request is received, and perform a projection operation on the synchronized video while the response audio is played.
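The projection unit of claim 7 must project the synchronized video at the same time as the audio plays. One simple way to express that concurrency, with the real playback and projection code abstracted into caller-supplied blocking callables:

```python
import threading

def play_with_projection(play_audio, project_video):
    """Run video projection alongside audio playback; both arguments are
    placeholders for the real playback/projection routines."""
    projector = threading.Thread(target=project_video, daemon=True)
    projector.start()
    play_audio()      # blocks until the response audio finishes
    projector.join()  # then wait for the projection to wind down
```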
8. The wearable device of claim 6, wherein if the voice interaction instruction is a word follow-up reading instruction, the acquisition unit is further configured to acquire the word to be read after before the playing unit plays, through the directional speaker, the response audio for the voice interaction instruction at the playing volume;
wherein the determining unit is further configured to take the standard pronunciation audio of the word to be read after as the response audio for the voice interaction instruction;
and the smart host further comprises:
a comparison unit, configured to collect the user's follow-up reading audio while the response audio is played, and, when a follow-up reading termination instruction is received, compare the follow-up reading audio with the response audio to obtain a follow-up reading score.
9. A wearable device, comprising a smart host capable of rotating automatically when raised to be perpendicular to the horizontal plane, the smart host comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the wearable device-based audio playing method according to any one of claims 1 to 4.
10. A computer-readable storage medium storing computer instructions that, when executed, cause a computer to perform the wearable device-based audio playback method according to any one of claims 1 to 4.
CN201911154106.6A 2019-11-22 2019-11-22 Audio playing method based on wearable device and wearable device Active CN111179923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911154106.6A CN111179923B (en) 2019-11-22 2019-11-22 Audio playing method based on wearable device and wearable device

Publications (2)

Publication Number Publication Date
CN111179923A CN111179923A (en) 2020-05-19
CN111179923B true CN111179923B (en) 2022-11-01

Family

ID=70650135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911154106.6A Active CN111179923B (en) 2019-11-22 2019-11-22 Audio playing method based on wearable device and wearable device

Country Status (1)

Country Link
CN (1) CN111179923B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115248653A (en) * 2021-04-26 2022-10-28 深圳市万普拉斯科技有限公司 Control method and device of wearable device, wearable device and electronic device
CN113423039B (en) * 2021-06-18 2023-01-24 恒玄科技(上海)股份有限公司 Wireless loudspeaker assembly, intelligent device and intelligent system thereof
CN114461022A (en) * 2022-02-09 2022-05-10 维沃移动通信有限公司 Separable module management method and device of terminal and terminal
CN117193391B (en) * 2023-11-07 2024-01-23 北京铁力山科技股份有限公司 Intelligent control desk angle adjustment system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160026317A (en) * 2014-08-29 2016-03-09 삼성전자주식회사 Method and apparatus for voice recording

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025050A (en) * 2016-02-02 2017-08-08 三星电子株式会社 Method for user interface and the electronic installation for performing this method
US9653060B1 (en) * 2016-02-09 2017-05-16 Amazon Technologies, Inc. Hybrid reference signal for acoustic echo cancellation
CN108231073A (en) * 2016-12-16 2018-06-29 深圳富泰宏精密工业有限公司 Phonetic controller, system and control method
CN107300973A (en) * 2017-06-21 2017-10-27 深圳传音通讯有限公司 Screen rotation control method, system and device
CN108551619A (en) * 2018-04-13 2018-09-18 深圳市沃特沃德股份有限公司 Intelligent positioning sound system and its exchange method
CN108597263A (en) * 2018-04-26 2018-09-28 广州国铭职业技能培训有限公司 A kind of robot with department's professional knowledge training function

Also Published As

Publication number Publication date
CN111179923A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN111179923B (en) Audio playing method based on wearable device and wearable device
JP6143975B1 (en) System and method for providing haptic feedback to assist in image capture
CN107172497B (en) Live broadcasting method, apparatus and system
CN105323648B (en) Caption concealment method and electronic device
EP3179474A1 (en) User focus activated voice recognition
CN105828101B (en) Generate the method and device of subtitle file
US20150088515A1 (en) Primary speaker identification from audio and video data
US20210168460A1 (en) Electronic device and subtitle expression method thereof
WO2021043121A1 (en) Image face changing method, apparatus, system, and device, and storage medium
US9491401B2 (en) Video call method and electronic device supporting the method
CN110931048A (en) Voice endpoint detection method and device, computer equipment and storage medium
CN110769280A (en) Method and device for continuously playing files
US20120287283A1 (en) Electronic device with voice prompt function and voice prompt method
CN104104987B (en) Picture and synchronous sound method and device in video playing
CN111176431A (en) Screen projection control method of sound box and sound box
US20150194154A1 (en) Method for processing audio signal and audio signal processing apparatus adopting the same
CN113301372A (en) Live broadcast method, device, terminal and storage medium
CN111768785A (en) Control method of smart watch and smart watch
CN110808021A (en) Audio playing method, device, terminal and storage medium
KR20200056754A (en) Apparatus and method for generating personalization lip reading model
CN111079495A (en) Point reading mode starting method and electronic equipment
CN111176538B (en) Screen switching method based on intelligent sound box and intelligent sound box
CN111696566B (en) Voice processing method, device and medium
CN113645510B (en) Video playing method and device, electronic equipment and storage medium
CN111176594B (en) Screen display method of intelligent sound box, intelligent sound box and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant