CN110213431A - Message sending method and mobile terminal - Google Patents
Message sending method and mobile terminal
- Publication number
- CN110213431A (application CN201910364263.3A)
- Authority
- CN
- China
- Prior art keywords
- information
- user
- mobile terminal
- lip reading
- audio
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/24—Speech recognition using non-acoustical features
- G10L15/25—Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/32—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
- H04L9/3226—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using a predetermined code, e.g. password, passphrase or PIN
- H04L9/3231—Biological data, e.g. fingerprint, voice or retina
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72433—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
Abstract
The present invention provides a message sending method and a mobile terminal. The method includes: receiving a voice request sent by a first terminal; obtaining biometric information of a user; authenticating the identity of the user according to the biometric information; detecting the current ambient state if authentication succeeds; obtaining lip reading information of the user when the current ambient state is a preset ambient state; obtaining audio information corresponding to the lip reading information; and sending the audio information to the first terminal. When the user needs to hold a voice conversation in a preset ambient state, audio information can be generated simply by capturing lip reading information, so the user can voice-chat with the target terminal without making a sound, improving the user experience.
Description
Technical Field
The present invention relates to the technical field of mobile terminals, and in particular to a message sending method and a mobile terminal.
Background Art
With the arrival of the mobile internet era, intelligent mobile terminals have become an almost indispensable part of daily life. People frequently need to interact with others by voice in daily life and work, whether by making voice calls or by exchanging voice messages through chat tools.
As users spend more and more time on their mobile terminals, they often carry out voice interaction in different scenarios, and video interaction also contains a voice component. However, when voice interaction is inconvenient, the user can currently only send a quick reply from preset text or manually edit a text reply.
The above approaches have the following shortcomings: on the one hand, the text sent to notify the other party is edited in advance, yet in practice the user may be in different environments and facing different recipients, so pre-edited text cannot cover every situation; on the other hand, manually editing and sending text increases the time cost and is impossible in certain situations, which degrades the user experience.
Summary of the invention
Embodiments of the present invention provide a message sending method and a mobile terminal, which overcome the prior-art drawback of having to send messages through preset text content or manually edited text.
To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a message sending method, including: receiving a voice request sent by a first terminal; obtaining biometric information of a user; authenticating the identity of the user according to the biometric information; detecting the current ambient state if authentication succeeds; obtaining lip reading information of the user when the current ambient state is a preset ambient state; obtaining audio information corresponding to the lip reading information; and sending the audio information to the first terminal.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including: a receiving module, configured to receive a voice request sent by a first terminal; a first obtaining module, configured to obtain biometric information of a user; an identity authentication module, configured to authenticate the identity of the user according to the biometric information; a detection module, configured to detect the current ambient state if authentication succeeds; a second obtaining module, configured to obtain lip reading information of the user when the current ambient state is a preset ambient state; a third obtaining module, configured to obtain audio information corresponding to the lip reading information; and a sending module, configured to send the audio information to the first terminal.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the message sending method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the message sending method.
In the embodiments of the present invention, a voice request sent by a first terminal is received; biometric information of a user is obtained; the user's identity is authenticated according to the biometric information; the current ambient state is detected if authentication succeeds; lip reading information of the user is obtained when the current ambient state is a preset ambient state; audio information corresponding to the lip reading information is obtained; and the audio information is sent to the first terminal. Thus, when the user needs to hold a voice conversation in a preset ambient state, audio information can be generated simply by capturing lip reading information, allowing the user to voice-chat with the target terminal without making a sound and improving the user experience.
Brief Description of the Drawings
Fig. 1 is a flowchart of the steps of a message sending method according to Embodiment One of the present invention;
Fig. 2 is a flowchart of the steps of a message sending method according to Embodiment Two of the present invention;
Fig. 3 is a structural block diagram of a mobile terminal according to Embodiment Three of the present invention;
Fig. 4 is a structural block diagram of a mobile terminal according to Embodiment Four of the present invention;
Fig. 5 is a schematic diagram of the hardware structure of a mobile terminal according to Embodiment Five of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment One
Referring to Fig. 1, a flowchart of the steps of a message sending method according to Embodiment One of the present invention is shown.
The message sending method provided in this embodiment of the present invention includes the following steps:
Step 101: receive a voice request sent by a first terminal.
Step 102: obtain biometric information of the user.
It should be noted that the biometric information may include voiceprint information and facial image information.
A voice segment whose length is greater than or equal to a preset duration is recorded in advance and the user's voiceprint is extracted from it; the front or rear camera of the mobile terminal is then called to capture the user's facial information.
A voiceprint is the spectrum of sound waves carrying verbal information displayed by an electroacoustic instrument. Modern research shows that a voiceprint is not only distinctive but also relatively stable. After adulthood, a person's voice remains relatively stable over the long term. Experiments show that whether a speaker deliberately imitates another person's voice and tone or speaks in a whisper, the voiceprint remains the same even when the imitation is lifelike, so the voiceprint is used as one of the elements of user authentication.
It should be noted that those skilled in the art may set the preset duration according to the actual situation; for example, the preset duration may be 5 s, 10 s, 15 s, and so on, which is not specifically limited in the embodiments of the present invention.
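As an illustration only, the following Python sketch shows one way the enrollment described above could be structured: a voice sample is accepted only if it meets the preset duration, a toy voiceprint is extracted, and a camera frame is stored as the facial information. The 10-second threshold, the energy-based feature, and the data structures are assumptions made for this sketch, not part of the patent.

```python
# Hypothetical sketch of biometric enrollment: duration check, toy voiceprint, face frame.
from dataclasses import dataclass
from typing import List

PRESET_DURATION_S = 10.0  # illustrative; the embodiment allows 5 s, 10 s, 15 s, etc.


@dataclass
class BiometricInfo:
    voiceprint: List[float]   # feature vector extracted from the voice sample
    face_image: bytes         # raw frame from the front or rear camera


def extract_voiceprint(samples: List[float], sample_rate: int) -> List[float]:
    """Toy 'voiceprint': average absolute amplitude per one-second block.
    A real system would use spectral features, which the patent does not prescribe."""
    features = []
    for i in range(0, len(samples), sample_rate):
        block = samples[i:i + sample_rate]
        features.append(sum(abs(s) for s in block) / len(block))
    return features


def acquire_biometric_info(samples: List[float], sample_rate: int,
                           camera_frame: bytes) -> BiometricInfo:
    duration = len(samples) / sample_rate
    if duration < PRESET_DURATION_S:
        raise ValueError(f"voice sample too short: {duration:.1f}s < {PRESET_DURATION_S}s")
    return BiometricInfo(voiceprint=extract_voiceprint(samples, sample_rate),
                         face_image=camera_frame)
```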
Step 103: authenticate the user's identity according to the biometric information.
During the voice chat, the obtained voiceprint and facial information of the user are matched against at least one voiceprint and facial information pair pre-stored on the mobile terminal.
Step 104: if authentication succeeds, detect the current ambient state.
If the obtained voiceprint and facial information of the user match at least one voiceprint and facial information pair pre-stored on the mobile terminal, authentication succeeds. The authentication process prevents other users from misusing the device.
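A minimal sketch, assuming feature vectors like the ones above, of how the matching in step 103 might be carried out. The cosine-similarity measure and the 0.9 thresholds are illustrative assumptions; the embodiment only requires that the captured voiceprint and facial information match at least one pre-stored pair.

```python
# Hypothetical matching of captured biometrics against pre-stored templates.
import math
from typing import List, Sequence, Tuple


def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    n = min(len(a), len(b))
    dot = sum(a[i] * b[i] for i in range(n))
    na = math.sqrt(sum(x * x for x in a[:n])) or 1.0
    nb = math.sqrt(sum(x * x for x in b[:n])) or 1.0
    return dot / (na * nb)


def authenticate(voiceprint: List[float], face_vec: List[float],
                 stored: List[Tuple[List[float], List[float]]],
                 voice_thresh: float = 0.9, face_thresh: float = 0.9) -> bool:
    """True if both modalities match any enrolled (voiceprint, face) pair."""
    return any(
        cosine_similarity(voiceprint, v) >= voice_thresh
        and cosine_similarity(face_vec, f) >= face_thresh
        for v, f in stored
    )
```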
Step 105: if the current ambient state is a preset ambient state, obtain lip reading information of the user.
The preset ambient state includes, but is not limited to, any one of the following: a Bluetooth headset state, a wired headset state, a silent state, and a vibrate state.
The front or rear camera of the mobile terminal is called to capture lip reading information of the user's mouth.
Step 106: obtain audio information corresponding to the lip reading information.
Step 107: send the audio information to the first terminal.
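For orientation, the following sketch chains steps 101 through 107 into one handler. All capture, recognition, and transmission callables are hypothetical placeholders supplied by the caller; only the control flow mirrors the embodiment.

```python
# Hypothetical end-to-end flow of Embodiment One (steps 101-107).
from typing import Callable, List, Tuple

PRESET_STATES = {"bluetooth_headset", "wired_headset", "silent", "vibrate"}


def handle_voice_request(
    current_state: str,
    capture_biometrics: Callable[[], Tuple[List[float], List[float]]],  # -> (voiceprint, face_vec)
    verify: Callable[[List[float], List[float]], bool],                 # e.g. authenticate() above
    capture_lip_images: Callable[[], List[bytes]],                      # front/rear camera frames
    lip_images_to_audio: Callable[[List[bytes]], bytes],
    send_audio: Callable[[bytes], None],                                # reply to the first terminal
) -> bool:
    voiceprint, face_vec = capture_biometrics()      # step 102
    if not verify(voiceprint, face_vec):             # step 103
        return False
    if current_state not in PRESET_STATES:           # steps 104-105: preset ambient state check
        return False
    lip_images = capture_lip_images()                # step 105: lip reading information
    audio = lip_images_to_audio(lip_images)          # step 106
    send_audio(audio)                                # step 107
    return True
```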
In the embodiments of the present invention, a voice request sent by a first terminal is received; biometric information of a user is obtained; the user's identity is authenticated according to the biometric information; the current ambient state is detected if authentication succeeds; lip reading information of the user is obtained when the current ambient state is a preset ambient state; audio information corresponding to the lip reading information is obtained; and the audio information is sent to the first terminal. Thus, when the user needs to hold a voice conversation in a preset ambient state, audio information can be generated simply by capturing lip reading information, allowing the user to voice-chat with the target terminal without making a sound and improving the user experience.
Embodiment Two
Referring to Fig. 2, a flowchart of the steps of a message sending method according to Embodiment Two of the present invention is shown.
The message sending method provided in this embodiment of the present invention includes the following steps:
Step 201: receive a voice request sent by a first terminal.
Step 202: obtain biometric information of the user.
It should be noted that the biometric information may include voiceprint information and facial image information.
A voice segment whose length is greater than or equal to a preset duration is recorded in advance and the user's voiceprint is extracted from it; the front or rear camera of the mobile terminal is then called to capture the user's facial information.
It should be noted that those skilled in the art may set the preset duration according to the actual situation; for example, the preset duration may be 5 s, 10 s, 15 s, and so on, which is not specifically limited in the embodiments of the present invention.
A voiceprint is the spectrum of sound waves carrying verbal information displayed by an electroacoustic instrument. Modern research shows that a voiceprint is not only distinctive but also relatively stable. After adulthood, a person's voice remains relatively stable over the long term. Experiments show that whether a speaker deliberately imitates another person's voice and tone or speaks in a whisper, the voiceprint remains the same even when the imitation is lifelike, so the voiceprint is used as one of the elements of user authentication.
Step 203: authenticate the user's identity according to the biometric information.
During the voice chat, the obtained voiceprint and facial information of the user are matched against at least one voiceprint and facial information pair pre-stored on the mobile terminal.
Step 204: if authentication succeeds, detect the current ambient state.
If the obtained voiceprint and facial information of the user match at least one voiceprint and facial information pair pre-stored on the mobile terminal, authentication succeeds. The authentication process prevents other users from misusing the device.
Step 205: if the ambient state is a preset ambient state, obtain multiple mouth images of the user.
The preset ambient state includes, but is not limited to, any one of the following: a Bluetooth headset state, a wired headset state, a silent state, and a vibrate state.
When the ambient state is a preset ambient state, a prompt may be output to ask the user whether to carry out a voice chat. When the user's confirmation is received, the camera of the mobile terminal is called to capture multiple mouth images of the user within a preset interval.
In addition to checking for the above states, the decibel level of the environment around the mobile terminal may also be detected. The detected decibel value is compared with a preset decibel value; when the detected value is less than or equal to the preset decibel value, the current environment is determined to be the preset environment and the lip reading information of the user is obtained.
It should be noted that those skilled in the art may set the preset decibel value according to the actual situation; for example, the preset decibel value may be set to 20 dB, 25 dB, 30 dB, and so on, which is not specifically limited in the embodiments of the present invention.
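A rough sketch, under the assumption that normalized microphone samples are available, of the decibel comparison described above. The RMS-based level estimate and the 30 dB default are illustrative; the embodiment leaves both the measurement method and the threshold open.

```python
# Hypothetical ambient-level check for step 205.
import math
from typing import Sequence

DEFAULT_DECIBEL_THRESHOLD = 30.0  # the embodiment suggests 20 dB, 25 dB, 30 dB, etc.


def ambient_decibels(samples: Sequence[float]) -> float:
    """Rough level estimate from normalized samples in [-1, 1]."""
    rms = math.sqrt(sum(s * s for s in samples) / max(1, len(samples)))
    return 20.0 * math.log10(max(rms, 1e-6)) + 90.0  # offset to a positive dB-like scale


def is_quiet_environment(samples: Sequence[float],
                         threshold: float = DEFAULT_DECIBEL_THRESHOLD) -> bool:
    # Treat the environment as the preset (quiet) environment when the level
    # does not exceed the preset decibel value.
    return ambient_decibels(samples) <= threshold
```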
Step 206: determine the text information corresponding to the multiple mouth images according to the chronological order of the multiple mouth images.
Text information corresponding to different mouth images is pre-stored on the mobile terminal. Each captured mouth image is matched against each pre-stored mouth image, and after a successful match the text information corresponding to the mouth image is determined.
Alternatively, a mouth-image model trained on a large number of mouth images may be obtained; each captured mouth image is run through the trained model to obtain a matching result, and the text information corresponding to the multiple mouth images is determined.
Since the mouth images are captured at different times, the text information corresponding to the multiple mouth images is determined according to their chronological order.
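The following sketch illustrates the chronological-ordering idea of step 206 with a toy template table standing in for the pre-stored mouth images or the trained model; the byte-keyed lookup is an assumption made purely to keep the example self-contained.

```python
# Hypothetical mouth-image-to-text mapping for step 206.
from typing import Dict, List, Tuple


def mouth_images_to_text(
    timestamped_images: List[Tuple[float, bytes]],
    templates: Dict[bytes, str],
) -> str:
    """Order images by capture time, look each one up, and join the text pieces."""
    ordered = sorted(timestamped_images, key=lambda item: item[0])
    pieces = []
    for _, image in ordered:
        text = templates.get(image)   # a real system would score model outputs instead
        if text:
            pieces.append(text)
    return "".join(pieces)


# Toy usage:
# templates = {b"frame-ni": "ni", b"frame-hao": "hao"}
# mouth_images_to_text([(0.2, b"frame-hao"), (0.1, b"frame-ni")], templates)  # -> "nihao"
```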
Step 207: convert each piece of text information into audio information.
The audio information produced by this step does not carry the user's timbre.
Step 208: obtain timbre information corresponding to the user.
Different people have different timbres, so the timbre information corresponding to the user is determined.
Timbre refers to the distinctive characteristic that different sounds always exhibit in their waveforms.
Step 209: synthesize target audio information according to the audio information and the timbre information.
The audio information and the timbre information are combined into target audio information. The synthesized target audio carries the user's timbre, so the chat sounds natural and the user experience is improved.
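A toy sketch of steps 207 through 209, in which "timbre" is reduced to a gain and pitch pair and the text-to-speech step produces a placeholder waveform. Both simplifications are assumptions for illustration; the patent does not specify how the conversion or the synthesis is implemented.

```python
# Hypothetical text-to-audio conversion and timbre synthesis (steps 207-209).
from dataclasses import dataclass
from typing import List


@dataclass
class Timbre:
    pitch_shift: float  # relative pitch factor characteristic of the user (unused in this toy)
    gain: float         # loudness scaling characteristic of the user


def text_to_neutral_audio(text: str, sample_rate: int = 16000) -> List[float]:
    """Placeholder TTS: one short constant-amplitude segment per character."""
    seg = sample_rate // 10
    return [0.1] * (seg * max(1, len(text)))


def synthesize_target_audio(text: str, timbre: Timbre) -> List[float]:
    neutral = text_to_neutral_audio(text)            # step 207: text -> neutral audio
    return [s * timbre.gain for s in neutral]        # step 209: apply the user's timbre (gain only here)


# target = synthesize_target_audio("nihao", Timbre(pitch_shift=1.05, gain=0.8))
```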
Step 210: send the audio information to the first terminal.
In the embodiments of the present invention, a voice request sent by a first terminal is received; biometric information of a user is obtained; the user's identity is authenticated according to the biometric information; the current ambient state is detected if authentication succeeds; lip reading information of the user is obtained when the current ambient state is a preset ambient state; audio information corresponding to the lip reading information is obtained; and the audio information is sent to the first terminal. Thus, when the user needs to hold a voice conversation in a preset ambient state, audio information can be generated simply by capturing lip reading information, allowing the user to voice-chat with the target terminal without making a sound and improving the user experience.
Embodiment Three
Referring to Fig. 3, a structural block diagram of a mobile terminal according to Embodiment Three of the present invention is shown.
The mobile terminal provided in this embodiment of the present invention includes: a receiving module 301, configured to receive a voice request sent by a first terminal; a first obtaining module 302, configured to obtain biometric information of a user; an identity authentication module 303, configured to authenticate the identity of the user according to the biometric information; a detection module 304, configured to detect the current ambient state if authentication succeeds; a second obtaining module 305, configured to obtain lip reading information of the user when the current ambient state is a preset ambient state; a third obtaining module 306, configured to obtain audio information corresponding to the lip reading information; and a sending module 307, configured to send the audio information to the first terminal.
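As a sketch only, Embodiment Three's modules 301 through 307 can be pictured as a composition of callables; the signatures below are assumptions, since the patent defines each module solely by its function.

```python
# Hypothetical composition mirroring modules 301-307 of Embodiment Three.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class MobileTerminal:
    receive_request: Callable[[], object]                       # receiving module 301
    get_biometrics: Callable[[], Tuple[List[float], List[float]]]  # first obtaining module 302
    authenticate: Callable[[Tuple[List[float], List[float]]], bool]  # identity authentication module 303
    detect_environment: Callable[[], str]                       # detection module 304
    get_lip_info: Callable[[], List[bytes]]                     # second obtaining module 305
    lip_to_audio: Callable[[List[bytes]], bytes]                # third obtaining module 306
    send_audio: Callable[[bytes], None]                         # sending module 307

    def on_voice_request(self, preset_states: set) -> bool:
        self.receive_request()
        bio = self.get_biometrics()
        if not self.authenticate(bio):
            return False
        if self.detect_environment() not in preset_states:
            return False
        self.send_audio(self.lip_to_audio(self.get_lip_info()))
        return True
```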
In the embodiments of the present invention, a voice request sent by a first terminal is received; biometric information of a user is obtained; the user's identity is authenticated according to the biometric information; the current ambient state is detected if authentication succeeds; lip reading information of the user is obtained when the current ambient state is a preset ambient state; audio information corresponding to the lip reading information is obtained; and the audio information is sent to the first terminal. Thus, when the user needs to hold a voice conversation in a preset ambient state, audio information can be generated simply by capturing lip reading information, allowing the user to voice-chat with the target terminal without making a sound and improving the user experience.
Embodiment Four
Referring to Fig. 4, a structural block diagram of a mobile terminal according to Embodiment Four of the present invention is shown.
The mobile terminal provided in this embodiment of the present invention includes: a receiving module 401, configured to receive a voice request sent by a first terminal; a first obtaining module 402, configured to obtain biometric information of a user; an identity authentication module 403, configured to authenticate the identity of the user according to the biometric information; a detection module 404, configured to detect the current ambient state if authentication succeeds; a second obtaining module 405, configured to obtain lip reading information of the user when the current ambient state is a preset ambient state; a third obtaining module 406, configured to obtain audio information corresponding to the lip reading information; and a sending module 407, configured to send the audio information to the first terminal.
Preferably, the second obtaining module 405 includes: a first obtaining submodule 4051, configured to obtain timbre information corresponding to the user; and a synthesis submodule 4052, configured to synthesize audio information according to the lip reading information and the timbre information.
Preferably, the second obtaining module 405 includes: a second obtaining submodule 4053, configured to obtain multiple mouth images of the user; and the third obtaining module 406 includes: a determining submodule 4061, configured to determine, according to the chronological order of the multiple mouth images, the text information corresponding to the multiple mouth images; and a conversion submodule 4062, configured to convert the text information into audio information.
Preferably, the preset ambient state includes any one of the following: a Bluetooth headset state, a wired headset state, a silent state, and a vibrate state.
Preferably, the detection module 404 includes: a detection submodule 4041, configured to detect the decibel value of the current environment; and the second obtaining module 405 includes: a third obtaining submodule 4054, configured to obtain the lip reading information of the user when the decibel value is less than or equal to a preset decibel value.
The mobile terminal provided in this embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of Fig. 1 and Fig. 2; to avoid repetition, details are not described here again.
In the embodiments of the present invention, a voice request sent by a first terminal is received; biometric information of a user is obtained; the user's identity is authenticated according to the biometric information; the current ambient state is detected if authentication succeeds; lip reading information of the user is obtained when the current ambient state is a preset ambient state; audio information corresponding to the lip reading information is obtained; and the audio information is sent to the first terminal. Thus, when the user needs to hold a voice conversation in a preset ambient state, audio information can be generated simply by capturing lip reading information, allowing the user to voice-chat with the target terminal without making a sound and improving the user experience.
Embodiment Five
Referring to Fig. 5, a schematic diagram of the hardware structure of a mobile terminal for implementing the embodiments of the present invention is shown.
The mobile terminal 500 includes, but is not limited to, components such as a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 5 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than shown, combine certain components, or use a different component arrangement. In the embodiments of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a laptop, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to control the user input unit 507 to receive a voice request sent by a first terminal; obtain biometric information of a user; authenticate the identity of the user according to the biometric information; detect the current ambient state if authentication succeeds; obtain lip reading information of the user when the current ambient state is a preset ambient state; obtain audio information corresponding to the lip reading information; and send the audio information to the first terminal.
In the embodiments of the present invention, a voice request sent by a first terminal is received; biometric information of a user is obtained; the user's identity is authenticated according to the biometric information; the current ambient state is detected if authentication succeeds; lip reading information of the user is obtained when the current ambient state is a preset ambient state; audio information corresponding to the lip reading information is obtained; and the audio information is sent to the first terminal. Thus, when the user needs to hold a voice conversation in a preset ambient state, audio information can be generated simply by capturing lip reading information, allowing the user to voice-chat with the target terminal without making a sound and improving the user experience.
It should be understood that, in the embodiments of the present invention, the radio frequency unit 501 may be used to receive and send signals during information transmission and reception or during a call. Specifically, downlink data from a base station is received and then passed to the processor 510 for processing, and uplink data is sent to the base station. Generally, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. The radio frequency unit 501 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 502, for example helping the user to send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. The audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (for example, a call signal reception sound or a message reception sound). The audio output unit 503 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 504 is configured to receive audio or video signals. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042. The graphics processing unit 5041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 506, stored in the memory 509 (or another storage medium), or sent via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 501 and output.
The mobile terminal 500 further includes at least one sensor 505, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor. The ambient light sensor can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 5061 and/or the backlight when the mobile terminal 500 is moved close to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when static, and can be used to identify the posture of the mobile terminal (such as portrait/landscape switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tapping). The sensor 505 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and so on, which are not described in detail here.
The display unit 506 is configured to display information entered by the user or information provided to the user. The display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 507 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also called a touch screen, collects touch operations performed by the user on or near it (for example, operations performed by the user on or near the touch panel 5071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 5071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch panel 5071, the user input unit 507 may further include other input devices 5072. Specifically, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 5071 may cover the display panel 5061. After detecting a touch operation on or near it, the touch panel 5071 transmits the operation to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in Fig. 5 the touch panel 5071 and the display panel 5061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 508 is an interface through which an external device is connected to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 508 may be used to receive input (for example, data information or power) from an external device and transfer the received input to one or more elements within the mobile terminal 500, or may be used to transmit data between the mobile terminal 500 and an external device.
The memory 509 may be used to store software programs and various data. The memory 509 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and an application program required for at least one function (such as a sound playback function or an image playback function); the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 509 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The processor 510 is the control center of the mobile terminal. It connects all parts of the entire mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 509 and invoking data stored in the memory 509, thereby monitoring the mobile terminal as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 510.
The mobile terminal 500 may further include a power supply 511 (such as a battery) that supplies power to the components. Preferably, the power supply 511 may be logically connected to the processor 510 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown, which are not described in detail here.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 510, a memory 509, and a computer program stored on the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements each process of the above message sending method embodiments and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the above message sending method embodiments and can achieve the same technical effect; to avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements not only includes those elements but also includes other elements not expressly listed, or further includes elements inherent to such a process, method, article, or apparatus. Without more restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, in essence or the part that contributes to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions that cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments. The above specific embodiments are merely illustrative rather than restrictive. Under the inspiration of the present invention, those skilled in the art may devise many other forms without departing from the scope protected by the purpose of the present invention and the claims, all of which fall within the protection of the present invention.
Claims (12)
1. A message sending method, applied to a mobile terminal, characterized in that the method comprises:
receiving a voice request sent by a first terminal;
obtaining biometric information of a user;
authenticating the identity of the user according to the biometric information;
detecting the current ambient state if authentication succeeds;
obtaining lip reading information of the user when the current ambient state is a preset ambient state;
obtaining audio information corresponding to the lip reading information; and
sending the audio information to the first terminal.
2. The method according to claim 1, characterized in that the step of obtaining audio information corresponding to the lip reading information comprises:
obtaining timbre information corresponding to the user; and
synthesizing audio information according to the lip reading information and the timbre information.
3. The method according to claim 1, characterized in that the step of obtaining lip reading information of the user comprises:
obtaining multiple mouth images of the user;
and the step of obtaining audio information corresponding to the lip reading information comprises:
determining, according to the chronological order of the multiple mouth images, the text information corresponding to the multiple mouth images; and
converting the text information into audio information.
4. The method according to claim 1, characterized in that the preset ambient state comprises any one of the following: a Bluetooth headset state, a wired headset state, a silent state, and a vibrate state.
5. The method according to claim 1, characterized in that the step of detecting the current ambient state comprises:
detecting the decibel value of the current environment;
and the step of obtaining lip reading information of the user when the current ambient state is a preset ambient state comprises:
obtaining the lip reading information of the user when the decibel value is less than or equal to a preset decibel value.
6. A mobile terminal, characterized in that the mobile terminal comprises:
a receiving module, configured to receive a voice request sent by a first terminal;
a first obtaining module, configured to obtain biometric information of a user;
an identity authentication module, configured to authenticate the identity of the user according to the biometric information;
a detection module, configured to detect the current ambient state if authentication succeeds;
a second obtaining module, configured to obtain lip reading information of the user when the current ambient state is a preset ambient state;
a third obtaining module, configured to obtain audio information corresponding to the lip reading information; and
a sending module, configured to send the audio information to the first terminal.
7. The mobile terminal according to claim 6, characterized in that the second obtaining module comprises:
a first obtaining submodule, configured to obtain timbre information corresponding to the user; and
a synthesis submodule, configured to synthesize audio information according to the lip reading information and the timbre information.
8. The mobile terminal according to claim 6, characterized in that the second obtaining module comprises:
a second obtaining submodule, configured to obtain multiple mouth images of the user;
and the third obtaining module comprises:
a determining submodule, configured to determine, according to the chronological order of the multiple mouth images, the text information corresponding to the multiple mouth images; and
a conversion submodule, configured to convert the text information into audio information.
9. The mobile terminal according to claim 6, characterized in that the preset ambient state comprises any one of the following: a Bluetooth headset state, a wired headset state, a silent state, and a vibrate state.
10. The mobile terminal according to claim 6, characterized in that the detection module comprises:
a detection submodule, configured to detect the decibel value of the current environment;
and the second obtaining module comprises:
a third obtaining submodule, configured to obtain the lip reading information of the user when the decibel value is less than or equal to a preset decibel value.
11. A mobile terminal, characterized by comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the message sending method according to any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, wherein the computer program, when executed by a processor, implements the steps of the message sending method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910364263.3A CN110213431B (en) | 2019-04-30 | 2019-04-30 | Message sending method and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910364263.3A CN110213431B (en) | 2019-04-30 | 2019-04-30 | Message sending method and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110213431A true CN110213431A (en) | 2019-09-06 |
CN110213431B CN110213431B (en) | 2021-06-25 |
Family
ID=67785451
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910364263.3A Active CN110213431B (en) | 2019-04-30 | 2019-04-30 | Message sending method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110213431B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130300650A1 (en) * | 2012-05-09 | 2013-11-14 | Hung-Ta LIU | Control system with input method using recognitioin of facial expressions |
CN105338282A (en) * | 2014-06-23 | 2016-02-17 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN107799125A (en) * | 2017-11-09 | 2018-03-13 | 维沃移动通信有限公司 | A kind of audio recognition method, mobile terminal and computer-readable recording medium |
CN108197572A (en) * | 2018-01-02 | 2018-06-22 | 京东方科技集团股份有限公司 | A kind of lip reading recognition methods and mobile terminal |
CN108537207A (en) * | 2018-04-24 | 2018-09-14 | Oppo广东移动通信有限公司 | Lip reading recognition methods, device, storage medium and mobile terminal |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110942064A (en) * | 2019-11-25 | 2020-03-31 | 维沃移动通信有限公司 | Image processing method and device and electronic equipment |
CN110942064B (en) * | 2019-11-25 | 2023-05-09 | 维沃移动通信有限公司 | Image processing method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110213431B (en) | 2021-06-25 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |