CN107277224A - Method and terminal for voice playback - Google Patents
Method and terminal for voice playback
- Publication number
- CN107277224A CN107277224A CN201710301410.3A CN201710301410A CN107277224A CN 107277224 A CN107277224 A CN 107277224A CN 201710301410 A CN201710301410 A CN 201710301410A CN 107277224 A CN107277224 A CN 107277224A
- Authority
- CN
- China
- Prior art keywords
- feature
- ear line
- screen touch
- shape graph
- ear
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72433—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/10—Multimedia information
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Environmental & Geological Engineering (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The embodiments of the invention disclose a method of voice playback, including: collecting a screen touch-press shape graph; recognizing whether ear-print features exist in the screen touch-press shape graph; and, if ear-print features exist in the screen touch-press shape graph, playing the voice in earpiece mode. The embodiments of the invention also disclose a terminal. Implementing the embodiments of the invention allows voice to be played effectively, so that the user receives the voice content accurately and in time, improving the user experience.
Description
Technical field
The present invention relates to the field of terminal technology, and in particular to a method and terminal for voice playback.
Background art
At present, with the development of communication technology, instant messaging applications such as WeChat are very popular. Instant messaging has changed the way people live and work, and many people cannot go a day without applications such as WeChat and QQ. Among the functions of instant messaging applications, voice messaging is particularly well liked by users.
However, voice messaging still falls short in terms of user experience, and the playback of voice messages in particular is a focus of user concern. Voice messages are generally played through the loudspeaker; when privacy is a concern and the user does not want strangers to overhear, the user switches to earpiece mode. But when a voice message is very short, for example 2 or 3 seconds, the message may already have finished playing by the time the user has brought the phone to the ear, so the user has to tap play again, which is inconvenient.
In short, the prior art cannot play voice messages effectively, the user cannot obtain the voice content accurately and in time, and the user experience is poor.
Summary of the invention
The embodiments of the invention disclose a method and terminal for voice playback that can play voice effectively, so that the user receives the voice content accurately and in time, improving the user experience.
A first aspect of the embodiments of the invention discloses a method of voice playback, including: collecting a screen touch-press shape graph; recognizing whether ear-print features exist in the screen touch-press shape graph; and, if ear-print features exist in the screen touch-press shape graph, playing the voice in earpiece mode.
A second aspect of the embodiments of the invention discloses a terminal, including:
a collecting unit for collecting a screen touch-press shape graph; a feature identification unit for recognizing whether ear-print features exist in the screen touch-press shape graph; and a playback unit for playing the voice in earpiece mode when the identification unit determines that ear-print features exist in the screen touch-press shape graph.
In the embodiments of the invention, a screen touch-press shape graph is collected and checked for ear-print features; if they are present, the voice is played in earpiece mode. As can be seen, the embodiments of the invention use ear-print recognition to trigger real-time voice playback in earpiece mode, so that voice is played effectively and the user receives the voice content accurately and in time, improving the user experience.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the invention more clearly, the accompanying drawings needed for the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a scenario, disclosed in an embodiment of the invention, in which a user answers a voice message;
Fig. 2 is a schematic flowchart of a method of voice playback disclosed in an embodiment of the invention;
Fig. 3 is a schematic flowchart of a method, disclosed in an embodiment of the invention, of recognizing ear-print features in a screen touch-press shape graph;
Fig. 4 is a schematic flowchart of another method, disclosed in an embodiment of the invention, of recognizing ear-print features in a screen touch-press shape graph;
Fig. 5 is a schematic flowchart of an image preprocessing method disclosed in an embodiment of the invention;
Fig. 6 is a schematic structural diagram of a terminal disclosed in an embodiment of the invention;
Fig. 7 is a schematic structural diagram of a terminal disclosed in an embodiment of the invention.
Detailed description
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the invention, fall within the scope of protection of the invention.
The embodiments of the invention disclose a method and terminal for voice playback that can play voice messages effectively, so that the user receives the message content accurately and in time, improving the user experience. Detailed descriptions follow.
Referring to Fig. 1, Fig. 1 is a schematic diagram of a scenario, disclosed in an embodiment of the invention, in which a user answers a voice message. As shown in Fig. 1, the user answers the voice message by holding the phone against the ear. When the ear touches the phone screen, it produces a touch-press shape of variable size at the contact position, that is, an ear print; the ear print is shown on the right of Fig. 1. Of course, depending on how the user holds the phone in practice, fingers, cheeks, and so on may also leave touch-press shapes of various forms on the screen.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a method of voice playback disclosed in an embodiment of the invention. As shown in Fig. 2, the method of voice playback may include the following steps.
S201: the terminal receives a request from the user, the request instructing the terminal to play a voice message.
In step S201, the terminal may be a mobile phone, tablet computer, palmtop computer, notebook computer, mobile Internet device, wearable device (such as a smart watch or smart band), or any other terminal device that can establish a communication connection and store data.
In step S201, the voice may be a voice message received by the terminal through an instant messaging application. Of course, in practical scenarios the voice may also be other forms of speech data, such as a telephone recording or an audio file. Optionally, the voice may be received by the terminal in real time or stored locally on the terminal.
As an optional embodiment, the terminal receives the user's request to play a voice message as follows: the user selects a message and requests playback by tapping the terminal screen, and the terminal receives the request. For example, in WeChat, when the user wants to play a voice message, the user taps the terminal screen after selecting the message, which triggers a request to the terminal to play that message.
Optionally, while the user is selecting a voice message for playback, the terminal may prompt the user to select a playback mode: loudspeaker mode or earpiece mode. In an embodiment of the invention, the terminal may be configured with an efficient mode-selection gesture; for example, in WeChat, sliding the voice bubble to the right could select loudspeaker mode and sliding it to the left could select earpiece mode. Note that playback does not start immediately after the mode is selected; the system applies a default playback delay, for example 2 seconds. But users differ: a quick user who is already holding the phone to the ear has to wait, while a slower user may find that the message has finished playing before the phone reaches the ear, which degrades the user experience.
Therefore, in the embodiments of the invention, after receiving the user's playback request, the terminal collects the touch-press shape graph from its screen and uses ear-print recognition to trigger real-time playback in earpiece mode.
S202: collect a screen touch-press shape graph.
In step S202, the screen touch-press shape graph may be the shape left on the terminal screen by the ear, cheek, finger, and so on when the user prepares to answer the voice message, as shown on the right of Fig. 1.
As an optional embodiment, the screen of the terminal may be a capacitive touch panel, and the terminal collects the screen touch-press shape graph through the capacitive touch panel.
A capacitive touch panel, CTP (Capacitive Touch Panel), operates by sensing the current through the human body. The panel is a four-layer composite glass screen: the inner surface and an interlayer of the glass are each coated with a layer of nano indium tin oxide (Indium Tin Oxide, ITO); the outermost layer is a protective silica glass coating only 0.0015 mm thick; the interlayer ITO coating is the working surface, with electrodes led out at its four corners; and the inner ITO layer is a shielding layer that ensures a stable working environment.
The working principle of the capacitive panel is as follows. When the ear touches the panel, the contacting region of the ear forms a coupling capacitor with the working surface because of the human-body electric field. Since a high-frequency signal is applied to the working surface, the ear draws a small current, which flows out through the electrodes at the four corners of the screen. In theory, the current through each of the four electrodes is determined by the distance from the touch point to that corner (the nearer the corner, the larger its share of the current); by calculating the ratios of the four currents precisely, the controller derives the positions of the touch points and thus obtains an image of the ear print on the screen.
S203: recognize whether ear-print features exist in the screen touch-press shape graph.
As an optional embodiment, recognizing whether ear-print features exist in the screen touch-press shape graph includes: extracting a first feature group from the screen touch-press shape graph, the first feature group including at least one contour feature; matching the shape features of the first feature group against ear-print features collected in advance; and, if the match succeeds, confirming that ear-print features exist in the screen touch-press shape graph.
Optionally, matching the shape features of the first feature group against the ear-print features collected in advance may be implemented as follows: query how many shape features of the first feature group are contained in the ear-print features collected in advance; if that count exceeds a count threshold, determine that the screen touch-press shape graph contains ear-print features.
In this implementation, because users press the ear against the screen to different degrees when answering a voice message, the press shape graph is not necessarily the press image of a complete ear. It therefore suffices to match part of the ear-print features collected in advance in order to conclude that the screen touch-press shape graph contains ear features. Accordingly, the count threshold can be set to a low value, for example 1 or 2; in practice, of course, the threshold can be configured by the algorithm designer according to the actual situation and is not specifically limited here.
As an optional embodiment, the ear-print features include the ear-print features of a target object, and recognizing whether target features exist in the screen touch-press shape graph includes: extracting a first feature group from the screen touch-press shape graph, the first feature group including at least one contour feature; matching the shape features of the first feature group against the ear-print features collected in advance; if the match succeeds, matching the ear-print features in the first feature group against the pre-configured ear features of the target object; and, if that match also succeeds, confirming that target features exist in the screen touch-press shape graph.
In this embodiment, the terminal first determines that the screen touch-press shape graph contains ear-print features and then determines that those features are the ear-print features of the target object, enabling identity verification and strengthening the security of voice playback.
Optionally, in the embodiments of the invention, the ear-print features collected in advance are obtained as follows: collect an ear-print press image in advance; preprocess the ear-print press image, where the preprocessing includes at least one of smoothing, binarization, sharpening, and thinning; and use the ear-print features extracted from the ear-print press image as the ear-print features collected in advance.
Optionally, in the embodiments of the invention, extracting the first feature group from the screen touch-press shape graph may include: preprocessing the screen touch-press shape graph, where the preprocessing includes at least one of smoothing, binarization, sharpening, and thinning; and extracting the first feature group from the preprocessed screen touch-press shape image.
In this implementation, the first feature group includes the feature points or feature contours in the current ear-print press image. Note that although terms such as "first" and "second" may be used in the embodiments of the invention to describe feature groups, the feature groups are not limited by these terms; the terms serve only to distinguish one feature group from another. The first feature group may include at least one feature contour, and "first" and "second" do not imply any particular quantity.
S204: if ear-print features exist in the screen touch-press shape graph, play the voice message.
As an optional embodiment, the principle of playing the voice message when ear-print features exist in the screen touch-press shape graph is as follows. If a screen touch-press shape graph can be collected at all, the user is close to the terminal; but the touching part might be a finger or another part of the body rather than the ear, and if the terminal played the voice through the earpiece immediately, the user might well fail to hear it, for example when the user's hand touches the terminal first and the ear approaches only slowly, by which time the message has finished. If instead the terminal checks the touch-press shape graph for ear-print features and triggers playback only when they are present, the user is guaranteed to answer the voice message accurately and promptly, and the privacy of the playback is preserved. As can be seen, the embodiments of the invention can accurately determine the moment at which the user intends to answer and trigger playback precisely, so that the user receives the message content accurately and in time, improving both the security of voice playback and the user experience.
In the method described in Fig. 2, the terminal receives the user's request to play a voice message, collects a screen touch-press shape graph, and recognizes whether ear-print features exist in it; if so, the voice is played in earpiece mode. As can be seen, the invention uses ear-print recognition to trigger real-time playback in earpiece mode, plays voice effectively and securely, lets the user receive the voice content accurately and in time, and improves the user experience.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of a method, disclosed in an embodiment of the invention, of recognizing ear-print features in a screen touch-press shape graph. The method shown in Fig. 3 can be applied in step S203 shown in Fig. 2 above and may include the following steps.
S301: extract a first feature group from the screen touch-press shape graph, the first feature group including at least one contour feature.
In step S301, the first feature group includes at least one contour feature. Note that although terms such as "first" and "second" may be used in the embodiments of the invention to describe feature groups, the feature groups are not limited by these terms; the terms serve only to distinguish one feature group from another. "First" and "second" do not imply any particular quantity.
S302: match the shape features of the first feature group against the ear-print features collected in advance.
As an optional embodiment, matching the shape features of the first feature group against the ear-print features collected in advance may be implemented as follows: query how many shape features of the first feature group are contained in the ear-print features collected in advance; if that count exceeds a count threshold, determine that the screen touch-press shape graph contains ear-print features.
In this implementation, because users press the ear against the screen to different degrees when answering a voice message, the press shape graph is not necessarily the press image of a complete ear. It therefore suffices to match part of the ear-print features collected in advance in order to conclude that the screen touch-press shape graph contains ear features. Accordingly, the count threshold can be set to a low value, for example 1 or 2; in practice, of course, the threshold can be configured by the algorithm designer according to the actual situation and is not specifically limited here.
For example, if the feature-point count threshold is set to 2 and the terminal finds that the ear press-shape database contains 3 of the feature points in the first feature group, the terminal determines that the screen touch-press shape graph contains ear features.
As an optional embodiment, the ear-print features collected in advance are obtained as follows: collect an ear-print press image in advance; preprocess the ear-print press image, where the preprocessing includes at least one of smoothing, binarization, sharpening, and thinning; and use the ear-print features extracted from the ear-print press image as the ear-print features collected in advance.
S303: if the match succeeds, confirm that ear-print features exist in the screen touch-press shape graph.
In the method described in Fig. 3, a first feature group including at least one contour feature is extracted from the screen touch-press shape graph; the shape features of the first feature group are then matched against the ear-print features collected in advance; and, if the match succeeds, the screen touch-press shape graph is confirmed to contain ear-print features. As can be seen, the embodiments of the invention can accurately and efficiently identify the ear-print features in a screen touch-press shape graph, providing the ear-print recognition foundation for the voice playback method.
Referring to Fig. 4, Fig. 4 is a schematic flowchart of another method, disclosed in an embodiment of the invention, of recognizing ear-print features in a screen touch-press shape graph. The method shown in Fig. 4 can be applied in step S203 shown in Fig. 2 above and may include the following steps.
S401: extract a first feature group from the screen touch-press shape graph, the first feature group including at least one contour feature.
In step S401, the definition and explanation of the first feature group are the same as in Fig. 3.
S402: match the shape features of the first feature group against the ear-print features collected in advance.
Optionally, matching the shape features of the first feature group against the ear-print features collected in advance is implemented in the same way as in Fig. 3.
S403: if the match succeeds, match the ear-print features in the first feature group against the pre-configured ear features of the target object.
As an optional embodiment, the pre-configured ear features of the target object include K unique marker features, where K is a positive integer. Matching the ear-print features in the first feature group against the pre-configured ear features of the target object may be implemented as follows: match the ear-print features in the first feature group against the pre-configured ear features of the target object to obtain M target features, where M is an integer greater than 1; query how many of the M target features are unique marker features, obtaining a count K, where K is a positive integer less than or equal to M; and judge whether K exceeds a preset unique-marker count threshold. If so, the match is determined to be successful.
For example, suppose the pre-configured ear features of the user include 3 unique marker features. If matching the ear-print features in the first feature group against the pre-configured ear features of the user yields 4 identical target features, of which 3 are unique marker features, and the preset unique-marker count threshold is 2, the match is determined to be successful and the ear is determined to be the user's ear.
S404: if the match succeeds, confirm that target features exist in the screen touch-press shape graph.
In step S404, the target features include the ear-print features of the target object, and the target object may specifically be a particular user of the terminal.
In the method described in Fig. 4, the terminal first determines that the screen touch-press shape graph contains ear-print features and then determines that those features are the ear-print features of the target object, enabling identity verification and strengthening the security of voice playback.
Referring to Fig. 5, Fig. 5 is a schematic flowchart of an image preprocessing method disclosed in an embodiment of the invention. It can be applied in the steps of Fig. 2, Fig. 3, or Fig. 4 that recognize ear-print features in a screen touch-press shape graph, to preprocess the ear-print press image or the screen touch-press shape graph. It should be understood that, in the embodiments of the invention, preprocessing the ear-print press image or the screen touch-press shape graph may perform any one or more of the steps of the image preprocessing method in Fig. 5; the flow described in Fig. 5 is only an example and is not specifically limiting. As shown in Fig. 5, the image preprocessing method may include the following steps.
S501: apply spatial median filtering to the image to obtain a filtered image.
Optionally, the image may be any image that supports digital processing; in the embodiments of the invention it may specifically be the ear-print press image or the screen touch-press shape graph of Fig. 2, Fig. 3, or Fig. 4.
The median filtering in this step is intended to protect the details of the image and preserve its authenticity as far as possible. Because the embodiments of the invention need to recognize whether ear features exist in the screen press shape graph, the fine details of the image must be protected before feature extraction.
In the embodiments of the invention, the concrete operation of the median filtering is: represent the digitized image as a matrix [Xij], where i and j give the position of a point; let W[Xij] denote a window operation on the image points centred on point (i, j); then take the median of all points in the window W[Xij].
S502: Smooth the filtered image to obtain a smoothed image with reduced noise.
The smoothing in the above step S502 enlarges the prominent regions, low-frequency components and trunk parts of the image, and suppresses image noise and interfering high-frequency components. The purpose is to make the image brightness vary gently and gradually, reducing abrupt gradients and improving image quality. Image smoothing methods include interpolation, linear smoothing, convolution, and the like. Optionally, in the embodiments of the present invention, a linear smoothing operation is applied to the image: a linear-average sliding window of N pixels is set, where N is a positive integer greater than 1, and all pixel points within the sliding window are filtered by linear averaging, yielding a third image. After smoothing, the noise of the image is reduced.
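The linear-average sliding window can be sketched as a moving-average filter over one image row. Applying it row-wise, leaving border pixels unchanged, and using integer division to keep grey values integral are all assumptions made for illustration.

```python
def linear_average_smooth(row, n=3):
    """Smooth one image row with a linear-average sliding window.

    Each output pixel is the mean of the n pixels in the window
    centered on it; pixels too close to the border are unchanged.
    n must be an odd positive integer greater than 1.
    """
    r = n // 2
    out = row[:]
    for j in range(r, len(row) - r):
        out[j] = sum(row[j - r:j + r + 1]) // n
    return out
```

A sharp brightness spike is spread out into a gentle gradient, which is exactly the "gentle gradual change of brightness" the step aims for.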
S503: Sharpen the smoothed image to obtain a sharpened image with clear edges.
The sharpening (image sharpening) in the above step S503 compensates the contours of the image and enhances its edges and grey-level transitions, making the image clearer; it is divided into two classes, spatial-domain processing and frequency-domain processing. Because image smoothing often blurs the borders and contours of an image, an image sharpening technique is applied to make the edges of the image clear again.
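The patent does not name a concrete sharpening operator; a common spatial-domain choice is Laplacian edge enhancement, sketched below under that assumption.

```python
def laplacian_sharpen(image):
    """Sharpen a grayscale image (list of lists) with a 4-neighbour
    Laplacian: out = 5*center - (up + down + left + right),
    clamped to [0, 255]; border pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            v = (5 * image[i][j]
                 - image[i - 1][j] - image[i + 1][j]
                 - image[i][j - 1] - image[i][j + 1])
            out[i][j] = max(0, min(255, v))
    return out
```

Flat regions pass through unchanged, while grey-level transitions are amplified, which restores the edge contrast lost during smoothing.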
S504: Binarize the sharpened image to obtain a black-and-white binary image.
As an optional implementation, binarizing the above sharpened image to obtain the binary image specifically includes: presetting a pixel threshold T; judging, for each pixel of the sharpened image, whether its value exceeds the threshold T; if it does, setting the grey value of that pixel to 255, and otherwise setting it to 0. The finally output binary image has only a black-and-white visual effect. This method simplifies later processing and increases the speed of image processing.
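The threshold rule of step S504 translates directly into code; the default value 128 below is only an example, since the patent leaves the threshold T to be preset.

```python
def binarize(image, t=128):
    """Binarize a grayscale image: pixels above the preset
    threshold t become 255 (white), all others become 0 (black)."""
    return [[255 if px > t else 0 for px in row] for row in image]
```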
S505: Thin the binary image to obtain a thinned image containing the image skeleton.
Optionally, thinning the binary image includes: removing the points of the binary image layer by layer while retaining the shape of the image, until the skeleton of the image is obtained; the skeleton may be the axis of the image, and the image containing the image skeleton after thinning is defined as the thinned image.
As an optional implementation, the thinning of the binary image may specifically use a burning algorithm, i.e. removing the image border iteratively, the border being obtained with scan lines; or it may be realized with the Zhang-Suen fast parallel thinning algorithm.
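The Zhang-Suen fast parallel thinning algorithm mentioned above can be sketched as follows. It peels border pixels away in two parallel sub-iterations per pass, layer by layer, until only a one-pixel-wide skeleton remains (1 = foreground, 0 = background); the in-place modification of the input is an implementation convenience, not part of the algorithm.

```python
def zhang_suen_thin(img):
    """Zhang-Suen fast parallel thinning of a binary image
    (list of lists, 1 = foreground, 0 = background)."""
    h, w = len(img), len(img[0])
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for i in range(1, h - 1):
                for j in range(1, w - 1):
                    if img[i][j] != 1:
                        continue
                    # Neighbours P2..P9, clockwise from the pixel above
                    p = [img[i-1][j], img[i-1][j+1], img[i][j+1],
                         img[i+1][j+1], img[i+1][j], img[i+1][j-1],
                         img[i][j-1], img[i-1][j-1]]
                    b = sum(p)                        # non-zero neighbours
                    a = sum(p[k] == 0 and p[(k + 1) % 8] == 1
                            for k in range(8))        # 0 -> 1 transitions
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0:
                        ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if ok:
                        to_delete.append((i, j))
            for i, j in to_delete:      # parallel: delete after the scan
                img[i][j] = 0
            if to_delete:
                changed = True
    return img
```

Because each sub-iteration marks pixels first and deletes them afterwards, all pixels of one pass are judged against the same image state, which is what makes the algorithm "parallel".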
In the method described in Fig. 5, the image is preprocessed by median filtering, smoothing, sharpening, binarization and thinning, finally producing a clean, noise-free image that contains the image skeleton. In this way the preprocessing of the image is realized, which facilitates the subsequent feature extraction on the image.
Referring to Fig. 6, Fig. 6 is a schematic structural diagram of a terminal disclosed in an embodiment of the present invention, which can be used to perform the speech playback method disclosed in the embodiments of the present invention. As shown in Fig. 6, the terminal 600 may include:
a collecting unit 601, configured to collect a screen touch pressing shape graph;
a feature identification unit 602, configured to recognize whether an ear line feature exists in the screen touch pressing shape graph;
a broadcast unit 603, configured to play voice through the handset mode when the feature identification unit 602 determines that the target feature exists in the screen touch pressing shape graph.
Optionally, in the above terminal, the feature identification unit 602 is specifically configured to:
extract a first feature group from the screen touch pressing shape graph, the first feature group including at least one contour feature;
match the shape features of the first feature group against the pre-collected ear line feature;
if the match is successful, confirm that the ear line feature exists in the screen touch pressing shape graph.
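The patent does not specify how the contour matching itself is performed. One illustrative possibility is a scale-invariant centroid-distance signature, sketched below; the function names, the signature length n, and the match threshold are all hypothetical.

```python
import math

def shape_signature(contour, n=32):
    """Describe a closed contour (list of (x, y) points) by the
    distances from its centroid to n evenly sampled points,
    normalized by the mean distance (scale-invariant)."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    step = len(contour) / n
    dists = [math.hypot(contour[int(k * step)][0] - cx,
                        contour[int(k * step)][1] - cy)
             for k in range(n)]
    mean = sum(dists) / n
    return [d / mean for d in dists]

def ear_line_match(candidate, reference, threshold=0.1):
    """Return True when the mean absolute difference between the two
    shape signatures is below the (hypothetical) match threshold."""
    a, b = shape_signature(candidate), shape_signature(reference)
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return diff < threshold
```

Because the signature is normalized by the mean centroid distance, a contour matches a scaled copy of itself, while a clearly different shape is rejected.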
Optionally, in the above terminal, the target feature includes an ear line feature of a target object, and the feature identification unit 602 is specifically configured to:
extract a first feature group from the screen touch pressing shape graph, the first feature group including at least one contour feature;
match the shape features of the first feature group against the pre-collected ear line feature;
if the match is successful, match the ear line feature in the first feature group against the pre-configured ear feature of the target object;
if this match is also successful, confirm that the ear line feature exists in the screen touch pressing shape graph.
Optionally, the above terminal further includes:
the collecting unit 601, further configured to collect an ear line pressing image in advance;
the feature identification unit 602, further configured to preprocess the ear line pressing image, the preprocessing including at least one of smoothing, binarization, sharpening and thinning, and to extract the ear line feature in the ear line pressing image as the pre-collected ear line feature.
Optionally, in the above terminal, the feature identification unit 602 is further specifically configured to:
preprocess the screen touch pressing shape graph, the preprocessing including at least one of smoothing, binarization, sharpening and thinning;
extract the first feature group from the preprocessed screen touch pressing shape graph.
Specifically, the terminal introduced in the embodiments of the present invention can implement part or all of the flow of the method embodiments described with reference to Fig. 2, Fig. 3, Fig. 4 or Fig. 5.
The units or subunits in all embodiments of the present invention may be realized by a general-purpose integrated circuit, such as a CPU, or by an ASIC (Application Specific Integrated Circuit).
Fig. 7 is a schematic structural diagram of a terminal provided by the present application. The terminal 700 includes at least one processor 701, at least one memory 702 and at least one communication interface 703. The processor 701, the memory 702 and the communication interface 703 are connected by a communication bus and communicate with one another through it.
The processor 701 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the above solutions.
The communication interface 703 is used for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), a wireless local area network (Wireless Local Area Network, WLAN), and the like.
The memory 702 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions; a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions; an electrically erasable programmable read-only memory (EEPROM); a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.); a magnetic disk storage medium or other magnetic storage device; or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through the bus, or it may be integrated with the processor.
The memory 702 is used to store the application program code for executing the above solutions, and the processor 701 is used to call the application program code stored in the memory 702 to perform the following operations:
collecting a screen touch pressing shape graph; recognizing whether an ear line feature exists in the screen touch pressing shape graph; and, if an ear line feature exists in the screen touch pressing shape graph, playing voice through the handset mode.
Optionally, recognizing whether an ear line feature exists in the screen touch pressing shape graph includes the following operations: extracting a first feature group from the screen touch pressing shape graph, the first feature group including at least one contour feature; matching the shape features of the first feature group against the pre-collected ear line feature; and, if the match is successful, confirming that the ear line feature exists in the screen touch pressing shape graph.
Optionally, the target feature includes an ear line feature of a target object, and recognizing whether an ear line feature exists in the screen touch pressing shape graph includes the following operations: extracting a first feature group from the screen touch pressing shape graph, the first feature group including at least one contour feature; matching the shape features of the first feature group against the pre-collected ear line feature; if the match is successful, matching the ear line feature in the first feature group against the pre-configured ear feature of the target object; and, if this match is also successful, confirming that the ear line feature exists in the screen touch pressing shape graph.
Optionally, extracting the first feature group from the screen touch pressing shape graph specifically includes the following operations: preprocessing the screen touch pressing shape graph, the preprocessing including at least one of smoothing, binarization, sharpening and thinning; and extracting the first feature group from the preprocessed screen touch pressing shape graph.
Optionally, the processor 701 is further used to call the application program code stored in the memory 702 to perform the following operations:
collecting an ear line pressing image in advance; preprocessing the ear line pressing image, the preprocessing including at least one of smoothing, binarization, sharpening and thinning; and extracting the ear line feature in the ear line pressing image as the pre-collected ear line feature.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of action combinations. Those skilled in the art should know, however, that the present invention is not limited by the described order of actions, because according to the present application certain steps may be carried out in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for a part that is not described in detail in one embodiment, reference may be made to the relevant descriptions of the other embodiments.
The steps in the methods of the embodiments of the present invention may be reordered, merged and deleted according to actual needs.
The units in the user terminal of the embodiments of the present invention may be combined, divided and deleted according to actual needs.
A person of ordinary skill in the art will appreciate that all or part of the flow in the methods of the above embodiments may be completed by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and when executed may include the flow of each of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The method and terminal for speech playback disclosed in the embodiments of the present invention have been described in detail above. Specific examples have been applied herein to explain the principles and implementations of the present invention, and the explanation of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, a person of ordinary skill in the art will, according to the idea of the present invention, make changes in the specific implementations and application scope. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (10)
1. A method of speech playback, characterised by comprising:
collecting a screen touch pressing shape graph;
recognizing whether an ear line feature exists in the screen touch pressing shape graph; and
if the ear line feature exists in the screen touch pressing shape graph, playing voice through the handset mode.
2. The method according to claim 1, characterised in that the recognizing whether an ear line feature exists in the screen touch pressing shape graph comprises:
extracting a first feature group from the screen touch pressing shape graph, the first feature group comprising at least one contour feature;
matching the shape features of the first feature group against a pre-collected ear line feature; and
if the match is successful, confirming that the ear line feature exists in the screen touch pressing shape graph.
3. The method according to claim 1, characterised in that the ear line feature comprises an ear line feature of a target object, and the recognizing whether an ear line feature exists in the screen touch pressing shape graph comprises:
extracting a first feature group from the screen touch pressing shape graph, the first feature group comprising at least one contour feature;
matching the shape features of the first feature group against a pre-collected ear line feature;
if the match is successful, matching the ear line feature in the first feature group against the pre-configured ear feature of the target object; and
if this match is also successful, confirming that the ear line feature exists in the screen touch pressing shape graph.
4. The method according to claim 2 or 3, characterised by further comprising:
collecting an ear line pressing image in advance;
preprocessing the ear line pressing image, the preprocessing comprising at least one of smoothing, binarization, sharpening and thinning; and
extracting the ear line feature in the ear line pressing image as the pre-collected ear line feature.
5. The method according to claim 2 or 3, characterised in that the extracting a first feature group from the screen touch pressing shape graph comprises:
preprocessing the screen touch pressing shape graph, the preprocessing comprising at least one of smoothing, binarization, sharpening and thinning; and
extracting the first feature group from the preprocessed screen touch pressing shape graph.
6. A terminal, characterised by comprising:
a collecting unit, configured to collect a screen touch pressing shape graph;
a feature identification unit, configured to recognize whether an ear line feature exists in the screen touch pressing shape graph; and
a broadcast unit, configured to play voice through the handset mode when the feature identification unit determines that the ear line feature exists in the screen touch pressing shape graph.
7. The terminal according to claim 6, characterised in that the feature identification unit is specifically configured to:
extract a first feature group from the screen touch pressing shape graph, the first feature group comprising at least one contour feature;
match the shape features of the first feature group against a pre-collected ear line feature; and
if the match is successful, confirm that the ear line feature exists in the screen touch pressing shape graph.
8. The terminal according to claim 6, characterised in that the target feature comprises an ear line feature of a target object, and the feature identification unit is specifically configured to:
extract a first feature group from the screen touch pressing shape graph, the first feature group comprising at least one contour feature;
match the shape features of the first feature group against a pre-collected ear line feature;
if the match is successful, match the ear line feature in the first feature group against the pre-configured ear feature of the target object; and
if this match is also successful, confirm that the ear line feature exists in the screen touch pressing shape graph.
9. The terminal according to claim 6 or 7, characterised in that the terminal further comprises:
the collecting unit, further configured to collect an ear line pressing image in advance; and
the feature identification unit, further configured to preprocess the ear line pressing image, the preprocessing comprising at least one of smoothing, binarization, sharpening and thinning, and to extract the ear line feature in the ear line pressing image as the pre-collected ear line feature.
10. The terminal according to claim 6 or 7, characterised in that the feature identification unit is specifically configured to:
preprocess the screen touch pressing shape graph, the preprocessing comprising at least one of smoothing, binarization, sharpening and thinning; and
extract the first feature group from the preprocessed screen touch pressing shape graph.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710301410.3A CN107277224A (en) | 2017-05-02 | 2017-05-02 | The method and terminal of a kind of speech play |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107277224A true CN107277224A (en) | 2017-10-20 |
Family
ID=60073651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710301410.3A Withdrawn CN107277224A (en) | 2017-05-02 | 2017-05-02 | The method and terminal of a kind of speech play |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107277224A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010010527A (en) * | 1999-07-21 | 2001-02-15 | 서평원 | Method of Transformating Automatically Speaker Phone Mode in the Telephone |
CN101488081A (en) * | 2008-01-14 | 2009-07-22 | 鸿富锦精密工业(深圳)有限公司 | Audio device and its playing apparatus and method |
CN105120089A (en) * | 2015-08-17 | 2015-12-02 | 惠州Tcl移动通信有限公司 | A method for mobile terminal automatic telephone answering and a mobile terminal |
CN106231122A (en) * | 2016-09-05 | 2016-12-14 | 中科创达软件股份有限公司 | A kind of voice messaging player method, controller and system |
- 2017-05-02: application CN201710301410.3A filed; published as CN107277224A; status: not active (withdrawn)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010010527A (en) * | 1999-07-21 | 2001-02-15 | 서평원 | Method of Transformating Automatically Speaker Phone Mode in the Telephone |
CN101488081A (en) * | 2008-01-14 | 2009-07-22 | 鸿富锦精密工业(深圳)有限公司 | Audio device and its playing apparatus and method |
CN105120089A (en) * | 2015-08-17 | 2015-12-02 | 惠州Tcl移动通信有限公司 | A method for mobile terminal automatic telephone answering and a mobile terminal |
CN106231122A (en) * | 2016-09-05 | 2016-12-14 | 中科创达软件股份有限公司 | A kind of voice messaging player method, controller and system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109116982A (en) * | 2018-07-09 | 2019-01-01 | Oppo广东移动通信有限公司 | Information broadcasting method, device and electronic device |
CN109116982B (en) * | 2018-07-09 | 2021-09-14 | Oppo广东移动通信有限公司 | Information playing method and device and electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110706179B (en) | Image processing method and electronic equipment | |
US10496804B2 (en) | Fingerprint authentication method and system, and terminal supporting fingerprint authentication | |
CN111464716B (en) | Certificate scanning method, device, equipment and storage medium | |
CN106127481B (en) | A kind of fingerprint method of payment and terminal | |
EP3637290B1 (en) | Unlocking control method and related product | |
CN105868613A (en) | Biometric feature recognition method, biometric feature recognition device and mobile terminal | |
WO2019024717A1 (en) | Anti-counterfeiting processing method and related product | |
CN109690563B (en) | Fingerprint registration method, terminal and computer-readable storage medium | |
CN107451454B (en) | Unlocking control method and related product | |
CN110263667B (en) | Image data processing method and device and electronic equipment | |
CN103136508A (en) | Gesture identification method and electronic equipment | |
WO2019011098A1 (en) | Unlocking control method and relevant product | |
CN106203326B (en) | A kind of image processing method, device and mobile terminal | |
WO2017005020A1 (en) | Mobile terminal, and method therefor for realizing automatic answering | |
CN109144245B (en) | Equipment control method and related product | |
CN109905393B (en) | E-commerce login method based on cloud security | |
CN110245607B (en) | Eyeball tracking method and related product | |
CN108053371A (en) | A kind of image processing method, terminal and computer readable storage medium | |
US9867134B2 (en) | Electronic device generating finger images at a progressively slower capture rate and related methods | |
CN111599460A (en) | Telemedicine method and system | |
CN111080747B (en) | Face image processing method and electronic equipment | |
CN107193526A (en) | The method and terminal of a kind of speech play | |
CN111199169A (en) | Image processing method and device | |
CN108491780A (en) | Image landscaping treatment method, apparatus, storage medium and terminal device | |
CN105426904B (en) | Photo processing method, device and equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | |
Application publication date: 20171020