CN106713636B - Image data loading method and apparatus, and mobile terminal - Google Patents

Image data loading method and apparatus, and mobile terminal

Info

Publication number
CN106713636B
CN106713636B (application CN201611209252.0A)
Authority
CN
China
Prior art keywords
audio
identification information
image data
comment
audio identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611209252.0A
Other languages
Chinese (zh)
Other versions
CN106713636A
Inventor
车继红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Anyun Century Technology Co Ltd
Original Assignee
Beijing Anyun Century Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Anyun Century Technology Co Ltd
Priority to CN201611209252.0A
Publication of CN106713636A
Application granted
Publication of CN106713636B
Status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention provides an image data loading method, an image data loading apparatus, and a mobile terminal. The method includes: receiving image data carrying audio identification information; recognizing character facial feature data in the image data, and obtaining, from the audio identification information, first audio identification information corresponding to the character facial feature data; obtaining a first audio file corresponding to the first audio identification information; and loading and displaying the image data while playing the first audio file during the display. With this technical solution, the mobile terminal can not only display received image data carrying audio identification information, but can also simultaneously play the audio file synthesized into the image data, so that talking pictures can be sent and received between different mobile terminals, adding to the enjoyment of sharing talking pictures while users communicate with one another.

Description

Image data loading method and apparatus, and mobile terminal
Technical field
The present invention relates to the field of communication technology, and more particularly, to an image data loading method, an image data loading apparatus, and a mobile terminal.
Background art
At present, recording moments of everyday life in photos or videos is something people do frequently. Photography, as an additional function of the mobile phone, has become increasingly popular because mobile phones are small, thin, easy to carry, and easy to operate.
The main purpose of a photo is to preserve a memory, but as time passes such memories gradually fade because the photo carries no surrounding context, while recording, storing, browsing, and sharing video cannot meet people's need for simplicity and convenience. On the one hand, people demand ever better phone photography and more diverse, artistic forms of expression for their photos; on the other hand, they require recording, storing, browsing, and sharing to remain simple to operate. Ordinary photo and camera functions therefore cannot satisfy users' diversified needs.
In the prior art, some camera applications on mobile phones provide a voice-photo camera function that easily keeps image and sound in sync: the audio is saved directly into the photo, producing a voice photo in which picture and sound are unified. When a voice photo is displayed, the audio carried in it is played at the same time, which satisfies users' more diverse photography needs. However, although an existing voice photo can be shot with the voice-photo camera function and played back on the phone that took it, it cannot be transmitted to another phone: if a user sends a voice photo to another phone, it is displayed there as an ordinary photo, and the audio carried in the photo cannot be recognized.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide an image data loading method, an image data loading apparatus, and a mobile terminal that overcome the above problems or at least partially solve them.
According to one aspect of the present invention, an image data loading method applied to a mobile terminal is provided. The method includes:
receiving image data carrying audio identification information;
recognizing character facial feature data in the image data, and obtaining, from the audio identification information, first audio identification information corresponding to the character facial feature data;
obtaining a first audio file corresponding to the first audio identification information;
loading and displaying the image data, and playing the first audio file during the display.
Optionally, obtaining the first audio identification information corresponding to the character facial feature data from the audio identification information includes:
determining a position of the audio identification information in the image data;
invoking a picture parsing interface provided in the mobile terminal;
parsing the data at the position using the picture parsing interface to obtain the audio identification information;
obtaining, according to a preset correspondence between audio identification information and character facial feature data, the first audio identification information corresponding to the character facial feature data from the audio identification information.
Optionally, invoking the picture parsing interface provided in the mobile terminal includes:
obtaining, from interface names of at least one interface opened in advance in the mobile terminal, the interface name corresponding to the picture parsing interface;
invoking the picture parsing interface according to the obtained interface name.
Optionally, invoking the picture parsing interface provided in the mobile terminal includes:
judging whether the picture parsing interface is integrated in the mobile terminal;
if so, invoking the picture parsing interface;
if not, calling the pre-integrated software development kit of the picture parsing interface from the mobile terminal, performing source code parsing on the software development kit of the picture parsing interface, and installing the parsed data in the mobile terminal so that the picture parsing interface is integrated in the mobile terminal; and invoking the picture parsing interface.
Optionally, the position of the audio identification information in the image data is located in the format data of the image data.
Optionally, obtaining the first audio file corresponding to the first audio identification information includes:
sending the first audio identification information to a server, so that the server looks up the first audio file corresponding to the first audio identification information and returns the first audio file to the mobile terminal, wherein the server stores a mapping relationship between the first audio identification information and the first audio file;
receiving the first audio file from the server.
Optionally, playing the first audio file includes:
invoking an audio player in the terminal system;
playing the first audio file using the audio player.
Optionally, the method further includes:
adding and displaying, on the image data, a first identifier for indicating that the current picture is a talking picture;
storing the image data displayed with the first identifier into a preset gallery;
when a viewing instruction by which the user views the image data through the preset gallery is received, invoking the audio player in the terminal system and playing the first audio file using the audio player.
Optionally, the display picture of the image data includes multiple characters; recognizing the character facial feature data in the image data and obtaining the first audio identification information corresponding to the character facial feature data from the audio identification information includes:
recognizing the character facial feature data of each character separately, and obtaining, from the audio identification information, second audio identification information corresponding to each set of character facial feature data.
Optionally, playing the first audio file includes:
adding and displaying, at the position corresponding to each character, a second identifier for triggering a play operation on the second audio file;
when a trigger operation on any second identifier is received, invoking the audio player in the terminal system;
playing, using the audio player, the second audio file corresponding to the character to which the second identifier corresponds.
Optionally, the audio identification information further includes comment audio identification information; the method further includes:
obtaining the comment audio identification information included in the audio identification information;
obtaining, from a server, a comment audio file corresponding to the comment audio identification information, wherein a mapping relationship between the comment audio identification information and the comment audio file is prestored in the server;
playing the comment audio file while the image data is displayed.
Optionally, playing the comment audio file includes:
displaying, while the comment audio file is played, identity information of the commenter corresponding to the comment audio file, the identity information including at least one of an avatar, a name, and a nickname.
Optionally, there are at least two commenters, and the comment audio file includes multiple sub audio files; displaying the identity information of the commenter corresponding to the comment audio file includes:
obtaining a correspondence between each sub audio file in the comment audio file and each commenter;
while the comment audio file is played, determining the currently playing sub audio file, and determining, according to the correspondence between the sub audio files and the commenters, the commenter corresponding to the currently playing sub audio file;
displaying the identity information of the commenter corresponding to the currently playing sub audio file.
Optionally, at least two pieces of image data are stored in the mobile terminal; the method further includes:
performing classified storage on the at least two pieces of image data according to a classification element corresponding to the at least two pieces of image data, the classification element including at least one of commenter information, character facial feature data, and sender information;
wherein the classification manner includes: grouping image data with the same classification element into one class.
Optionally, the method further includes:
searching a predetermined address book for avatar data that matches the character facial feature data, wherein the predetermined address book includes multiple pieces of contact information with avatar data, and the predetermined address book includes a system address book and/or an address book in instant messaging software;
determining the contact information corresponding to the avatar data;
displaying specified information in the contact information on the image data, the specified information including at least one of an avatar, a communication number, and a name.
Optionally, the first audio identification information includes a uniform resource locator (URL) of the first audio file.
According to another aspect of the present invention, an image data loading apparatus provided in a mobile terminal is provided. The apparatus includes:
a receiving module adapted to receive image data carrying audio identification information;
a first obtaining module adapted to recognize character facial feature data in the image data, and obtain, from the audio identification information, first audio identification information corresponding to the character facial feature data;
a second obtaining module adapted to obtain a first audio file corresponding to the first audio identification information;
a display and playing module adapted to load and display the image data, and play the first audio file during the display.
Optionally, the first obtaining module is further adapted to:
determine a position of the audio identification information in the image data;
invoke a picture parsing interface provided in the mobile terminal;
parse the data at the position using the picture parsing interface to obtain the audio identification information;
obtain, according to a preset correspondence between audio identification information and character facial feature data, the first audio identification information corresponding to the character facial feature data from the audio identification information.
Optionally, the first obtaining module is further adapted to:
obtain, from interface names of at least one interface opened in advance in the mobile terminal, the interface name corresponding to the picture parsing interface;
invoke the picture parsing interface according to the obtained interface name.
Optionally, the first obtaining module is further adapted to:
judge whether the picture parsing interface is integrated in the mobile terminal;
if so, invoke the picture parsing interface;
if not, call the pre-integrated software development kit of the picture parsing interface from the mobile terminal, perform source code parsing on the software development kit of the picture parsing interface, and install the parsed data in the mobile terminal so that the picture parsing interface is integrated in the mobile terminal; and invoke the picture parsing interface.
Optionally, the position of the audio identification information in the image data is located in the format data of the image data.
Optionally, the second obtaining module is further adapted to:
send the first audio identification information to a server, so that the server looks up the first audio file corresponding to the first audio identification information and returns the first audio file to the mobile terminal, wherein the server stores a mapping relationship between the first audio identification information and the first audio file;
receive the first audio file from the server.
Optionally, the display and playing module is further adapted to:
invoke the audio player in the terminal system;
play the audio file using the audio player.
Optionally, the apparatus further includes:
an adding module adapted to add and display, on the image data, a first identifier for indicating that the current picture is a talking picture;
a storage module adapted to store the image data displayed with the first identifier into a preset gallery;
a first playing module adapted to, when a viewing instruction by which the user views the image data through the preset gallery is received, invoke the audio player in the terminal system and play the first audio file using the audio player.
Optionally, the display picture of the image data includes multiple characters; the first obtaining module is further adapted to:
recognize the character facial feature data of each character separately, and obtain, from the audio identification information, second audio identification information corresponding to each set of character facial feature data.
Optionally, the display and playing module is further adapted to:
add and display, at the position corresponding to each character, a second identifier for triggering a play operation on the second audio file;
when a trigger operation on any second identifier is received, invoke the audio player in the terminal system;
play, using the audio player, the second audio file corresponding to the character to which the second identifier corresponds.
Optionally, the audio identification information further includes comment audio identification information; the apparatus further includes:
a third obtaining module adapted to obtain the comment audio identification information included in the audio identification information;
a fourth obtaining module adapted to obtain, from a server, a comment audio file corresponding to the comment audio identification information, wherein a mapping relationship between the comment audio identification information and the comment audio file is prestored in the server;
a second playing module adapted to play the comment audio file while the image data is displayed.
Optionally, the second playing module is further adapted to:
display, while the comment audio file is played, identity information of the commenter corresponding to the comment audio file, the identity information including at least one of an avatar, a name, and a nickname.
Optionally, there are at least two commenters, and the comment audio file includes multiple sub audio files; the second playing module is further adapted to:
obtain a correspondence between each sub audio file in the comment audio file and each commenter;
while the comment audio file is played, determine the currently playing sub audio file, and determine, according to the correspondence between the sub audio files and the commenters, the commenter corresponding to the currently playing sub audio file;
display the identity information of the commenter corresponding to the currently playing sub audio file.
Optionally, at least two pieces of image data are stored in the mobile terminal; the apparatus further includes:
a classification module adapted to perform classified storage on the at least two pieces of image data according to a classification element corresponding to the at least two pieces of image data, the classification element including at least one of commenter information, character facial feature data, and sender information;
wherein the classification manner includes: grouping image data with the same classification element into one class.
Optionally, the apparatus further includes:
a searching module adapted to search a predetermined address book for avatar data that matches the character facial feature data, wherein the predetermined address book includes multiple pieces of contact information with avatar data, and the predetermined address book includes a system address book and/or an address book in instant messaging software;
a determining module adapted to determine the contact information corresponding to the avatar data;
a display module adapted to display specified information in the contact information on the image data, the specified information including at least one of an avatar, a communication number, and a name.
Optionally, the first audio identification information includes a uniform resource locator (URL) of the audio file.
According to another aspect of the present invention, a mobile terminal is provided, including a processor and a memory, wherein the memory is configured to store a program for executing the above image data loading method, and the processor is configured to execute the program stored in the memory.
With the technical solution provided by the embodiments of the present invention, when image data carrying audio identification information is received, the character facial feature data in the image data is recognized, the first audio identification information corresponding to the character facial feature data is obtained from the audio identification information, the first audio file corresponding to the first audio identification information is then obtained, and the image data is loaded and displayed while the first audio file is played during the display. As a result, the mobile terminal can not only display received image data carrying audio identification information, but can also simultaneously play the audio file synthesized into the image data, so that talking pictures can be sent and received between different mobile terminals, adding to the enjoyment of sharing talking pictures while users communicate with one another.
The above is merely an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features, and advantages of the present invention more comprehensible, specific embodiments of the present invention are set forth below.
From the following detailed description of specific embodiments of the present invention taken in conjunction with the accompanying drawings, the above and other objects, advantages, and features of the present invention will become clearer to those skilled in the art.
Brief description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numerals are used to refer to the same parts. In the drawings:
Fig. 1 is a schematic flowchart of an image data loading method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image data loading method according to a first specific embodiment of the present invention;
Fig. 3 is a schematic flowchart of an image data loading method according to a second specific embodiment of the present invention;
Fig. 4 is a schematic block diagram of an image data loading apparatus according to an embodiment of the present invention;
Fig. 5 is a block diagram of part of the structure of a mobile phone related to the mobile terminal provided by an embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.
Fig. 1 is a schematic flowchart of an image data loading method according to an embodiment of the present invention. As shown in Fig. 1, the method is applied to a mobile terminal and may generally include the following steps S101-S104:
Step S101: receive image data carrying audio identification information.
Step S102: recognize character facial feature data in the image data, and obtain, from the audio identification information, first audio identification information corresponding to the character facial feature data.
Step S103: obtain a first audio file corresponding to the first audio identification information.
Step S104: load and display the image data, and play the first audio file during the display.
With the technical solution provided by the embodiments of the present invention, when image data carrying audio identification information is received, the character facial feature data in the image data is recognized, the first audio identification information corresponding to the character facial feature data is obtained from the audio identification information, the first audio file corresponding to the first audio identification information is then obtained, and the image data is loaded and displayed while the first audio file is played during the display. As a result, the mobile terminal can not only display received image data carrying audio identification information, but can also simultaneously play the audio file synthesized into the image data, so that talking pictures can be sent and received between different mobile terminals, adding to the enjoyment of sharing talking pictures while users communicate with one another.
Steps S101-S104 are described in detail below.
Step S101 is performed first, that is, image data carrying audio identification information is received. The audio identification information may be a uniform resource locator (URL), or it may be preset number information. When the image data carries audio identification information, this indicates that the image data is a talking picture. In one embodiment, when the mobile terminal receives such image data, an identifier indicating that the image data carries audio identification information may be added to and displayed on the image data, for example a musical note symbol; when the user sees the note identifier on the image data, the user knows that the image data carries audio identification information.
After the image data carrying the audio identification information is received, step S102 is performed, that is, the character facial feature data in the image data is recognized, and the first audio identification information corresponding to the character facial feature data is obtained from the audio identification information. The display picture of the image data may include one or more characters, each with its own character facial feature data, so the mobile terminal may recognize one or more sets of character facial feature data from the image data. When only one set of character facial feature data is recognized, the audio identification information is the first audio identification information corresponding to that set of character facial feature data; when multiple sets of character facial feature data are recognized, the audio identification information includes multiple pieces of first audio identification information, each corresponding to one set of character facial feature data.
In one embodiment, the first audio identification information corresponding to the character facial feature data may be obtained from the audio identification information according to the following steps:
Step 1: determine the position of the audio identification information in the image data. The position of the audio identification information in the image data may be located in the format data of the image data, and may be the start position, the end position, or a specified middle position of the format data. For example, assuming the audio identification information is located at the end of the format data of the image data and the audio identification information is a URL, the format data of the image data may take the following form: 123.jpg.http://www.1111, where http://www.1111 is the audio identification information.
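By way of illustration only, the following minimal Kotlin sketch shows how such a format data string could be split into the image name and the appended audio identification information; the type and function names, and the assumption that every identifier is an appended URL, are illustrative and are not taken from any particular parsing interface.

```kotlin
// Minimal sketch: split the audio identification information (assumed here to be
// URLs appended after the image file name) out of a format data string such as
// "123.jpg.http://www.1111". All names and the delimiter convention are assumptions.
data class ParsedImageData(val imageName: String, val audioIds: List<String>)

fun parseFormatData(formatData: String): ParsedImageData {
    val first = formatData.indexOf("http://")
    if (first < 0) return ParsedImageData(formatData, emptyList())   // ordinary picture, no identifiers
    val imageName = formatData.substring(0, first).trimEnd('.')
    // Every "http://..." fragment after the image name is treated as one identifier.
    val audioIds = formatData.substring(first)
        .split("http://")
        .filter { it.isNotBlank() }
        .map { "http://" + it.trimEnd('.') }
    return ParsedImageData(imageName, audioIds)
}

fun main() {
    println(parseFormatData("123.jpg.http://www.1111"))
    // ParsedImageData(imageName=123.jpg, audioIds=[http://www.1111])
}
```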
Step 2: invoke the picture parsing interface provided in the mobile terminal. In one embodiment, the interface name corresponding to the picture parsing interface may first be obtained from the interface names of at least one interface opened in advance in the mobile terminal, and the picture parsing interface is then invoked according to the obtained interface name. In another embodiment, it is first judged whether the picture parsing interface is integrated in the mobile terminal; if so, the picture parsing interface is invoked; if not, the pre-integrated software development kit of the picture parsing interface is called from the mobile terminal, source code parsing is performed on the software development kit, and the parsed data is installed in the mobile terminal so that the picture parsing interface is integrated in the mobile terminal and can then be invoked.
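The check-then-integrate logic described above might be organized roughly as in the following sketch; `PictureParsingInterface`, `loadBundledSdk`, `parseSdkSource`, and `installParsedData` are placeholders for terminal-specific components and are not a real SDK API.

```kotlin
// Illustrative only: the interface and the SDK-related calls are placeholders for
// whatever picture parsing component the terminal actually ships with.
interface PictureParsingInterface {
    fun parse(data: ByteArray, position: IntRange): String   // returns the audio identification information
}

object PictureParserLocator {
    private var integrated: PictureParsingInterface? = null

    fun obtain(): PictureParsingInterface {
        integrated?.let { return it }              // interface already integrated in the terminal
        val sdk = loadBundledSdk()                 // pre-integrated SDK package (assumed helper)
        val parsedSdk = parseSdkSource(sdk)        // the "source code parsing" step
        integrated = installParsedData(parsedSdk)  // install so later calls find the interface integrated
        return integrated!!
    }

    // Placeholders standing in for terminal-specific operations.
    private fun loadBundledSdk(): ByteArray = ByteArray(0)
    private fun parseSdkSource(sdk: ByteArray): ByteArray = sdk
    private fun installParsedData(parsedSdk: ByteArray): PictureParsingInterface =
        object : PictureParsingInterface {
            override fun parse(data: ByteArray, position: IntRange): String =
                String(data.sliceArray(position))
        }
}
```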
Step 3: parse the data at the determined position using the picture parsing interface to obtain the audio identification information.
Step 4: obtain, according to the preset correspondence between audio identification information and character facial feature data, the first audio identification information corresponding to the character facial feature data from the audio identification information. When only one set of character facial feature data is recognized, the obtained audio identification information is the first audio identification information corresponding to that set of character facial feature data; when multiple sets of character facial feature data are recognized (that is, the display picture of the image data includes multiple characters), second audio identification information corresponding to each set of character facial feature data is obtained from the audio identification information.
After the character facial feature data in the image data is recognized and the first audio identification information corresponding to the character facial feature data is obtained, step S103 is performed, that is, the first audio file corresponding to the first audio identification information is obtained. The first audio file may be stored locally on the mobile terminal or on a server. When the first audio file is stored locally on the mobile terminal, the corresponding first audio file can be obtained locally according to the first audio identification information. To save local storage space on the mobile terminal, the first audio file is usually stored on the server, and the server stores the mapping relationship between the first audio identification information and the first audio file; the first audio identification information may therefore first be sent to the server, the server looks up the first audio file corresponding to the first audio identification information according to that mapping relationship, and the first audio file found is returned to the mobile terminal.
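Assuming the first audio identification information is a URL that the server resolves through its stored mapping, the request from the terminal could look roughly like the following sketch (standard java.net classes; error handling is simplified).

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Minimal sketch: ask the server for the audio file mapped to the first audio
// identification information, assumed here to be a URL the server can resolve.
fun fetchFirstAudioFile(firstAudioId: String): ByteArray? {
    val connection = URL(firstAudioId).openConnection() as HttpURLConnection
    return try {
        connection.requestMethod = "GET"
        if (connection.responseCode == HttpURLConnection.HTTP_OK) {
            connection.inputStream.use { it.readBytes() }   // the audio file returned to the terminal
        } else {
            null                                            // no mapping found on the server
        }
    } finally {
        connection.disconnect()
    }
}
```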
After the first audio file corresponding to the first audio identification information is obtained, step S104 is performed, that is, the image data is loaded and displayed, and the first audio file is played during the display. The effect of this step is that the mobile terminal can play the audio file synthesized into the image data while displaying the image data, so that the user can not only view the display picture of the image data, but can also learn other information related to the image data from the audio file synthesized into it. When the first audio file is played, the audio player in the terminal system of the mobile terminal may be invoked, and the first audio file is played using that audio player. In other words, no special player needs to be installed on the mobile terminal in order to play the audio file synthesized into the image data; the mobile terminal's own audio player is sufficient, which is very convenient to use.
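Under the assumption that the terminal runs Android and that the "audio player in the terminal system" is android.media.MediaPlayer, playback of an already downloaded first audio file could be sketched as follows.

```kotlin
import android.media.MediaPlayer

// Sketch under the assumption that the terminal is Android-based and its system
// audio player is android.media.MediaPlayer; audioPath points at the first audio
// file already saved on the terminal.
fun playFirstAudioFile(audioPath: String): MediaPlayer {
    val player = MediaPlayer()
    player.setDataSource(audioPath)                    // no special third-party player is needed
    player.prepare()
    player.start()                                     // plays while the image data stays on screen
    player.setOnCompletionListener { it.release() }    // free the player once playback finishes
    return player
}
```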
In one embodiment, the first audio file may also be played as follows: first, an identifier for triggering a play operation on the second audio file is added to and displayed at the position corresponding to each character; second, when a trigger operation on any identifier is received, the audio player in the terminal system is invoked; and third, the second audio file corresponding to the character to which that identifier corresponds is played using the audio player.
For example, the display picture of the image data includes two characters, character A and character B, each with its own audio file. When loading and displaying the image data, the mobile terminal adds and displays, at the position corresponding to character A and at the position corresponding to character B, an identifier for triggering a play operation on the audio file; the identifier may be any preset symbol, icon, text, or other form of mark. When the user clicks the identifier at the position corresponding to character A, the mobile terminal plays the audio file corresponding to character A; when the user clicks the identifier at the position corresponding to character B, the mobile terminal plays the audio file corresponding to character B.
Of course, if the display picture of the image data includes only one character, the audio file may also be played in this manner, that is, an identifier for triggering a play operation on the audio file is added to and displayed at the position corresponding to that character; the identifier may be any preset symbol, icon, text, or other form of mark, and when the user triggers it, the mobile terminal plays the audio file synthesized into the image data. In this way, the user can choose whether to play the audio file according to current needs, which avoids the audio file suddenly playing in an environment where playing sound is inconvenient (such as in class or in a meeting). A sketch of this trigger logic is given below.
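A minimal, toolkit-independent sketch of the trigger logic; the types, the one-dimensional stand-in for screen regions, and the file paths are illustrative assumptions.

```kotlin
// Illustrative data types: each recognized character carries the screen region of
// its identifier (the tappable mark) and the path of the audio file it triggers.
data class CharacterMark(
    val characterId: String,
    val markRegion: IntRange,        // simplified one-dimensional stand-in for the mark's screen region
    val audioPath: String
)

// Only the character whose mark was actually tapped is played, so no audio starts
// unexpectedly in environments where playing sound is inconvenient.
fun onPictureTapped(tapPosition: Int, marks: List<CharacterMark>, play: (String) -> Unit) {
    marks.firstOrNull { tapPosition in it.markRegion }?.let { play(it.audioPath) }
}

fun main() {
    val marks = listOf(
        CharacterMark("characterA", 0..99, "/storage/audio_a.mp3"),
        CharacterMark("characterB", 100..199, "/storage/audio_b.mp3")
    )
    onPictureTapped(150, marks) { path -> println("playing $path") }   // audio of character B
}
```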
In one embodiment, the above method further includes the following steps: first, a mark for indicating that the current picture is a talking picture is added to and displayed on the image data; the mark may be any form such as a character, an icon, or text. Second, the image data displayed with the mark is stored into a preset gallery. Third, when a viewing instruction by which the user views the image data through the preset gallery is received, the audio player in the terminal system is invoked, and the audio file synthesized into the image data is played using the audio player. For example, a musical note symbol is added to and displayed at any position on the image data (such as the lower left or lower right corner), and the image data carrying the note symbol is stored in the preset gallery of the mobile terminal; when the user opens that image data from the preset gallery, the mobile terminal plays the audio file synthesized into it.
In addition, when multiple pieces of image data carrying audio identification information are stored in the mobile terminal, those pieces of image data may also be stored in classes. Specifically, the multiple pieces of image data may be classified and stored according to their corresponding classification element, the classification element including at least one of commenter information, character facial feature data, and sender information, and the classification manner being to group image data with the same classification element into one class. Here, image data with the same character facial feature data means image data whose display pictures include the same character; commenter information refers to the communication information (such as a communication number or an instant messaging account) and/or identity information (such as an avatar, a name, or a nickname) of the corresponding commenter when the audio file is an audio comment on the image data made by another user; and sender information refers to the communication information (such as a communication number or an instant messaging account) and/or identity information (such as an avatar, a name, or a nickname) of the user who sent the image data.
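The classified-storage rule can be sketched as a simple grouping operation; the record fields below are assumptions made only for illustration.

```kotlin
// Sketch of the classified-storage rule: image data sharing the same value of the
// chosen classification element forms one class. Field names are assumptions.
data class StoredPicture(
    val fileName: String,
    val commenter: String?,
    val faceFeatureId: String?,
    val sender: String?
)

enum class ClassificationElement { COMMENTER, FACE_FEATURE, SENDER }

fun classify(
    pictures: List<StoredPicture>,
    element: ClassificationElement
): Map<String?, List<StoredPicture>> =
    pictures.groupBy {
        when (element) {
            ClassificationElement.COMMENTER -> it.commenter
            ClassificationElement.FACE_FEATURE -> it.faceFeatureId
            ClassificationElement.SENDER -> it.sender
        }
    }
```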
In one embodiment, the audio identification information further includes comment audio identification information; the above method then further includes the following steps: first, the comment audio identification information included in the audio identification information is obtained; second, the comment audio file corresponding to the comment audio identification information is obtained from the server, wherein the mapping relationship between the comment audio identification information and the comment audio file is prestored in the server; and third, the comment audio file is played while the image data is displayed.
In addition, while the comment audio file is played, the identity information of the commenter corresponding to the comment audio file may also be displayed, the identity information including at least one of an avatar, a name, and a nickname. When the same image data has multiple commenters and the comment audio file includes multiple sub audio files, the identity information of each commenter may be displayed as follows: first, the correspondence between each sub audio file in the comment audio file and each commenter is obtained; second, while the comment audio file is played, the currently playing sub audio file is determined, and the commenter corresponding to the currently playing sub audio file is determined according to the correspondence between the sub audio files and the commenters; and third, the identity information of the commenter corresponding to the currently playing sub audio file is displayed.
For example, a piece of image data has multiple commenters, and the comment audio file synthesized into it includes multiple sub audio files; specifically, commenter A corresponds to sub audio file a, and commenter B corresponds to sub audio file b. Assume that when the comment audio file is played, sub audio file a is played first; the identity information of commenter A is then displayed on the image data. When sub audio file a finishes and sub audio file b starts playing, the identity information of commenter A is removed from the image data, and the identity information of commenter B is displayed on it instead.
It can be seen that, in this embodiment, by displaying the identity information of each commenter on the image data according to the currently playing sub audio file, the user can learn, when viewing the image data, the commenter information of the currently playing sub audio file, which provides the user with more information related to the image data.
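A sketch of how the terminal could keep the displayed commenter identity in step with the currently playing sub audio file is given below; the types and callback shape are illustrative assumptions.

```kotlin
// Sketch: as playback moves from one sub audio file to the next, look up and show
// the matching commenter's identity information. Types and callbacks are illustrative.
data class CommenterIdentity(val avatar: String?, val name: String?, val nickname: String?)

class CommentPlaybackTracker(
    private val subFileToCommenter: Map<String, CommenterIdentity>,   // correspondence obtained beforehand
    private val showIdentity: (CommenterIdentity) -> Unit,
    private val clearIdentity: () -> Unit
) {
    // Called whenever the player reports that a new sub audio file has started.
    fun onSubAudioFileStarted(subFileId: String) {
        clearIdentity()                                   // remove the previous commenter's information
        subFileToCommenter[subFileId]?.let(showIdentity)  // display the current commenter's information
    }
}
```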
In one embodiment, the above method further includes the following steps: first, a predetermined address book is searched for avatar data that matches the character facial feature data, wherein the predetermined address book includes multiple pieces of contact information with avatar data, and the predetermined address book includes a system address book and/or an address book in instant messaging software; second, the contact information corresponding to the avatar data is determined; and third, specified information in the contact information is displayed on the image data, the specified information including at least one of an avatar, a communication number, and a name.
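A hedged sketch of the avatar-matching step follows; the face comparison itself is left as an abstract similarity function, since the patent does not prescribe one, and all names and the threshold value are assumptions.

```kotlin
// Sketch of matching recognized character facial feature data against address-book
// avatars. The face comparison is left as an abstract similarity function; all
// names and the threshold value are illustrative assumptions.
data class Contact(val name: String, val number: String, val avatarFeatures: FloatArray?)

fun findMatchingContact(
    faceFeatures: FloatArray,
    addressBook: List<Contact>,
    similarity: (FloatArray, FloatArray) -> Float,
    threshold: Float = 0.8f
): Contact? =
    addressBook
        .filter { it.avatarFeatures != null }
        .map { it to similarity(faceFeatures, it.avatarFeatures!!) }
        .filter { it.second >= threshold }
        .maxByOrNull { it.second }   // best match above the threshold, or null if none
        ?.first
```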
The image data loading method provided by the present invention is illustrated below through two specific embodiments.
Embodiment one
In specific embodiment one, the display picture of the image data received by the mobile terminal includes only one character, so the audio identification information carried in the image data is the audio identification information corresponding to the character facial feature data of that character. Fig. 2 is a schematic flowchart of an image data loading method according to embodiment one. As shown in Fig. 2, the method includes the following steps S201-S206:
Step S201: when image data carrying audio identification information is received, recognize the character facial feature data in the image data.
Step S202: determine that the position of the audio identification information in the format data of the image data is the end of the format data.
In embodiment one, the position of the audio identification information in the format data of the image data is the end of the format data; in other embodiments, the audio identification information may be located at another position in the format data of the image data, such as the start position or a specified middle position.
Step S203: invoke the picture parsing interface provided in the mobile terminal, and use the picture parsing interface to parse out the audio identification information at the end of the format data.
For example, the format data of the image data is "123.jpg.http://www.1111"; using the picture parsing interface provided in the mobile terminal, the audio identification information can be parsed out as "http://www.1111".
Step S204: send the parsed audio identification information to the server, and the server determines the audio file corresponding to the audio identification information according to the preset mapping relationship between audio identification information and audio files.
Step S205: receive the audio file corresponding to the audio identification information sent by the server.
Step S206: load and display the image data, and play the obtained audio file at the same time.
For example, the audio identification information is "http://www.1111", and the server determines, according to the preset mapping relationship between audio identification information and audio files, that the audio file corresponding to the audio identification information "http://www.1111" is the song "Going Home"; the mobile terminal then plays the song "Going Home" when loading and displaying the image data.
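Tying embodiment one together, the following self-contained sketch walks the same data through parsing, the identifier-to-file mapping, and playback; the in-memory map and the file name stand in for the server-side mapping and the actual audio file.

```kotlin
// Self-contained illustration of the embodiment-one flow: parse the identifier out
// of the format data, resolve it through an in-memory identifier-to-file mapping
// standing in for the server, then hand the result to playback. Values are invented.
fun main() {
    val formatData = "123.jpg.http://www.1111"
    val audioId = "http://" + formatData.substringAfter(".http://")     // step S203: parse the identifier
    val serverMapping = mapOf("http://www.1111" to "going_home.mp3")    // mapping held by the server (S204)
    val audioFile = serverMapping[audioId]                              // steps S204/S205: resolve and return
    println("Displaying 123.jpg and playing: $audioFile")               // step S206: display and play
}
```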
Embodiment two
In specific embodiment two, the display picture of the image data received by the mobile terminal includes multiple characters. Fig. 3 is a schematic flowchart of an image data loading method according to embodiment two. As shown in Fig. 3, the method includes the following steps S301-S308:
Step S301: when image data carrying audio identification information is received, recognize the multiple sets of character facial feature data in the image data.
Step S302: determine that the position of the audio identification information in the format data of the image data is the end of the format data.
In embodiment two, the position of the audio identification information in the format data of the image data is the end of the format data; in other embodiments, the audio identification information may be located at another position in the format data of the image data, such as the start position or a specified middle position.
Step S303: invoke the picture parsing interface provided in the mobile terminal, and use the picture parsing interface to parse out the audio identification information at the end of the format data.
For example, the format data of the image data is "123.jpg.http://www.1111.http://www.1112http://www.1113"; using the picture parsing interface provided in the mobile terminal, the audio identification information can be parsed out as "http://www.1111.http://www.1112http://www.1113".
Step S304: obtain, according to the preset correspondence between audio identification information and character facial feature data, the first audio identification information corresponding to each set of character facial feature data from the audio identification information.
For example, the parsed audio identification information is "http://www.1111.http://www.1112http://www.1113", which includes three pieces of first audio identification information. Assume that in embodiment two, three sets of character facial feature data are recognized from the image data, namely character facial feature data L, character facial feature data M, and character facial feature data N, and that in the preset correspondence between audio identification information and character facial feature data, the first audio identification information corresponding to character facial feature data L is "http://www.1111", the first audio identification information corresponding to character facial feature data M is "http://www.1112", and the first audio identification information corresponding to character facial feature data N is "http://www.1113". The mobile terminal can then obtain, from the audio identification information, the first audio identification information "http://www.1111" corresponding to character facial feature data L, the first audio identification information "http://www.1112" corresponding to character facial feature data M, and the first audio identification information "http://www.1113" corresponding to character facial feature data N.
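Step S304 can be sketched as a simple filter over the preset correspondence; the face feature keys below are illustrative.

```kotlin
// Sketch of step S304: pick out, for each recognized set of character facial feature
// data, the identifier that the preset correspondence assigns to it. The face keys
// ("faceL", "faceM", "faceN") are illustrative.
fun matchIdentifiersToFaces(
    parsedIds: List<String>,                      // the identifiers parsed in step S303
    presetCorrespondence: Map<String, String>     // face feature data id -> audio identification information
): Map<String, String> =
    presetCorrespondence.filterValues { it in parsedIds }

fun main() {
    val parsed = listOf("http://www.1111", "http://www.1112", "http://www.1113")
    val preset = mapOf(
        "faceL" to "http://www.1111",
        "faceM" to "http://www.1112",
        "faceN" to "http://www.1113"
    )
    println(matchIdentifiersToFaces(parsed, preset))
    // {faceL=http://www.1111, faceM=http://www.1112, faceN=http://www.1113}
}
```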
Step S305: send each piece of first audio identification information to the server, and the server determines the audio file corresponding to each piece of first audio identification information according to the preset mapping relationship between audio identification information and audio files.
Step S306: receive the audio file corresponding to each piece of first audio identification information sent by the server.
Step S307: load and display the image data, and display a note identifier at the position on the image data corresponding to the character of each set of character facial feature data, the note identifier being used to trigger a play operation on the corresponding audio file.
Step S308: when a trigger operation by the user on any note identifier is received, play the audio file corresponding to the character facial feature data to which that note identifier corresponds.
In embodiment two, assume that the audio file corresponding to character facial feature data L (that is, corresponding to the first audio identification information "http://www.1111") is the song "Going Home", the audio file corresponding to character facial feature data M (that is, corresponding to the first audio identification information "http://www.1112") is a recording "Xiao Ming had a lot of fun today", and the audio file corresponding to character facial feature data N (that is, corresponding to the first audio identification information "http://www.1113") is the song "Hometown". When the user clicks the note identifier at the position corresponding to the character of character facial feature data L, the mobile terminal plays the song "Going Home"; when the user clicks the note identifier at the position corresponding to the character of character facial feature data M, the mobile terminal plays the recording "Xiao Ming had a lot of fun today"; and when the user clicks the note identifier at the position corresponding to the character of character facial feature data N, the mobile terminal plays the song "Hometown".
From embodiment one and embodiment two above, it can be seen that with the technical solution of the present invention, when image data carrying audio identification information is received, the character facial feature data in the image data is recognized, the audio identification information corresponding to the character facial feature data is obtained, the audio file corresponding to the audio identification information is then obtained, and the image data is loaded and displayed while the audio file is played during the display. As a result, the mobile terminal can not only display received image data carrying audio identification information, but can also simultaneously play the audio file synthesized into the image data, so that talking pictures can be sent and received between different mobile terminals, adding to the enjoyment of sharing talking pictures while users communicate with one another.
Fig. 4 is a schematic block diagram of an image data loading apparatus according to an embodiment of the present invention. As shown in Fig. 4, the apparatus is provided in a mobile terminal and includes:
a receiving module 410 adapted to receive image data carrying audio identification information;
a first obtaining module 420, coupled with the receiving module 410, adapted to recognize character facial feature data in the image data, and obtain, from the audio identification information, first audio identification information corresponding to the character facial feature data;
a second obtaining module 430, coupled with the first obtaining module 420, adapted to obtain a first audio file corresponding to the first audio identification information;
a display and playing module 440, coupled with the second obtaining module 430, adapted to load and display the image data, and play the first audio file during the display.
In one embodiment, the first obtaining module 420 is further adapted to:
determine a position of the audio identification information in the image data;
invoke a picture parsing interface provided in the mobile terminal;
parse the data at the position using the picture parsing interface to obtain the audio identification information;
obtain, according to a preset correspondence between audio identification information and character facial feature data, the first audio identification information corresponding to the character facial feature data from the audio identification information.
In one embodiment, the first obtaining module 420 is further adapted to:
obtain, from interface names of at least one interface opened in advance in the mobile terminal, the interface name corresponding to the picture parsing interface;
invoke the picture parsing interface according to the obtained interface name.
In one embodiment, the first obtaining module 420 is further adapted to:
judge whether the picture parsing interface is integrated in the mobile terminal;
if so, invoke the picture parsing interface;
if not, call the pre-integrated software development kit of the picture parsing interface from the mobile terminal, perform source code parsing on the software development kit of the picture parsing interface, and install the parsed data in the mobile terminal so that the picture parsing interface is integrated in the mobile terminal; and invoke the picture parsing interface.
In one embodiment, the position of the audio identification information in the image data is located in the format data of the image data.
In one embodiment, the second obtaining module 430 is further adapted to:
send the first audio identification information to a server, so that the server looks up the first audio file corresponding to the first audio identification information and returns the first audio file to the mobile terminal, wherein the server stores a mapping relationship between the first audio identification information and the first audio file;
receive the first audio file from the server.
In one embodiment, the display and playing module 440 is further adapted to:
invoke the audio player in the terminal system;
play the audio file using the audio player.
In one embodiment, above-mentioned apparatus further include:
Adding module is coupled with display and playing module 440, is used for suitable for adding and showing on the image data Identify the first identifier that current image is talking picture;
Memory module is coupled with adding module, and the image data suitable for will be displayed with the first identifier is stored to pre- If in picture library;
First playing module, is coupled with memory module, checks institute by the default picture library suitable for that ought receive user When checking instruction of image data is stated, calls the audio player in terminal system, and utilize the audio player plays institute State the first audio file.
It in one embodiment, include multiple personages on the display picture of image data;Described first obtains module 420 also It is suitable for:
Identify the character facial characteristic of each personage respectively, and obtain in the audio identification information with each character facial The corresponding second audio identification information of characteristic.
In one embodiment, display and playing module 440 are further adapted for:
It is added respectively on the corresponding position of each personage and shows the broadcasting for triggering second audio file The second identifier of operation;
When receiving the trigger action to any second identifier, the audio player in terminal system is called;
Utilize the second audio file corresponding to the corresponding personage of second identifier described in the audio player plays.
In one embodiment, the audio identification information further includes comment audio identification information; the above device further includes:
a third acquisition module, adapted to obtain the comment audio identification information included in the audio identification information;
a fourth acquisition module, coupled with the third acquisition module and adapted to obtain, from a server, the comment audio file corresponding to the comment audio identification information, the server prestoring the mapping relations between the comment audio identification information and the comment audio file;
a second playing module, coupled with the fourth acquisition module and adapted to play the comment audio file while the image data is displayed.
In one embodiment, the second playing module is further adapted to:
while the comment audio file is being played, show the identity information of the commenter corresponding to the comment audio file, the identity information including at least one of a head portrait, a name, and a nickname.
In one embodiment, there are at least two commenters, and the comment audio file contains multiple sub audio files; the second playing module is further adapted to:
obtain the correspondence between each sub audio file in the comment audio file and each commenter;
while the comment audio file is being played, determine the currently playing sub audio file and, according to the correspondence between the sub audio files and the commenters, determine the commenter corresponding to the currently playing sub audio file;
show the identity information of the commenter corresponding to the currently playing sub audio file (a sketch follows).
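A minimal sketch of keeping the sub-audio-file-to-commenter correspondence and resolving whose identity information should be shown while a given sub audio file is playing; the Commenter value class and the identifiers are assumptions made for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch: given the correspondence between sub audio files and
// commenters, report whose identity information to show while a particular
// sub audio file is playing.
public final class CommentIdentityResolver {

    public static final class Commenter {
        public final String name;
        public final String avatarUrl;   // head portrait; may be null
        public Commenter(String name, String avatarUrl) {
            this.name = name;
            this.avatarUrl = avatarUrl;
        }
    }

    // sub audio file id -> commenter, kept in playback order
    private final Map<String, Commenter> commenterBySubAudio = new LinkedHashMap<>();

    public void register(String subAudioId, Commenter commenter) {
        commenterBySubAudio.put(subAudioId, commenter);
    }

    /** Identity to display while the given sub audio file is playing. */
    public Commenter resolve(String currentlyPlayingSubAudioId) {
        return commenterBySubAudio.get(currentlyPlayingSubAudioId);
    }
}
```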
In one embodiment, at least two pieces of image data are stored in the mobile terminal; the above device further includes:
a classifying module, adapted to classify and store the at least two pieces of image data according to the classification element corresponding to each of them, the classification element including at least one of commenter information, character facial feature data, and sender information;
wherein the classifying method includes: grouping image data with the same classification element into one class, as sketched below.
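A minimal sketch of that grouping rule; the ImageRecord class and the choice of a string key for the classification element are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch: classify stored pictures by a classification element
// (commenter information, facial feature data or sender information).
public final class PictureClassifier {

    public static final class ImageRecord {
        public final String path;
        public final String classificationElement;   // e.g. sender id or face signature
        public ImageRecord(String path, String classificationElement) {
            this.path = path;
            this.classificationElement = classificationElement;
        }
    }

    /** Pictures with the same classification element end up in the same class. */
    public static Map<String, List<ImageRecord>> classify(List<ImageRecord> records) {
        Map<String, List<ImageRecord>> classes = new HashMap<>();
        for (ImageRecord r : records) {
            classes.computeIfAbsent(r.classificationElement, k -> new ArrayList<>()).add(r);
        }
        return classes;
    }
}
```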
In one embodiment, the above device further includes:
a searching module, coupled with the display and playing module 440 and adapted to search a predetermined address list for head portrait data matching the character facial feature data, the predetermined address list containing multiple contact entries with head portrait data and including the system contact list and/or the contact list of an instant messaging application;
a determining module, coupled with the searching module and adapted to determine the contact information corresponding to the head portrait data;
a display module, coupled with the determining module and adapted to show specified information from the contact information on the image data, the specified information including at least one of a head portrait, a communication number, and a name (a sketch of such a lookup follows).
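A minimal sketch of matching the facial feature data extracted from the picture against contacts' head portrait data. ContactEntry, FaceFeatures, similarity(), and the threshold are all hypothetical, standing in for whatever face-matching backend the terminal actually uses.

```java
import java.util.List;

// Minimal sketch: find the contact whose head portrait best matches the
// facial feature data from the picture, so that name/number can be shown.
public final class ContactMatcher {

    public interface FaceFeatures {
        /** Similarity in [0, 1] between two feature vectors. */
        double similarity(FaceFeatures other);
    }

    public static final class ContactEntry {
        public final String name;
        public final String number;
        public final FaceFeatures headPortraitFeatures;
        public ContactEntry(String name, String number, FaceFeatures headPortraitFeatures) {
            this.name = name;
            this.number = number;
            this.headPortraitFeatures = headPortraitFeatures;
        }
    }

    /** Returns the best-matching contact at or above the threshold, or null. */
    public static ContactEntry match(FaceFeatures fromPicture,
                                     List<ContactEntry> addressList,
                                     double threshold) {
        ContactEntry best = null;
        double bestScore = threshold;
        for (ContactEntry c : addressList) {
            double score = fromPicture.similarity(c.headPortraitFeatures);
            if (score >= bestScore) {
                bestScore = score;
                best = c;   // this contact's name/number can be shown on the picture
            }
        }
        return best;
    }
}
```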
In one embodiment, the first audio identification information includes the Uniform Resource Locator (URL) of the audio file.
With the device provided in the embodiments of the present invention, when image data carrying audio identification information is received, the character facial feature data in the image data are identified, the first audio identification information corresponding to the character facial feature data is obtained from the audio identification information, the first audio file corresponding to the first audio identification information is then obtained, the image data is loaded and displayed, and the first audio file is played while the image data is displayed. The mobile terminal can therefore not only display the received image data carrying audio identification information but also play the audio file synthesized into the image data, so that talking pictures can be sent and received between different mobile terminals and users have more fun sharing talking pictures while communicating with each other.
It should be understood that the image data loading device in Fig. 4 can be used to implement the image data loading scheme described above; its detailed description is similar to that of the method part above and, to avoid redundancy, is not repeated here.
An embodiment of the present invention further provides a mobile terminal. As shown in Fig. 5, for ease of description only the parts relevant to the embodiments of the present invention are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present invention. The mobile terminal may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, or an in-vehicle computer; a mobile phone is taken as an example:
Fig. 5 shows a block diagram of part of the structure of a mobile phone related to the mobile terminal provided in the embodiments of the present invention. Referring to Fig. 5, the mobile phone includes components such as a radio frequency (RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a sensor 550, an audio circuit 560, a Wireless Fidelity (Wi-Fi) module 570, a processor 580, and a power supply 590. Those skilled in the art will understand that the mobile phone structure shown in Fig. 5 does not limit the mobile phone, which may include more or fewer components than illustrated, combine certain components, or use a different component arrangement.
Each component of the mobile phone is described below with reference to Fig. 5:
The RF circuit 510 may be used for receiving and sending signals during information transmission and reception or during a call; in particular, after downlink information of a base station is received, it is handed to the processor 580 for processing, and uplink data is sent to the base station. In general, the RF circuit 510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 510 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and so on.
The memory 520 may be used to store software programs and modules, and the processor 580 executes the various function applications and data processing of the mobile phone by running the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 520 may include a high-speed random access memory and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
The input unit 530 may be used to receive input digit or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 530 may include a touch panel 531 and other input devices 532. The touch panel 531, also called a touch screen, can collect touch operations of the user on or near it (such as operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel 531) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 580, and it can also receive and execute commands sent by the processor 580. Furthermore, the touch panel 531 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 531, the input unit 530 may also include other input devices 532. Specifically, the other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse, a joystick, and the like.
The display unit 540 may be used to display information input by the user or information provided to the user, as well as the various menus of the mobile phone. The display unit 540 may include a display panel 541, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 531 may cover the display panel 541; after detecting a touch operation on or near it, the touch panel 531 passes it to the processor 580 to determine the type of the touch event, and the processor 580 then provides a corresponding visual output on the display panel 541 according to the type of the touch event. Although in Fig. 5 the touch panel 531 and the display panel 541 implement the input and output functions of the mobile phone as two separate components, in some embodiments the touch panel 531 and the display panel 541 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 550, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display panel 541 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 541 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when at rest, and can be used in applications that identify the posture of the mobile phone (such as switching between landscape and portrait modes, related games, and magnetometer posture calibration) and in functions related to vibration recognition (such as a pedometer and tapping). Other sensors that may also be configured in the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
The audio circuit 560, a loudspeaker 561, and a microphone 562 can provide an audio interface between the user and the mobile phone. The audio circuit 560 can transmit the electrical signal converted from the received audio data to the loudspeaker 561, which converts it into a sound signal for output; on the other hand, the microphone 562 converts the collected sound signal into an electrical signal, which the audio circuit 560 receives and converts into audio data, and after the audio data is output to the processor 580 for processing, it is sent, for example, to another mobile phone through the RF circuit 510, or the audio data is output to the memory 520 for further processing.
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 570, the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 5 shows the Wi-Fi module 570, it is understood that it is not an essential part of the mobile phone and can be omitted as needed without changing the essence of the invention.
The processor 580 is the control center of the mobile phone. It connects the various parts of the whole mobile phone through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the mobile phone as a whole. Optionally, the processor 580 may include one or more processing units; preferably, the processor 580 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and so on, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 580.
The mobile phone also includes a power supply 590 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the processor 580 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may also include a camera, a Bluetooth module, and the like, which are not described here.
In the embodiments of the present invention, the processor 580 included in the terminal also has the following functions:
receiving image data carrying audio identification information;
identifying the character facial feature data in the image data, and obtaining the first audio identification information corresponding to the character facial feature data from the audio identification information;
obtaining the first audio file corresponding to the first audio identification information;
loading and displaying the image data, and playing the first audio file during the display (a sketch of this overall flow follows).
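Tying the pieces together, the following is a minimal end-to-end sketch of this flow under the same illustrative assumptions as the earlier sketches; AudioIdReader, AudioFileClient, FaceLocator, and TalkingPicturePlayer are the hypothetical helpers introduced above, not part of the claimed disclosure.

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.media.FaceDetector;
import java.io.IOException;

// Minimal sketch of the flow performed by the processor: receive a picture
// carrying audio identification information, detect the person's face,
// fetch the mapped audio file and play it while the picture is shown.
public final class TalkingPictureLoader {

    public void load(String imagePath, String serverBase, String cacheAudioPath)
            throws IOException {
        // 1. Read the embedded audio identification information.
        String audioId = AudioIdReader.readAudioId(imagePath);
        if (audioId == null) {
            return;                                   // plain picture, nothing to play
        }
        // 2. Identify the person's face in the picture (position only here;
        //    matching features to an identifier is assumed to happen upstream).
        Bitmap picture = BitmapFactory.decodeFile(imagePath);
        if (picture == null) {
            return;                                   // not a decodable image
        }
        FaceDetector.Face[] faces = FaceLocator.locateFaces(picture, 1);
        if (faces.length == 0) {
            return;                                   // no person found, skip audio
        }
        // 3. Fetch the first audio file mapped to the identification information.
        AudioFileClient.fetchAudioFile(serverBase, audioId, cacheAudioPath);
        // 4. Display the picture (display code omitted) and play the audio.
        new TalkingPicturePlayer().play(cacheAudioPath);
    }
}
```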
Optionally, the processor 580 also has the following functions:
determining the position of the audio identification information in the image data;
invoking the ad hoc picture parsing interface in the mobile terminal;
parsing the data at the position with the picture parsing interface to obtain the audio identification information;
obtaining, according to the preset correspondence between audio identification information and character facial feature data, the first audio identification information corresponding to the character facial feature data from the audio identification information.
Optionally, the processor 580 also has the following functions:
obtaining, from the interface names of at least one interface opened in advance in the mobile terminal, the interface name corresponding to the ad hoc picture parsing interface;
calling the ad hoc picture parsing interface according to the obtained interface name.
Invoking the ad hoc picture parsing interface in the mobile terminal includes:
judging whether the ad hoc picture parsing interface is integrated in the mobile terminal;
if so, calling the picture parsing interface;
if not, calling the Software Development Kit of the picture parsing interface integrated in advance in the mobile terminal, performing source code parsing on the Software Development Kit of the picture parsing interface, installing the parsed data in the mobile terminal so that the picture parsing interface is integrated in the mobile terminal, and then calling the picture parsing interface.
Optionally, the position of the audio identification information in the image data lies within the format data of the image data.
Optionally, the processor 580 also has the following functions:
sending the first audio identification information to a server, so that the server looks up the first audio file corresponding to the first audio identification information and returns the first audio file to the mobile terminal, the server storing the mapping relations between the first audio identification information and the first audio file;
receiving the first audio file from the server.
Optionally, the processor 580 also has the following functions:
calling the audio player in the terminal system;
playing the first audio file with the audio player.
Optionally, the processor 580 also has the following functions:
adding and showing, on the image data, a first identifier indicating that the current picture is a talking picture;
storing the image data displayed with the first identifier into a default picture library;
when an instruction of the user to check the image data through the default picture library is received, calling the audio player in the terminal system, and playing the first audio file with the audio player.
Optionally, the display picture of the image data contains multiple persons; the processor 580 also has the following functions:
identifying the character facial feature data of each person respectively, and obtaining, from the audio identification information, the second audio identification information corresponding to the character facial feature data of each person.
Optionally, the processor 580 also has the following functions:
adding and showing, at the position corresponding to each person, a second identifier for triggering the play operation of the second audio file;
when a trigger operation on any second identifier is received, calling the audio player in the terminal system;
playing, with the audio player, the second audio file corresponding to the person to whom the second identifier belongs.
Optionally, the audio identification information further includes comment audio identification information; the processor 580 also has the following functions:
obtaining the comment audio identification information included in the audio identification information;
obtaining, from a server, the comment audio file corresponding to the comment audio identification information, the server prestoring the mapping relations between the comment audio identification information and the comment audio file;
playing the comment audio file while the image data is displayed.
Optionally, the processor 580 also has the following functions:
while the comment audio file is being played, showing the identity information of the commenter corresponding to the comment audio file, the identity information including at least one of a head portrait, a name, and a nickname.
Optionally, there are at least two commenters, and the comment audio file contains multiple sub audio files; the processor 580 also has the following functions:
obtaining the correspondence between each sub audio file in the comment audio file and each commenter;
while the comment audio file is being played, determining the currently playing sub audio file and, according to the correspondence between the sub audio files and the commenters, determining the commenter corresponding to the currently playing sub audio file;
showing the identity information of the commenter corresponding to the currently playing sub audio file.
Optionally, at least two pieces of image data are stored in the mobile terminal; the processor 580 also has the following functions:
classifying and storing the at least two pieces of image data according to the classification element corresponding to each of them, the classification element including at least one of commenter information, character facial feature data, and sender information;
wherein the classifying method includes: grouping image data with the same classification element into one class.
Optionally, the processor 580 also has the following functions:
searching a predetermined address list for head portrait data matching the character facial feature data, the predetermined address list containing multiple contact entries with head portrait data and including the system contact list and/or the contact list of an instant messaging application;
determining the contact information corresponding to the head portrait data;
showing specified information from the contact information on the image data, the specified information including at least one of a head portrait, a communication number, and a name.
Optionally, the first audio identification information includes the Uniform Resource Locator (URL) of the first audio file.
Numerous specific details are set forth in the description provided here. It should be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention the features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will understand that the modules in the device of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and they may furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
In addition, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the image data loading device according to the embodiments of the invention. The invention may also be implemented as device or apparatus programs (for example, computer programs and computer program products) for carrying out part or all of the methods described herein. Such programs implementing the invention may be stored on computer-readable media or may take the form of one or more signals. Such signals may be downloaded from Internet websites, provided on carrier signals, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means can be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
Thus far, those skilled in the art will appreciate that, although multiple exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be determined or derived directly from the disclosure of the invention without departing from its spirit and scope. Therefore, the scope of the invention should be understood and deemed to cover all such other variations or modifications.

Claims (31)

1. A method for loading image data, applied to a mobile terminal, the method comprising:
receiving image data carrying audio identification information;
identifying character facial feature data in the image data, and obtaining first audio identification information corresponding to the character facial feature data from the audio identification information;
obtaining a first audio file corresponding to the first audio identification information;
loading and displaying the image data, and playing the first audio file during the display;
wherein at least two pieces of image data are stored in the mobile terminal; the method further comprises:
classifying and storing the at least two pieces of image data according to a classification element corresponding to each of them, the classification element comprising at least one of commenter information and sender information;
the classifying method comprising: grouping image data with the same classification element into one class.
2. The method according to claim 1, wherein obtaining the first audio identification information corresponding to the character facial feature data from the audio identification information comprises:
determining a position of the audio identification information in the image data;
invoking an ad hoc picture parsing interface in the mobile terminal;
parsing the data at the position with the picture parsing interface to obtain the audio identification information;
obtaining, according to a preset correspondence between audio identification information and character facial feature data, the first audio identification information corresponding to the character facial feature data from the audio identification information.
3. The method according to claim 2, wherein invoking the ad hoc picture parsing interface in the mobile terminal comprises:
obtaining, from interface names of at least one interface opened in advance in the mobile terminal, an interface name corresponding to the ad hoc picture parsing interface;
calling the ad hoc picture parsing interface according to the obtained interface name.
4. The method according to claim 2 or 3, wherein invoking the ad hoc picture parsing interface in the mobile terminal comprises:
judging whether the ad hoc picture parsing interface is integrated in the mobile terminal;
if so, calling the picture parsing interface;
if not, calling a Software Development Kit of the picture parsing interface integrated in advance in the mobile terminal, performing source code parsing on the Software Development Kit of the picture parsing interface, installing the parsed data in the mobile terminal so that the picture parsing interface is integrated in the mobile terminal, and calling the picture parsing interface.
5. The method according to claim 2 or 3, wherein the position of the audio identification information in the image data lies within format data of the image data.
6. The method according to any one of claims 1-3, wherein obtaining the first audio file corresponding to the first audio identification information comprises:
sending the first audio identification information to a server, so that the server looks up the first audio file corresponding to the first audio identification information and returns the first audio file to the mobile terminal, wherein the server stores mapping relations between the first audio identification information and the first audio file;
receiving the first audio file from the server.
7. The method according to any one of claims 1-3, wherein playing the first audio file comprises:
calling an audio player in a terminal system;
playing the first audio file with the audio player.
8. The method according to any one of claims 1-3, wherein the method further comprises:
adding and showing, on the image data, a first identifier indicating that the current picture is a talking picture;
storing the image data displayed with the first identifier into a default picture library;
when an instruction of a user to check the image data through the default picture library is received, calling an audio player in a terminal system, and playing the first audio file with the audio player.
9. The method according to any one of claims 1-3, wherein a display picture of the image data contains multiple persons; identifying the character facial feature data in the image data, and obtaining the first audio identification information corresponding to the character facial feature data from the audio identification information, comprises:
identifying character facial feature data of each person respectively, and obtaining, from the audio identification information, second audio identification information corresponding to the character facial feature data of each person.
10. The method according to claim 9, wherein playing the first audio file comprises:
adding and showing, at a position corresponding to each person, a second identifier for triggering a play operation of the second audio file;
when a trigger operation on any second identifier is received, calling an audio player in a terminal system;
playing, with the audio player, the second audio file corresponding to the person to whom the second identifier belongs.
11. The method according to any one of claims 1-3, wherein the audio identification information further comprises comment audio identification information; the method further comprises:
obtaining the comment audio identification information included in the audio identification information;
obtaining, from a server, a comment audio file corresponding to the comment audio identification information, wherein mapping relations between the comment audio identification information and the comment audio file are prestored in the server;
playing the comment audio file while the image data is displayed.
12. The method according to claim 11, wherein playing the comment audio file comprises:
while the comment audio file is being played, showing identity information of a commenter corresponding to the comment audio file, the identity information comprising at least one of a head portrait, a name, and a nickname.
13. The method according to claim 12, wherein there are at least two commenters, and the comment audio file contains multiple sub audio files; showing the identity information of the commenter corresponding to the comment audio file comprises:
obtaining a correspondence between each sub audio file in the comment audio file and each commenter;
while the comment audio file is being played, determining the currently playing sub audio file and, according to the correspondence between the sub audio files and the commenters, determining the commenter corresponding to the currently playing sub audio file;
showing the identity information of the commenter corresponding to the currently playing sub audio file.
14. The method according to any one of claims 1-3, wherein the method further comprises:
searching a predetermined address list for head portrait data matching the character facial feature data, wherein the predetermined address list contains multiple contact entries with head portrait data and comprises a system contact list and/or a contact list of an instant messaging application;
determining contact information corresponding to the head portrait data;
showing specified information from the contact information on the image data, the specified information comprising at least one of a head portrait, a communication number, and a name.
15. The method according to any one of claims 1-3, wherein the first audio identification information comprises a Uniform Resource Locator (URL) of the first audio file.
16. A device for loading image data, provided in a mobile terminal, the device comprising:
a receiving module, adapted to receive image data carrying audio identification information;
a first acquisition module, adapted to identify character facial feature data in the image data and obtain first audio identification information corresponding to the character facial feature data from the audio identification information;
a second acquisition module, adapted to obtain a first audio file corresponding to the first audio identification information;
a display and playing module, adapted to load and display the image data and play the first audio file during the display;
wherein at least two pieces of image data are stored in the mobile terminal; the device further comprises:
a classifying module, adapted to classify and store the at least two pieces of image data according to a classification element corresponding to each of them, the classification element comprising at least one of commenter information and sender information;
the classifying method comprising: grouping image data with the same classification element into one class.
17. The device according to claim 16, wherein the first acquisition module is further adapted to:
determine a position of the audio identification information in the image data;
invoke an ad hoc picture parsing interface in the mobile terminal;
parse the data at the position with the picture parsing interface to obtain the audio identification information;
obtain, according to a preset correspondence between audio identification information and character facial feature data, the first audio identification information corresponding to the character facial feature data from the audio identification information.
18. The device according to claim 17, wherein the first acquisition module is further adapted to:
obtain, from interface names of at least one interface opened in advance in the mobile terminal, an interface name corresponding to the ad hoc picture parsing interface;
call the ad hoc picture parsing interface according to the obtained interface name.
19. The device according to claim 17 or 18, wherein the first acquisition module is further adapted to:
judge whether the ad hoc picture parsing interface is integrated in the mobile terminal;
if so, call the picture parsing interface;
if not, call a Software Development Kit of the picture parsing interface integrated in advance in the mobile terminal, perform source code parsing on the Software Development Kit of the picture parsing interface, install the parsed data in the mobile terminal so that the picture parsing interface is integrated in the mobile terminal, and call the picture parsing interface.
20. The device according to claim 17 or 18, wherein the position of the audio identification information in the image data lies within format data of the image data.
21. The device according to any one of claims 16-18, wherein the second acquisition module is further adapted to:
send the first audio identification information to a server, so that the server looks up the first audio file corresponding to the first audio identification information and returns the first audio file to the mobile terminal, wherein the server stores mapping relations between the first audio identification information and the first audio file;
receive the first audio file from the server.
22. The device according to any one of claims 16-18, wherein the display and playing module is further adapted to:
call an audio player in a terminal system;
play the audio file with the audio player.
23. The device according to any one of claims 16-18, wherein the device further comprises:
an adding module, adapted to add and show, on the image data, a first identifier indicating that the current picture is a talking picture;
a memory module, adapted to store the image data displayed with the first identifier into a default picture library;
a first playing module, adapted to, when an instruction of a user to check the image data through the default picture library is received, call an audio player in a terminal system and play the first audio file with the audio player.
24. The device according to any one of claims 16-18, wherein a display picture of the image data contains multiple persons; the first acquisition module is further adapted to:
identify character facial feature data of each person respectively, and obtain, from the audio identification information, second audio identification information corresponding to the character facial feature data of each person.
25. The device according to claim 24, wherein the display and playing module is further adapted to:
add and show, at a position corresponding to each person, a second identifier for triggering a play operation of the second audio file;
when a trigger operation on any second identifier is received, call an audio player in a terminal system;
play, with the audio player, the second audio file corresponding to the person to whom the second identifier belongs.
26. The device according to any one of claims 16-18, wherein the audio identification information further comprises comment audio identification information; the device further comprises:
a third acquisition module, adapted to obtain the comment audio identification information included in the audio identification information;
a fourth acquisition module, adapted to obtain, from a server, a comment audio file corresponding to the comment audio identification information, wherein mapping relations between the comment audio identification information and the comment audio file are prestored in the server;
a second playing module, adapted to play the comment audio file while the image data is displayed.
27. The device according to claim 26, wherein the second playing module is further adapted to:
while the comment audio file is being played, show identity information of a commenter corresponding to the comment audio file, the identity information comprising at least one of a head portrait, a name, and a nickname.
28. The device according to claim 27, wherein there are at least two commenters, and the comment audio file contains multiple sub audio files; the second playing module is further adapted to:
obtain a correspondence between each sub audio file in the comment audio file and each commenter;
while the comment audio file is being played, determine the currently playing sub audio file and, according to the correspondence between the sub audio files and the commenters, determine the commenter corresponding to the currently playing sub audio file;
show the identity information of the commenter corresponding to the currently playing sub audio file.
29. The device according to any one of claims 16-18, wherein the device further comprises:
a searching module, adapted to search a predetermined address list for head portrait data matching the character facial feature data, wherein the predetermined address list contains multiple contact entries with head portrait data and comprises a system contact list and/or a contact list of an instant messaging application;
a determining module, adapted to determine contact information corresponding to the head portrait data;
a display module, adapted to show specified information from the contact information on the image data, the specified information comprising at least one of a head portrait, a communication number, and a name.
30. The device according to any one of claims 16-18, wherein the first audio identification information comprises a Uniform Resource Locator (URL) of the audio file.
31. A mobile terminal, comprising a processor and a memory, wherein:
the memory is used to store a program for performing the method of any one of claims 1 to 15; and
the processor is configured to execute the program stored in the memory.
CN201611209252.0A 2016-12-23 2016-12-23 Loading method, device and the mobile terminal of image data Active CN106713636B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611209252.0A CN106713636B (en) 2016-12-23 2016-12-23 Loading method, device and the mobile terminal of image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611209252.0A CN106713636B (en) 2016-12-23 2016-12-23 Loading method, device and the mobile terminal of image data

Publications (2)

Publication Number Publication Date
CN106713636A CN106713636A (en) 2017-05-24
CN106713636B true CN106713636B (en) 2019-10-25

Family

ID=58895653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611209252.0A Active CN106713636B (en) 2016-12-23 2016-12-23 Loading method, device and the mobile terminal of image data

Country Status (1)

Country Link
CN (1) CN106713636B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053831A (en) * 2017-12-05 2018-05-18 广州酷狗计算机科技有限公司 Music generation, broadcasting, recognition methods, device and storage medium
CN109166165A (en) * 2018-06-25 2019-01-08 网宿科技股份有限公司 A kind of playback method of dynamic picture, terminal and can storage medium
CN111596841B (en) * 2020-04-28 2021-09-07 维沃移动通信有限公司 Image display method and electronic equipment
CN113704529B (en) * 2021-07-30 2023-03-24 荣耀终端有限公司 Photo classification method with audio identification, searching method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9325776B2 (en) * 2013-01-08 2016-04-26 Tangome, Inc. Mixed media communication

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101174448A (en) * 2007-12-10 2008-05-07 北京炬力北方微电子有限公司 Talking picture playing method and device, method for generating index file of talking picture
CN101986302A (en) * 2010-10-28 2011-03-16 华为终端有限公司 Media file association method and device
CN104065869A (en) * 2013-03-18 2014-09-24 三星电子株式会社 Method for displaying image combined with playing audio in an electronic device
CN103279496A (en) * 2013-05-07 2013-09-04 深圳市同洲电子股份有限公司 Terminal and display method of associated information

Also Published As

Publication number Publication date
CN106713636A (en) 2017-05-24

Similar Documents

Publication Publication Date Title
US11355157B2 (en) Special effect synchronization method and apparatus, and mobile terminal
CN103702297B (en) Short message enhancement, apparatus and system
CN109194973A (en) A kind of more main broadcaster's direct broadcasting rooms give the methods of exhibiting, device and equipment of virtual present
CN107943683B (en) Test script generation method and device, electronic equipment and storage medium
CN108496150A (en) A kind of method and terminal of screenshot capture and reading
CN107329985B (en) Page collection method and device and mobile terminal
CN108156508B (en) Barrage information processing method and device, mobile terminal, server and system
CN106713636B (en) Loading method, device and the mobile terminal of image data
CN105430600B (en) A kind of data transmission method and the terminal of data transmission
CN108055490A (en) A kind of method for processing video frequency, device, mobile terminal and storage medium
CN106507482B (en) A kind of network locating method and terminal device
CN108958680A (en) Display control method, device, display system and computer readable storage medium
CN106202422B (en) The treating method and apparatus of Web page icon
CN104426747B (en) Instant communicating method, terminal and system
CN104751092B (en) Method and device for processing graphic code
CN106534452A (en) Quick communication method and apparatus, and mobile terminal
CN106375182B (en) Voice communication method and device based on instant messaging application
CN108549681A (en) Data processing method and device, electronic equipment, computer readable storage medium
CN104836717B (en) A kind of data processing method, device and terminal device
CN104731806B (en) A kind of method and terminal for quickly searching user information in social networks
CN104424203B (en) Photo in mobile device shares state inspection method and system
KR20190117753A (en) Message notification method and terminal
CN110392158A (en) A kind of message treatment method, device and terminal device
CN107678822B (en) A kind of information processing method and device, terminal and readable storage medium storing program for executing
CN106228994B (en) A kind of method and apparatus detecting sound quality

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20170724

Address after: 100102, 18 floor, building 2, Wangjing street, Beijing, Chaoyang District, 1801

Applicant after: BEIJING ANYUN SHIJI SCIENCE AND TECHNOLOGY CO., LTD.

Address before: 100088 Beijing city Xicheng District xinjiekouwai Street 28, block D room 112 (Desheng Park)

Applicant before: Beijing Qihu Technology Co., Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant