CN106791442B - Image capture method and mobile terminal - Google Patents
- Publication number
- CN106791442B (application CN201710052115.9A)
- Authority
- CN
- China
- Prior art keywords
- everyone
- voice data
- mark
- image
- age
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/51—Indexing; Data structures therefor; Storage structures
Abstract
An embodiment of the present invention provides an image capture method and a mobile terminal, applied to a mobile terminal having a camera. The method comprises: during image capture, recording the voice data of each person speaking in the shooting preview picture; during voice data recording, recording each person's speaking order and each person's biometric information; establishing an association among each person's biometric information, speaking order, and voice data; generating a target image when image capture is complete; and storing the target image in association with the voice data of each person. With the embodiments of the present invention, the associated voice data can be played back while browsing the target image. This solves the problem that current image records are still pictures through which the scene at the time of shooting cannot be experienced, enabling users to play back each person's speech from the time of shooting through the image and thereby reproduce the scene at the time of shooting.
Description
Technical field
The present invention relates to the field of communication technology, and more particularly to an image capture method and a mobile terminal.
Background technique
With the development of electronic and communication technology, the wireless mobile communication industry has grown rapidly, and mobile terminals have become essential articles in people's daily work and life.
Currently, because mobile terminals are easy to carry, shooting with them is very common, for example taking group photos on anniversaries or photos at gatherings. However, the image obtained by shooting is a static picture; the information in the image is relatively simple and cannot reproduce the scene at the time of shooting, so that years later the scene cannot be experienced through the image.
Summary of the invention
Embodiments of the present invention provide an image capture method and a mobile terminal, to solve the problem that current image records are static pictures whose information is relatively simple, so that the scene at the time of shooting cannot be experienced through the image.
In a first aspect, an embodiment of the present invention provides an image capture method, applied to a mobile terminal having a camera, the method comprising:
during image capture, recording the voice data of each person speaking in the shooting preview picture;
during voice data recording, recording each person's speaking order and each person's biometric information;
establishing an association among each person's biometric information, speaking order, and voice data;
generating a target image when image capture is complete; and
storing the target image in association with the voice data of each person.
In a second aspect, a mobile terminal is provided, comprising a camera, the mobile terminal further comprising:
a voice data recording module, configured to record, during image capture, the voice data of each person speaking in the shooting preview picture;
a logging module, configured to record, during voice data recording, each person's speaking order and each person's biometric information;
a first association module, configured to establish an association among each person's biometric information, speaking order, and voice data;
a target image generation module, configured to generate a target image when image capture is complete; and
a storage module, configured to store the target image generated by the target image generation module in association with the voice data of each person recorded by the voice data recording module.
Thus, in the embodiments of the present invention, during image capture the mobile terminal records the voice data of each person speaking in the shooting preview picture, while also recording each person's speaking order and biometric information, and establishes an association among each person's biometric information, speaking order, and voice data. After the target image is generated, it is stored in association with each person's voice data. With the embodiments of the present invention, the associated voice data can be played back while browsing the target image. This solves the problem that current image records are still pictures through which the scene at the time of shooting cannot be experienced, enabling users to play back each person's speech from the time of shooting through the image and reproduce the scene at the time of shooting.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image capture method according to Embodiment 1 of the present invention;
Fig. 2 is a flowchart of an image capture method according to Embodiment 2 of the present invention;
Fig. 3 is a flowchart of an image capture method according to Embodiment 3 of the present invention;
Fig. 4 is a flowchart of an image capture method according to Embodiment 4 of the present invention;
Fig. 5 is a flowchart of an image capture method according to Embodiment 5 of the present invention;
Fig. 6 is a flowchart, in Embodiment 5, of updating in real time each item of voice data stored in association with the target image according to the data source of the audio data or the voiceprint information;
Fig. 7 is the first structural block diagram of a mobile terminal according to Embodiment 6 of the present invention;
Fig. 7a is the second structural block diagram of a mobile terminal according to Embodiment 6 of the present invention;
Fig. 7b is the third structural block diagram of a mobile terminal according to Embodiment 6 of the present invention;
Fig. 7c is the fourth structural block diagram of a mobile terminal according to Embodiment 6 of the present invention;
Fig. 7d is the fifth structural block diagram of a mobile terminal according to Embodiment 6 of the present invention;
Fig. 8 is a structural block diagram of a mobile terminal according to Embodiment 7 of the present invention;
Fig. 9 is a structural schematic diagram of a mobile terminal according to Embodiment 8 of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment 1
Fig. 1 is a flowchart of an image capture method according to Embodiment 1 of the present invention.
Referring to Fig. 1, an image capture method according to an embodiment of the present invention, applied to a mobile terminal having a camera, includes:
Step 101: during image capture, record the voice data of each person speaking in the shooting preview picture.
The mobile terminal of this embodiment may be provided with a microphone, through which the voice data of each person speaking in the preview picture is recorded during shooting. For example, when a panoramic image is being taken, or while a non-panoramic image is being focused and framed with the shooting preview picture displayed, the voice data of each person speaking in the preview picture can be recorded. The voice data may be in a format such as mp3, wma, or wav; this embodiment places no restriction on the format of the voice data.
Step 102: during voice data recording, record each person's speaking order and each person's biometric information.
While the voice data is being recorded, the camera of the mobile terminal is simultaneously framing and displaying the image in the shooting preview picture. Through face recognition, each person in the preview picture can be recognized to obtain their biometric information, which may be facial feature information, or information such as gender and age. At the same time, each person's speaking order is recorded; the speaking order may be recorded with sequence numbers, though other ways of recording the speaking order may of course be used.
Step 103: establish an association among each person's biometric information, speaking order, and voice data.
To make each person's voice data correspond one-to-one with their biometric information, an association among each person's biometric information, speaking order, and voice data can be established. Specifically, each person's speaking order can be numbered, and then, according to that number, the same number is attached to the person's biometric information and voice data; biometric information and voice data with the same number belong to the same person. Of course, the association may also be established in other ways, which this embodiment does not limit. By establishing the association among each person's biometric information, speaking order, and voice data in this step, the voice data associated with a speaking order or with biometric information can be played back while browsing the target image.
Step 104: generate the target image when image capture is complete.
After each person's voice data and biometric information have been recorded, and the camera of the mobile terminal has performed image processing such as focusing and exposure, the target image is generated. Depending on the shooting mode, the generated target image may be a panoramic or a non-panoramic image, in a format such as jpg, jpeg, gif, png, or bmp.
Step 105: store the target image in association with the voice data of each person.
After the target image is generated, it can be stored in association with each person's voice data. Specifically, the target image and each person's biometric information may be stored in one memory, while the voice data and its associations with the biometric information and speaking order are stored in another memory; alternatively, the target image and each person's voice data may be stored in one memory together with the association established in step 103.
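One hedged way to realize such associated storage (purely illustrative; the patent does not prescribe any file format, and the sidecar-file layout here is an assumption) is to write the step-103 association as a metadata file stored next to the target image:

```python
import json
import os
import tempfile

def store_associated(image_path, records, meta_path=None):
    """Write the step-103 association as a JSON sidecar next to the image."""
    meta_path = meta_path or image_path + ".voices.json"
    payload = [
        {"order": r["order"], "biometric": r["biometric"], "voice_file": r["voice_file"]}
        for r in records
    ]
    with open(meta_path, "w") as f:
        json.dump({"image": os.path.basename(image_path), "speakers": payload}, f)
    return meta_path

# Usage: create a stand-in for the generated target image, then store.
tmp = tempfile.mkdtemp()
img = os.path.join(tmp, "group.jpg")
open(img, "wb").close()  # placeholder for the actual target image bytes
meta = store_associated(
    img,
    [{"order": 0, "biometric": {"gender": "M"}, "voice_file": "clip_0.wav"}],
)
print(json.load(open(meta))["speakers"][0]["voice_file"])  # clip_0.wav
```

A gallery application could read the sidecar when the image is browsed and offer the associated clips for playback; storing the two parts in separate memories, as the patent also allows, would only change where the sidecar lives.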
In this embodiment of the present invention, during image capture the mobile terminal records the voice data of each person speaking in the shooting preview picture, while recording each person's speaking order and biometric information, and establishes an association among each person's biometric information, speaking order, and voice data; after the target image is generated, it is stored in association with each person's voice data. With this embodiment, by establishing the association among each person's biometric information, speaking order, and voice data, the voice data associated with a speaking order or with biometric information can be played back while browsing the target image. This solves the problem that an image record is a still picture through which the scene at the time of shooting cannot be experienced, enabling users to play back each person's speech from the time of shooting through the image and reproduce the scene at the time of shooting.
Embodiment 2
Fig. 2 is a flowchart of an image capture method according to Embodiment 2 of the present invention.
Referring to Fig. 2, an image capture method according to an embodiment of the present invention, applied to a mobile terminal having a camera, includes:
Step 201: during image capture, record the voice data of each person speaking in the shooting preview picture.
The mobile terminal of this embodiment may be provided with a microphone, through which the voice data of each person speaking in the preview picture is recorded during shooting. The scene may be, for example, students being photographed at graduation or at a tenth-anniversary reunion, or of course another occasion such as a family photo; this embodiment is not limited in this respect. The captured image may be panoramic or non-panoramic; for example, each person's voice data may be recorded while a panoramic group photo is being shot. This embodiment is illustrated with a graduation photo as an example: when shooting a class group photo at graduation, the graduation remarks of each student in the shooting preview picture can be recorded during shooting.
Step 202: during voice data recording, record each person's speaking order and each person's biometric information.
While the voice data is being recorded, face recognition can be performed on each person in the shooting preview picture to obtain each person's biometric information, which may be facial feature information, gender information, and the like. At the same time, each person's speaking order is recorded; the speaking order may be recorded with sequence numbers, though other ways may of course be used. For example, during shooting of a group photo, the participants may speak in the order of the positions where they stand, and each person's speaking order and biometric information are recorded accordingly; alternatively, a preset order may be used. For instance, while the graduation class group photo mentioned above is being shot, each student says a few brief graduation remarks, and the time order in which each person speaks can be recorded together with each person's biometric information.
Step 203: extract the gender information from each person's biometric information.
In a preferred embodiment of the present invention, the biometric information includes gender information. The gender information may be added or generated when each person's biometric information is recorded: for example, face recognition may be performed on each participant so that gender information is automatically generated in the biometric information; alternatively, after everyone's biometric information has been recognized, the user may add gender information by manual editing. For example, when shooting a graduation class group photo, each participant's gender information can be extracted; it may be extracted from the biometric information, or entered manually by the user for each person.
Step 204: according to each person's gender information, add a gender identifier to the voice data of each person.
Since each person's voice data may be stored separately, a gender identifier can be added to each person's voice data. The gender identifier may be the text "male" or "female", or other characters denoting male or female, such as "M" for male and "F" for female. For example, in a group photo, the voice data of each male participant can be tagged with the identifier "male", and the voice data of each female participant with the identifier "female".
Step 205: establish an association between each person's voice data and the gender identifier, where the gender identifiers include male and female.
In this embodiment, an association between each person's voice data and the gender identifier can be established. Specifically, a mapping or index between each person's voice data and gender identifier can be built: if a person is male, their voice data is tagged male, and when voice data tagged male is searched for, all such voice data can be found through the mapping or index. By associating voice data with gender identifiers, voice data can be classified or retrieved by gender, making it convenient for the user to play voice data by a person's gender. For example, in a graduation class group photo, the voice data of all males in the photo is associated with the identifier "male", and the voice data of all females with the identifier "female".
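A hedged sketch of such a gender index (illustrative only; the patent does not fix a data structure) maps each gender identifier to the voice clips tagged with it:

```python
from collections import defaultdict

def build_gender_index(records):
    """Map each gender identifier ("male"/"female") to the voice clips
    tagged with it, as in step 205 (structure is an assumption)."""
    index = defaultdict(list)
    for rec in records:
        index[rec["gender"]].append(rec["voice_file"])
    return index

idx = build_gender_index([
    {"gender": "male", "voice_file": "clip_0.wav"},
    {"gender": "female", "voice_file": "clip_1.wav"},
    {"gender": "male", "voice_file": "clip_2.wav"},
])
print(idx["male"])  # ['clip_0.wav', 'clip_2.wav']
```

One lookup by identifier then retrieves every clip of that gender, which is the retrieval-by-gender behavior step 205 aims at.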
Step 206: generate the target image when image capture is complete.
After each person's voice data and biometric information have been recorded, and the camera of the mobile terminal has performed image processing such as focusing and exposure, the target image is generated. Depending on the shooting mode it may be panoramic or non-panoramic, in a format such as jpg, jpeg, gif, png, or bmp; for example, a graduation class group photo may be a panoramic or non-panoramic image in one of those formats.
Step 207: store the target image in association with the voice data of each person.
After the target image is generated, it can be stored in association with each person's voice data. Specifically, the target image, the biometric information, the voice data, the gender identifiers, and the associations between voice data and gender identifiers can be stored together, so that the voice data of each person in the target image can be played back from the image. For example, for a graduation class group photo, the generated group photo, each person's biometric information in the image, the voice data, the gender identifiers, and the associations between voice data and gender identifiers can be stored in association, so that each person's voice data in the image can be played back through the gender of the people in the group photo.
Step 208: receive a first voice play instruction input by the mobile terminal user for the target image.
A menu can be provided on the target image; the menu includes play options, which include a male voice playback option and a female voice playback option. When the user selects the male or female voice playback option on the displayed target image, the mobile terminal receives the selection and generates a first voice play instruction containing a gender identifier. For example, when the user enters the mobile terminal's album and browses the graduation class group photo, the play options can be displayed on the image, including the male and female voice playback options; when the user selects one of them, the mobile terminal receives the selection and generates the first voice play instruction.
Step 209: extract the target gender identifier from the first voice play instruction.
After the first voice play instruction is received, the target gender identifier contained in it can be extracted. Specifically, the first voice play instruction is generated from the user's selection, and the gender identifier it contains is male or female, or other representative characters corresponding to the characters the user added as gender identifiers. A feature field in the first voice play instruction can be extracted and matched against preset gender characters to determine whether the target gender identifier represents male or female. For example, in the group photo shot at graduation, after the user selects the female voice play option, the target gender identifier contained in the first voice play instruction is "female", which can then be extracted.
Step 210: according to the target gender identifier, play all voice data associated with the target gender identifier.
After the target gender identifier is extracted, if it is male, all male voice data is obtained through the association between gender identifiers and voice data; if it is female, all female voice data is obtained. All voice data corresponding to the target gender identifier is then played. In this way, the user can play back the voice data by gender on the target image, reproducing the voices from the time of shooting by gender. When the user wants to play the voice data of all girls or all boys, repeated play operations are not needed; a single instruction plays the voice data of all girls or all boys. For example, in the group photo shot at graduation, the voice data recorded at shooting time can be played back according to the gender of the people in the photo.
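Steps 208-210 can be sketched as follows (a minimal illustration under assumptions: the instruction string format, function names, and the use of a callback instead of a real audio player are all invented for the example):

```python
def extract_target_gender(play_instruction):
    """Match the instruction's feature field against preset gender
    characters, as step 209 describes. "female" is checked first
    because "male" is a substring of it."""
    for gender in ("female", "male"):
        if gender in play_instruction.lower():
            return gender
    raise ValueError("no gender identifier found in instruction")

def play_by_gender(play_instruction, gender_index, play=print):
    """Step 210: play every clip associated with the target gender."""
    target = extract_target_gender(play_instruction)
    for clip in gender_index.get(target, []):
        play(clip)  # a real terminal would hand the clip to an audio player

gender_index = {"male": ["clip_0.wav"], "female": ["clip_1.wav", "clip_3.wav"]}
play_by_gender("PLAY_FEMALE", gender_index)  # prints clip_1.wav then clip_3.wav
```

The single instruction triggers playback of every matching clip, which is how the patent avoids repeated per-clip play operations.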
In this embodiment, by adding a gender identifier to each person's voice data and establishing the association between gender identifiers and voice data, playback by gender can be performed on the target image, so that the voice data recorded at shooting time can be reproduced by the gender of the people in the image.
In this embodiment of the present invention, during image capture the mobile terminal records the voice data of each person speaking in the shooting preview picture, while recording each person's speaking order and biometric information, and establishes an association among each person's biometric information, speaking order, and voice data; after the target image is generated, it is stored in association with each person's voice data. With this embodiment, by establishing the association among each person's biometric information, speaking order, and voice data, the voice data associated with a speaking order or with biometric information can be played back while browsing the target image. This solves the problem that an image record is a still picture through which the scene at the time of shooting cannot be experienced, enabling users to play back each person's speech from the time of shooting through the image and reproduce the scene at the time of shooting.
Embodiment 3
Fig. 3 is a flowchart of an image capture method according to Embodiment 3 of the present invention.
Referring to Fig. 3, an image capture method according to an embodiment of the present invention, applied to a mobile terminal having a camera, includes:
Step 301: during image capture, record the voice data of each person speaking in the shooting preview picture.
The mobile terminal of this embodiment may be provided with a microphone, through which the voice data of each person speaking in the preview picture is recorded during shooting. The scene may be, for example, students being photographed at graduation or at a tenth-anniversary reunion, or of course another occasion such as a family photo; this embodiment is not limited in this respect. The captured image may be panoramic or non-panoramic; for example, each person's voice data may be recorded while a panoramic group photo is being shot. This embodiment is illustrated with a graduation photo as an example: when shooting a class group photo at graduation, the graduation remarks of each student in the shooting preview picture can be recorded during shooting.
Step 302: during voice data recording, record each person's speaking order and each person's biometric information.
While the voice data is being recorded, face recognition can be performed on each person in the shooting preview picture to obtain each person's biometric information, which may be facial feature information, age information, and the like. At the same time, each person's speaking order is recorded; the speaking order may be recorded with sequence numbers, though other ways may of course be used. For example, during shooting of a graduation group photo, the participants may speak in the order of the positions where they stand, and each person's speaking order and biometric information are recorded accordingly; alternatively, a preset order may be used. For instance, while the graduation class group photo mentioned above is being shot, each student says a few brief graduation remarks, the time order in which each person speaks is recorded, and each person's biometric information is then recorded. This embodiment places no restriction on the order in which people speak.
Step 303: extract the age information from each person's biometric information.
In a preferred embodiment of the present invention, the biometric information includes age information. The age information may be added or generated when each person's biometric information is recorded: for example, face recognition may be performed on a person's face so that an approximate age is automatically generated in the biometric information; alternatively, after everyone's biometric information has been recognized, the user may add age information by manual editing. For example, when shooting a graduation class group photo, each participant's age information can be extracted; it may be extracted from the biometric information, or entered manually by the user for each person.
Step 304: according to each person's age information and preset age-bracket information, add an age identifier to the voice data of each person.
In this embodiment, preset age-bracket information can be set. For example, ages 1-10 form one bracket with age identifier 0; ages 11-20 form another bracket with age identifier 1; and so on. Age brackets and age identifiers may of course be set in other ways, which this embodiment does not limit.
For example, for a graduation class group photo, each participant's age information can be compared with the preset age-bracket information to determine which bracket each person's age belongs to, and the corresponding age identifier can then be added to each person's voice data. Specifically, the voice data of people aged 1-10 in the group photo is given the age identifier "0", the voice data of people aged 11-20 the identifier "1", and so on.
Step 305: establish an association between each person's voice data and the age identifier, where the age identifiers of people in the same age bracket are identical.
In this embodiment, an association between each person's voice data and the age identifier can be established. Specifically, a mapping or index between each person's voice data and age identifier can be built; when the voice data associated with an age identifier needs to be looked up, all voice data with that identifier can be found through the mapping or index. For example, in the graduation class group photo, if someone is 15 years old, their age identifier may be 1, representing the 11-20 bracket; looking up the voice data with age identifier 1 then finds the voice data of everyone aged 11-20 in the group photo. In this way, voice data can be classified and retrieved by age, making it convenient for the user to play voice data by age group.
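Under the example bracket scheme above (1-10 → 0, 11-20 → 1, and so on; the bracket width and all names are assumptions for illustration), steps 304-305 can be sketched as:

```python
def age_identifier(age, bracket_size=10):
    """Map an age to its bracket identifier under the example scheme
    (1-10 -> 0, 11-20 -> 1, ...). The bracket size is an assumption."""
    return (age - 1) // bracket_size

def build_age_index(records):
    """Associate each age identifier with the voice clips of everyone
    in that bracket, as in step 305."""
    index = {}
    for rec in records:
        index.setdefault(age_identifier(rec["age"]), []).append(rec["voice_file"])
    return index

idx = build_age_index([
    {"age": 15, "voice_file": "clip_0.wav"},
    {"age": 18, "voice_file": "clip_1.wav"},
    {"age": 8,  "voice_file": "clip_2.wav"},
])
print(idx[1])  # ['clip_0.wav', 'clip_1.wav']  (both in the 11-20 bracket)
```

A 15-year-old and an 18-year-old land in the same bracket (identifier 1), so one lookup retrieves both clips, matching the requirement that identifiers within a bracket be identical.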
Step 306: generate the target image when image capture is complete.
After each person's voice data and biometric information have been recorded, and the camera of the mobile terminal has performed image processing such as focusing and exposure, the target image is generated, in a format such as jpg, jpeg, gif, png, or bmp. For example, a graduation class group photo may be a panoramic or non-panoramic image in one of those formats.
Step 307: store the target image in association with the voice data of each person.
After generating target image, target image and the voice data that everyone speaks can be associated storage, specifically
, what target image, biological information, voice data, age mark and voice data and age can be identified is associated with
Relationship storage plays the voice data that everyone speaks in target image according to target image can be realized, for example, graduating
In collective's group photo of class, collective can be taken a group photo in the collective's group photo image, everyone the biological characteristic letter in image that generate
The incidence relation that breath, voice data, age mark, voice data and age identify is associated storage, can thus pass through
Everyone voice data in age bracket broadcasting image in collective's group photo image where the age of each personage.
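As a hedged sketch (not part of the disclosure; the field names are assumptions), the associated storage of Step 307 could group the image, the per-person biometrics and voice data, and the age identifiers under one record:

```python
# One associated-storage record for a target image (field names are assumed).
photo_record = {
    "target_image": "class_photo.jpg",
    "people": [
        {"biometrics": "face_a", "voice": "a.wav", "age_identifier": 1},
        {"biometrics": "face_b", "voice": "b.wav", "age_identifier": 0},
    ],
}

def voices_for_age(record, age_identifier):
    """Return the voice data of everyone in the photo with the given age identifier."""
    return [p["voice"] for p in record["people"] if p["age_identifier"] == age_identifier]

voices_for_age(photo_record, 1)  # -> ["a.wav"]
```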
Step 308: a second voice play instruction input by the mobile terminal user for the target image is received.

A menu can be provided on the target image; the menu includes play options, and the play options include multiple age-bracket options. When the user selects a corresponding age bracket on the displayed target image, the mobile terminal receives the user's selection operation and generates a second voice play instruction containing the age identifier. For example, when the user enters the mobile terminal photo album and browses the graduating-class group photo image, play options including multiple age-bracket options can be displayed on the group photo image; when the user selects one of them, the mobile terminal receives the selection operation and generates the second voice play instruction. When the user selects the age bracket 0-10 years, the corresponding age identifier is 0, and the second voice play instruction then contains age identifier 0.
Step 309: the target age identifier in the second voice play instruction is extracted.

After the second voice play instruction is received, the target age identifier can be extracted. Specifically, a feature field in the second voice play instruction can be extracted and matched; the feature field may be a character string containing the age identifier, and after the feature field is extracted and matched against preset characters, the age identifier is obtained. For example, when the user browses the graduating-class group photo image shot at graduation and selects 0-10 years among the age-bracket play options, the corresponding age identifier is 0, and the target age identifier is therefore 0.
Step 310: according to the target age identifier, all voice data associated with the target age identifier is played.

After the target age identifier is extracted, the voice data of everyone speaking in the age bracket corresponding to that identifier can be obtained according to the association between the age identifier and the voice data, and all the obtained voice data is played. In this way, the user can play voice data according to the age of the persons in the target image, reproducing the voice data at the time of shooting by person age. When the user wants to play the voice data of all the younger or all the older persons, repeated click-to-play operations are not needed; a single instruction suffices to play the voice data of all the younger or older persons. For example, for the graduating-class group photo image, the voice data spoken by the people of a specified age bracket in the group photo can be played, such as the voice data of the students or of the teachers.
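Steps 309 and 310 together might be sketched as follows; this is a hedged illustration only, and the `"play:age=<identifier>"` instruction format is an assumption, not the format defined by the disclosure:

```python
def extract_target_age_identifier(instruction):
    """Parse the feature field of a (hypothetical) second voice play instruction."""
    # Assumed instruction format: "play:age=<identifier>"
    field, _, value = instruction.partition("=")
    if field != "play:age":
        raise ValueError("not an age play instruction")
    return int(value)

def play_by_age(instruction, age_index, play=print):
    """Play every voice file associated with the target age identifier."""
    age_id = extract_target_age_identifier(instruction)
    played = []
    for voice_file in age_index.get(age_id, []):
        play(voice_file)  # a real terminal would hand this to its audio player
        played.append(voice_file)
    return played

age_index = {0: ["pupil.wav"], 1: ["student_a.wav", "student_b.wav"]}
play_by_age("play:age=1", age_index)  # plays both recordings in the 11-20 bracket
```

One selection by the user thus plays the whole bracket at once, matching the "single instruction suffices" behavior described above.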
In the embodiment of the present invention, an age identifier is added to each person's voice data and the association between the age identifier and the voice data is established; persons in the same age bracket share the same age identifier, so voice data can be played on the target image according to person age, and the voice data at the time of shooting can be reproduced by person age.
In the embodiment of the present invention, during image capture the mobile terminal records the voice data of each person speaking in the shooting preview picture, while also recording each person's speaking order and each person's biometric information; it establishes the association among each person's biometric information, the speaking order, and the voice data, and after the target image is generated, stores the target image and the voice data of each person speaking in association. With the embodiment of the present invention, by establishing the association among each person's biometric information, the speaking order, and the voice data, the voice data associated with the speaking order or with the biometric information can be played while the target image is browsed. This solves the problem that what an image records is a still picture through which the scene at the time of shooting cannot be experienced, and enables the user to play through the image the voice data of each person speaking at the time of shooting, reproducing the scene at the time of shooting.
Embodiment four

Fig. 4 is a flowchart of an image capture method according to the fourth embodiment of the present invention.

Referring to Fig. 4, the image capture method of the embodiment of the present invention is applied to a mobile terminal having a camera and includes:
Step 401: during image capture, the voice data of each person speaking in the shooting preview picture is recorded.

The mobile terminal of the embodiment of the present invention may be provided with a microphone, through which the voice data of each person speaking in the shooting preview picture is recorded during shooting. The scene may be, for example, an image shot at a student graduation or at a ten-year graduation reunion, or of course any other scene such as a family photo; the embodiment of the present invention is not limited in this respect. The captured image may be panoramic or non-panoramic; for example, each person's voice data may be recorded while a panoramic group photo is shot. The embodiment of the present invention is described taking the shooting of a graduation photo as an example: when a graduating-class group photo is shot, the graduation remarks of each student in the shooting preview picture may be recorded.
Step 402: during voice data recording, each person's speaking order and each person's biometric information are recorded.

In the embodiment of the present invention, face recognition can be performed on each person in the shooting preview picture through facial recognition technology to obtain each person's biometric information; the biometric information may be each person's facial feature information. At the same time, each person's speaking order is recorded; the speaking order may be recorded with sequential numbers, or of course in another way. For example, during the shooting of a graduation group photo, the persons being photographed may speak in the order of their standing positions, and each person's speaking order and biometric information are then recorded. Alternatively, a preset order may be used: for example, while the above-mentioned graduating-class group photo is shot, each student says a brief graduation comment, the time order in which each person speaks is recorded, and each person's biometric information is then recorded. The embodiment of the present invention places no restriction on the order in which the persons speak.
Step 403: the association among each person's biometric information, the speaking order, and the voice data is established.

So that each person's voice data and biometric information correspond one to one, the association among each person's biometric information, speaking order, and voice data can be established. For example, for the shooting of a graduating-class group photo, each person's speaking order can be numbered sequentially, and the same number is then added, according to the speaking order, to each person's biometric information and voice data; biometric information and voice data carrying the same number belong to the same person. Of course, the embodiment of the present invention may also establish the association in another way and is not limited in this respect.
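A minimal sketch of the numbering scheme just described, in which the same sequence number ties one person's biometric information to that person's voice data; the data shapes are assumptions for illustration, not part of the disclosure:

```python
def associate_by_speaking_order(biometrics, voice_files):
    """Both lists are in speaking order; tag each pair with the same sequence number."""
    records = []
    for number, (bio, voice) in enumerate(zip(biometrics, voice_files)):
        records.append({"number": number, "biometrics": bio, "voice": voice})
    return records

records = associate_by_speaking_order(["face_a", "face_b"], ["a.wav", "b.wav"])
# records[0] groups face_a and a.wav under number 0: same number -> same person.
```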
Step 404: after image capture is completed, the target image is generated.

After the recording of the voice data and the biometric information of each person speaking is completed, the camera of the mobile terminal performs image processing such as focusing and exposure and then generates the target image. The target image may be in a format such as jpg, jpeg, gif, png, or bmp; for example, for a graduating-class group photo, it may be a panoramic or non-panoramic image in one of these formats.
Step 405: the target image and the voice data of each person speaking are stored in association.

After the target image is generated, the target image and the voice data of each person speaking can be stored in association. Specifically, the target image, the biometric information, the voice data, each person's speaking order, and the association between the speaking order and the voice data can be stored; in this way, the voice data of each person speaking in the target image can be played from the target image. For example, for the group photo of a graduating class, the generated group photo image, each person's biometric information, voice data, speaking order, and the association between the voice data and the speaking order are stored together, so that each person's voice data in the image can be played from the group photo image in the preset order.
Step 406: a third voice play instruction input by the mobile terminal user for the target image is received.

A menu can be provided on the target image; the menu includes a sequential-play option. When the user selects the sequential-play option on the displayed target image, the mobile terminal receives the user's selection operation and generates a third voice play instruction containing the playing order. For example, for the group photo image shot at graduation, when the user enters the mobile terminal photo album and browses the group photo image, play options including a sequential-play option can be displayed on the image; when the user selects sequential play, the mobile terminal receives the selection operation and generates the third voice play instruction.
Step 407: according to the recorded speaking order of each person, the voice data of each person speaking is played in order.

After the third voice play instruction is received, the corresponding voice data is extracted and played piece by piece according to the playing order in the instruction. For example, for a group photo image shot at graduation, the speaking order recorded while each person's voice data was recorded during the shooting is followed, so that the voice data of each person is played in the order in which they spoke in the group photo, reproducing the scene at the time of shooting more faithfully.
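The sequential playback of Step 407 might be sketched as follows; the record shape, with a `number` field holding the speaking order, is an assumption for illustration:

```python
def play_in_speaking_order(records, play=print):
    """records: dicts with 'number' (speaking order) and 'voice' (file); play in order."""
    ordered = sorted(records, key=lambda r: r["number"])
    for record in ordered:
        play(record["voice"])  # hand each file to the terminal's audio player in turn
    return [r["voice"] for r in ordered]

records = [{"number": 1, "voice": "b.wav"}, {"number": 0, "voice": "a.wav"}]
play_in_speaking_order(records)  # a.wav first, then b.wav
```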
In the embodiment of the present invention, the voice data of each person speaking is played in order according to each person's recorded speaking order, so that the scene at the time of shooting can be reproduced more faithfully.
In the embodiment of the present invention, during image capture the mobile terminal records the voice data of each person speaking in the shooting preview picture, while also recording each person's speaking order and each person's biometric information; it establishes the association among each person's biometric information, the speaking order, and the voice data, and after the target image is generated, stores the target image and the voice data of each person speaking in association. With the embodiment of the present invention, by establishing the association among each person's biometric information, the speaking order, and the voice data, the voice data associated with the speaking order or with the biometric information can be played while the target image is browsed. This solves the problem that what an image records is a still picture through which the scene at the time of shooting cannot be experienced, and enables the user to play through the image the voice data of each person speaking at the time of shooting, reproducing the scene at the time of shooting.
Embodiment five

Fig. 5 is a flowchart of an image capture method according to the fifth embodiment of the present invention.

Referring to Fig. 5, the image capture method of the embodiment of the present invention is applied to a mobile terminal having a camera and includes:
Step 501: during image capture, the voice data of each person speaking in the shooting preview picture is recorded.

The mobile terminal of the embodiment of the present invention may be provided with a microphone, through which the voice data of each person speaking in the shooting preview picture is recorded during shooting. The scene may be, for example, an image shot at a student graduation or at a ten-year graduation reunion, or of course any other scene such as a family photo; the embodiment of the present invention is not limited in this respect. The captured image may be panoramic or non-panoramic; for example, each person's voice data may be recorded while a panoramic group photo is shot. The embodiment of the present invention is described taking the shooting of a graduation photo as an example: when a graduating-class group photo is shot, the graduation remarks of each student in the shooting preview picture may be recorded.
Step 502: during voice data recording, each person's speaking order and each person's biometric information are recorded.

During voice data recording, face recognition can be performed on each person in the shooting preview picture through facial recognition technology to obtain each person's biometric information; the biometric information may be each person's facial feature information, gender information, age information, and the like. At the same time, each person's speaking order is recorded. For example, during the shooting of a graduation group photo, the persons being photographed may speak in the order of their standing positions, and each person's speaking order and biometric information are then recorded. Alternatively, a preset order may be used: for example, while the above-mentioned graduating-class group photo is shot, each student says a brief graduation comment, the time order in which each person speaks is recorded, and each person's biometric information is then recorded. The embodiment of the present invention places no restriction on the order in which the persons speak.
Step 503: the association among each person's biometric information, the speaking order, and the voice data is established.

So that each person's voice data and biometric information correspond one to one, the association among each person's biometric information, speaking order, and voice data can be established. Specifically, each person's speaking order can be numbered sequentially, and the same number is then added, according to the speaking order, to each person's biometric information and voice data; biometric information and voice data carrying the same number belong to the same person. Of course, the embodiment of the present invention may also establish the association in another way and is not limited in this respect.
Step 504: after image capture is completed, the target image is generated.

After the recording of the voice data and the biometric information of each person speaking is completed, the camera of the mobile terminal performs image processing such as focusing and exposure and then generates the target image. The target image may be in a format such as jpg, jpeg, gif, png, or bmp; for example, for a graduating-class group photo, it may be a panoramic or non-panoramic image in one of these formats.
Step 505: the target image and the voice data of each person speaking are stored in association.

After the target image is generated, the target image and the voice data of each person speaking can be stored in association. Specifically, the target image, the biometric information, the voice data, and the association between the voice data and the speaking order can be stored; in this way, the voice data of each person speaking in the target image can be played from the target image. For example, for the group photo of a graduating class, the generated group photo image, each person's biometric information, voice data, speaking order, and the association between the voice data and the speaking order are stored together.
Step 506: each person's name information in the target image is obtained.

Each person in the target image has a name, and each person's name information can be obtained. The name information may be input by the user while each person's biometric information is recorded: specifically, the user may input a person's name information when that person's biometric information is recorded, or the user may edit and input each person's name information after everyone's biometric information has been recorded. In practical applications, the name of each person in a group photo may be obtained; for example, in a group photo image shot at graduation, the name of each student or teacher input by the user can be obtained.
Step 507: according to each person's name information, a name identifier is added for each person in the target image.

Since the voice data of each person speaking can be stored separately, a name identifier can be added to the voice data of each person speaking; each person's name identifier may be that person's name information in the target image. For a group photo shot at graduation, the name of each student or teacher in the group photo image can be used as the name identifier.
Step 508: the association between each person's voice data and the name identifier is established.

In the embodiment of the present invention, the voice data of each person speaking in the target image can be matched to that person through the name identifier. Since a name identifier has been added for each person in the target image, the association between the voice data and the name identifier can be established, so that the voice data of each person speaking in the target image corresponds to that person. For example, a mapping relation or an index relation between each person's voice data and the name identifier can be established. In a group photo image shot at graduation, when a student's name is "Zhang San", the name identifier may be "Zhang San"; when the voice data of the student Zhang San speaking needs to be looked up, the corresponding voice data can be found through the mapping relation or index relation of the name identifier "Zhang San". In this way, voice data can be looked up by name, making it easy to find the voice data of a specific person.
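A minimal sketch of the name-identifier lookup just described (the dictionary shape and file names are assumptions for illustration):

```python
def build_name_index(people):
    """people: list of (name_identifier, voice_file) pairs recorded with the photo."""
    return {name: voice_file for name, voice_file in people}

name_index = build_name_index([("Zhang San", "zhangsan.wav"), ("Li Si", "lisi.wav")])
# Looking up "Zhang San" returns the voice data of that specific person.
```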
Step 509: audio data newly added on the mobile terminal is obtained.

In the embodiment of the present invention, the persons in the target image at the time of shooting may be added to the same chat group, such as a QQ group or a WeChat group, and the voice data of each person speaking in the chat group can be obtained. Typically, the voice data of each person speaking in the chat group is stored on the mobile terminal as audio data, and the newly added audio data on the mobile terminal can be obtained. Of course, the audio data may also be audio of someone in the image recorded again by the mobile terminal. For example, a class QQ group or WeChat group may be created after the students graduate, and voice chats can be carried out in the group, such as each person again saying a brief comment on a graduation anniversary; the audio data of these voice chats can be stored on the mobile terminal.
Step 510: the data source or voiceprint information of the audio data is extracted.

If the newly added audio data belongs to the chat voice data of the members of a chat group, the specific data source of the audio data is extracted, such as "voice data of Zhang San from the QQ group". When the newly added audio data is the speech of each person recorded directly by the microphone of the mobile terminal, each person's voiceprint information can be extracted directly from the recording; the voiceprint information may point to the file path of the voice data of each person speaking. For example, for the class QQ group, the data source may be the voice data of Zhang San from the QQ group, or a local recording of Zhang San.
Step 511: according to the data source or voiceprint information of the audio data, each voice data stored in association with the target image is updated in real time.

As shown in Fig. 6, in the embodiment of the present invention, updating in real time each voice data stored in association with the target image according to the data source or voiceprint information of the audio data includes the following steps:

Step 5111: according to the data source or voiceprint information of the audio data, the speaker name corresponding to the audio data is determined.
In practical applications, each person's name in a chat group can serve as that person's card in the group. When voice chats are carried out in the chat group, the audio data of each person speaking is stored in a file named after that person, for example "voice chat of Zhang San from the QQ group"; audio data recorded by the mobile terminal can likewise be saved as a file named after the speaker. Of course, storage can also be done in other ways. In this way, the speaker name corresponding to the audio data can be obtained from the stored file name.
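A hedged sketch of recovering the speaker name from a stored file name; the `"<source>_<name>.wav"` naming convention below is purely an assumption for illustration, since the disclosure only says the file is named after the speaker:

```python
import os

def speaker_name_from_file(path):
    """Recover the speaker name from a file named '<source>_<name>.wav' (assumed convention)."""
    stem = os.path.splitext(os.path.basename(path))[0]
    _, _, name = stem.rpartition("_")  # everything after the last underscore is the name
    return name or stem

speaker_name_from_file("/chats/qq_group_Zhang San.wav")  # -> "Zhang San"
```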
Step 5112: according to the speaker name corresponding to the audio data, the target name identifier corresponding to the speaker name is determined.

After the speaker name corresponding to the audio data is determined, the name can be matched against all name identifiers; when a matching name identifier is found, that name identifier is determined as the target name identifier.
Step 5113: the voice data associated with the target name identifier is replaced with the audio data.

After the target name identifier is determined, the voice data associated with the target name identifier is obtained according to the association between the voice data and the name identifier and is replaced with the new audio data, for example by overwriting the original voice data in storage. For example, for a group photo image of a graduation, when someone in the group photo has new audio data in a QQ group, a WeChat group, or locally, the new audio data replaces that person's corresponding voice data in the group photo image. In this way, by matching the name identifier with the speaker name of the audio data newly added on the mobile terminal, the voice data in the target image is updated in real time.
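The matching and replacement of Steps 5112 and 5113 might be sketched as follows; the in-memory store is an assumption for illustration, whereas a real terminal would overwrite the stored voice file:

```python
def update_voice_data(store, speaker_name, new_audio):
    """store: name identifier -> voice file. Replace the matching entry with new_audio."""
    for name_identifier in store:
        if name_identifier == speaker_name:     # Step 5112: find the target name identifier
            store[name_identifier] = new_audio  # Step 5113: overwrite the old voice data
            return True
    return False  # no matching person in the target image

store = {"Zhang San": "old_zhangsan.wav"}
update_voice_data(store, "Zhang San", "qq_group_Zhang San.wav")
# store["Zhang San"] now holds the newly added audio data.
```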
In the embodiment of the present invention, each person's name information in the target image is obtained and used as each person's name identifier in the target image, and the association between the name identifier and the voice data is then established. When audio data is newly added on the mobile terminal, the speaker name of the audio data is matched against the name identifiers, and the newly added audio data is used to update the corresponding voice data in the target image, achieving real-time updating of the voice data in the target image.
In the embodiment of the present invention, during image capture the mobile terminal records the voice data of each person speaking in the shooting preview picture, while also recording each person's speaking order and each person's biometric information; it establishes the association among each person's biometric information, the speaking order, and the voice data, and after the target image is generated, stores the target image and the voice data of each person speaking in association. With the embodiment of the present invention, by establishing the association among each person's biometric information, the speaking order, and the voice data, the voice data associated with the speaking order or with the biometric information can be played while the target image is browsed. This solves the problem that what an image records is a still picture through which the scene at the time of shooting cannot be experienced, and enables the user to play through the image the voice data of each person speaking at the time of shooting, reproducing the scene at the time of shooting.
Embodiment six

Fig. 7 is a structural block diagram of a mobile terminal according to the sixth embodiment of the present invention.

Referring to Fig. 7, a mobile terminal 600 provided in an embodiment of the present invention includes a camera and further includes:

a voice data recording module 601, configured to record, during image capture, the voice data of each person speaking in the shooting preview picture;

a logging module 602, configured to record, during voice data recording, each person's speaking order and each person's biometric information;

a first association establishing module 603, configured to establish the association among each person's biometric information, the speaking order, and the voice data;

a target image generation module 604, configured to generate a target image after image capture is completed;

a storage module 605, configured to store in association the target image generated by the target image generation module and the voice data of each person speaking recorded by the voice data recording module.
Referring to Fig. 7a, in another mobile terminal 600 of the embodiment of the present invention, the first association establishing module 603 includes:

a gender information extracting sub-module 6031, configured to extract the gender information in each person's biometric information;

a gender identifier adding sub-module 6032, configured to add, according to each person's gender information extracted by the gender information extracting sub-module, a gender identifier to the voice data of each person speaking;

a first association establishing sub-module 6033, configured to establish the association between each person's voice data and the gender identifier added by the gender identifier adding sub-module;

wherein the gender identifier includes male and female.

The mobile terminal 600 further includes:

a first play instruction receiving module 606, configured to receive a first voice play instruction input by the mobile terminal user for the target image;

a target gender identifier extraction module 607, configured to extract the target gender identifier in the first voice play instruction received by the first play instruction receiving module;

a first playing module 608, configured to play, according to the target gender identifier extracted by the target gender identifier extraction module, all voice data associated with the target gender identifier.
Referring to Fig. 7b, in another mobile terminal 600 of the embodiment of the present invention, the first association establishing module 603 includes:

an age information extracting sub-module 6034, configured to extract the age information in each person's biometric information;

an age identifier adding sub-module 6035, configured to add, according to each person's age information extracted by the age information extracting sub-module and preset age-bracket information, an age identifier to the voice data of each person speaking;

a second association establishing sub-module 6036, configured to establish the association between each person's voice data and the age identifier added by the age identifier adding sub-module;

wherein the age identifiers of persons belonging to the same age bracket are identical.

The mobile terminal 600 further includes:

a second play instruction receiving module 609, configured to receive a second voice play instruction input by the mobile terminal user for the target image;

a target age identifier extraction module 610, configured to extract the target age identifier in the second voice play instruction received by the second play instruction receiving module;

a second playing module 611, configured to play, according to the target age identifier extracted by the target age identifier extraction module, all voice data associated with the target age identifier.
Referring to Fig. 7c, another mobile terminal 600 of the embodiment of the present invention further includes:

a third play instruction receiving module 612, configured to receive a third voice play instruction input by the mobile terminal user for the target image;

a third playing module 613, configured to play in order, according to each person's recorded speaking order, the voice data of each person speaking.
Referring to Fig. 7d, another mobile terminal 600 of the embodiment of the present invention further includes:

a name information obtaining module 614, configured to obtain each person's name information in the target image;

a name identifier adding module 615, configured to add, according to each person's name information obtained by the name information obtaining module, a name identifier for each person in the target image;

a second association establishing module 616, configured to establish the association between each person's voice data and the name identifier added by the name identifier adding module;

an audio data obtaining module 617, configured to obtain the audio data newly added on the mobile terminal;

an audio data information extraction module 618, configured to extract the data source or voiceprint information of the audio data obtained by the audio data obtaining module;

a voice data update module 619, configured to update in real time, according to the data source or voiceprint information of the audio data extracted by the audio data information extraction module, each voice data stored in association with the target image.

The voice data update module 619 includes:

a name determining sub-module 6191, configured to determine, according to the data source or voiceprint information of the audio data, the speaker name corresponding to the audio data;

a name identifier determining sub-module 6192, configured to determine, according to the speaker name corresponding to the audio data determined by the name determining sub-module, the target name identifier corresponding to the speaker name;

a voice data replacing sub-module 6193, configured to replace the voice data associated with the target name identifier determined by the name identifier determining sub-module with the audio data.
The mobile terminal can implement each process implemented by the mobile terminal in the method embodiments of Fig. 1 to Fig. 6; to avoid repetition, details are not described here again.
In the embodiment of the present invention, a gender identifier is added to each person's voice data and the association between the gender identifier and the voice data is established, so that playback on the target image according to person gender can be realized and the voice data at the time of shooting can be reproduced by person gender.
Further, an age identifier is added to each person's voice data and the association between the age identifier and the voice data is established; persons in the same age bracket share the same age identifier, so voice data can be played on the target image according to person age, and the voice data at the time of shooting can be reproduced by person age.
Further, the voice data of each person speaking is played in order according to each person's recorded speaking order, so that the scene at the time of shooting can be reproduced more faithfully.
Further, each person's name information in the target image is obtained and used as each person's name identifier in the target image, and the association between the name identifier and the voice data is then established; when audio data is newly added on the mobile terminal, the speaker name of the audio data is matched against the name identifiers, and the newly added audio data is used to update the corresponding voice data in the target image, achieving real-time updating of the voice data in the target image.
In the embodiment of the present invention, during image shooting the mobile terminal records the voice data of each person speaking in the shooting preview picture, while also recording each person's speaking order and each person's biometric information; it establishes associations between each person's biometric information, the speaking order and the voice data, and after generating the target image, stores the target image in association with the voice data of each person. With the embodiment of the present invention, by establishing associations between each person's biometric information, the speaking order and the voice data, the voice data associated with a speaking order or with biometric information can be played while browsing the target image. This solves the problem that an image records only a still picture, through which the scene at shooting time cannot be experienced: the user can play back, through the image, the voice data of each person speaking at shooting time, thereby reproducing the scene at shooting time.
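The association described above (per-person voice data plus speaking order, gender mark, age mark and name mark, stored together with the target image) can be sketched as a small data model. All names and types below (PersonVoice, ShotRecord, the mark fields) are illustrative assumptions for the sketch, not identifiers from the patent:

```python
# Minimal sketch, assuming one record per captured image that holds each
# person's voice clip together with the marks the patent associates with it.
from dataclasses import dataclass, field

@dataclass
class PersonVoice:
    person_id: int
    speak_order: int     # order in which this person spoke during shooting
    gender_mark: str     # "male" / "female"
    age_mark: int        # identical for everyone in the same age bracket
    name_mark: str       # taken from the person's name information
    voice_clip: bytes    # the recorded voice data

@dataclass
class ShotRecord:
    image_path: str
    voices: list = field(default_factory=list)

    def add_voice(self, v: PersonVoice) -> None:
        self.voices.append(v)

    def voices_by(self, **marks) -> list:
        """Return clips whose marks all match, e.g. gender_mark='female'."""
        return [v for v in self.voices
                if all(getattr(v, k) == val for k, val in marks.items())]

record = ShotRecord("IMG_0001.jpg")
record.add_voice(PersonVoice(1, 1, "female", 2, "Alice", b"..."))
record.add_voice(PersonVoice(2, 2, "male", 2, "Bob", b"..."))
```

Keeping every mark on the same record is one way to support all four playback modes (by gender, by age, by order, by name) from a single associated store.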
Embodiment seven
Fig. 8 is a structural block diagram of a mobile terminal provided in Embodiment Seven of the present invention. The mobile terminal 1000 shown in Fig. 8 includes: at least one processor 1001, a memory 1002, at least one network interface 1004, a camera 1006, a microphone 1007 and a user interface 1003. The various components in the mobile terminal 1000 are coupled together through a bus system 1005. It can be understood that the bus system 1005 is used to realize connection and communication between these components. Besides a data bus, the bus system 1005 also includes a power bus, a control bus and a status signal bus. For clarity of description, however, the various buses are all labeled as the bus system 1005 in Fig. 8.
The user interface 1003 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch-sensitive pad or a touch screen, etc.).
It can be understood that the memory 1002 in the embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM) and direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1002 of the systems and methods described in the embodiments of the present invention is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 1002 stores the following elements, executable modules or data structures, or a subset or superset thereof: an operating system 10021 and application programs 10022.
The operating system 10021 includes various system programs, such as a framework layer, a core library layer, a driver layer and so on, for realizing various basic services and processing hardware-based tasks. The application programs 10022 include various application programs, such as a media player (Media Player), a browser (Browser) and so on, for realizing various application services. A program implementing the method of the embodiment of the present invention may be included in the application programs 10022.
In the embodiment of the present invention, by calling a program or instructions stored in the memory 1002, specifically a program or instructions stored in the application programs 10022, the processor 1001 is configured to: during image shooting, record the voice data of each person speaking in the shooting preview picture; during voice data recording, record each person's speaking order and each person's biometric information; establish associations between each person's biometric information, the speaking order and the voice data; after the image shooting is completed, generate a target image; and store the target image in association with the voice data of each person.
The methods disclosed in the above embodiments of the present invention may be applied to the processor 1001, or implemented by the processor 1001. The processor 1001 may be an integrated circuit chip with signal processing capability. In implementation, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 1001 or by instructions in the form of software. The above processor 1001 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute each method, step and logic block diagram disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in this field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 1002; the processor 1001 reads the information in the memory 1002 and completes the steps of the above methods in combination with its hardware.
The camera 1006 is used to shoot and generate the target image, and the microphone 1007 is used to record the voice data of each person speaking in the shooting preview picture.
It can be understood that the embodiments described in the embodiments of the present invention may be implemented with hardware, software, firmware, middleware, microcode or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For a software implementation, the techniques described in the embodiments of the present invention may be implemented by modules (such as procedures, functions and so on) that perform the functions described herein. The software code may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Preferably, the processor 1001 is further configured to: extract the gender information from each person's biometric information; according to each person's gender information, add a gender mark to the voice data of each person; and establish an association between each person's voice data and the gender mark; wherein the gender marks include male and female.
Preferably, the processor 1001 is further configured to: receive a first voice play instruction input by the mobile terminal user for the target image; extract the target gender mark in the first voice play instruction; and, according to the target gender mark, play all voice data associated with the target gender mark.
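As a rough illustration of this play-by-gender behavior, the sketch below extracts a target gender mark from a play instruction and plays every associated clip. The dictionary layout, the instruction format and the play callback are assumptions made for the example, not the patent's actual interfaces:

```python
# Hypothetical handler for the "first voice play instruction": play all
# clips whose gender mark matches the instruction's target gender mark.
def handle_gender_play(instruction, voices, play):
    target = instruction["target_gender"]          # e.g. "female"
    for v in voices:
        if v["gender_mark"] == target:
            play(v["clip"])                        # play each associated clip

played = []
voices = [
    {"gender_mark": "female", "clip": "clip-1"},
    {"gender_mark": "male",   "clip": "clip-2"},
    {"gender_mark": "female", "clip": "clip-3"},
]
handle_gender_play({"target_gender": "female"}, voices, played.append)
```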
Preferably, the processor 1001 is further configured to: extract the age information from each person's biometric information; according to each person's age information and preset age-bracket information, add an age mark to the voice data of each person; and establish an association between each person's voice data and the age mark; wherein the age marks of people belonging to the same age bracket are identical.
Preferably, the processor 1001 is further configured to: receive a second voice play instruction input by the mobile terminal user for the target image; extract the target age mark in the second voice play instruction; and, according to the target age mark, play all voice data associated with the target age mark.
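One possible way to derive an age mark from "preset age-bracket information", so that everyone in the same bracket receives an identical mark, is shown below. The bracket boundaries are made-up values for the sketch; the patent does not specify them:

```python
# Illustrative age-mark assignment: the mark is simply the index of the
# preset bracket the person's age falls into, so people in the same
# bracket share one mark.
import bisect

BRACKET_UPPER_BOUNDS = [12, 18, 30, 45, 60]   # assumed preset age-segment info

def age_mark(age: int) -> int:
    """Return a mark shared by everyone in the same age bracket."""
    return bisect.bisect_right(BRACKET_UPPER_BOUNDS, age)
```

With this scheme, a second voice play instruction carrying a target age mark can select every clip whose stored mark equals it.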
Preferably, the processor 1001 is further configured to: receive a third voice play instruction input by the mobile terminal user for the target image; and, according to the recorded speaking order of each person, play the voice data of each person in sequence.
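Playback in speaking order can be sketched by sorting the stored clips on their recorded order before playing them; again, the dictionary layout and play callback are assumptions for the example:

```python
# Minimal sketch of the "third voice play instruction": replay each
# person's clip in the order in which they spoke during shooting.
def play_in_order(voices, play):
    for v in sorted(voices, key=lambda v: v["speak_order"]):
        play(v["clip"])

played = []
play_in_order(
    [{"speak_order": 2, "clip": "B"},
     {"speak_order": 1, "clip": "A"},
     {"speak_order": 3, "clip": "C"}],
    played.append,
)
```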
Preferably, the processor 1001 is further configured to: obtain each person's name information in the target image; according to each person's name information, add a name mark for each person in the target image; and establish an association between each person's voice data and the name mark.
Preferably, the processor 1001 is further configured to: obtain audio data newly added on the mobile terminal; extract the data source or voiceprint information of the audio data; and, according to the data source or voiceprint information of the audio data, update in real time each item of voice data stored in association with the target image.
Preferably, the processor 1001 is further configured to: according to the data source or voiceprint information of the audio data, determine the speaker's name corresponding to the audio data; according to the speaker's name corresponding to the audio data, determine the target name mark corresponding to the speaker's name; and replace the voice data associated with the target name mark with the audio data.
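The real-time update step can be sketched as follows: the new audio's speaker name (resolved from its data source or voiceprint) is matched against the stored name marks, and the matching person's associated clip is replaced. The resolve_speaker callback stands in for the data-source/voiceprint lookup, which the patent leaves unspecified:

```python
# Hypothetical sketch of replacing associated voice data with newly
# added audio, keyed on the target name mark.
def update_voice(voices, new_audio, resolve_speaker):
    name = resolve_speaker(new_audio)       # speaker name from source/voiceprint
    for v in voices:
        if v["name_mark"] == name:          # target name mark found
            v["clip"] = new_audio["clip"]   # replace the associated voice data
            return True
    return False                            # no matching name mark

voices = [{"name_mark": "Alice", "clip": "old"}]
ok = update_voice(voices,
                  {"speaker": "Alice", "clip": "new"},
                  lambda a: a["speaker"])
```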
The mobile terminal 1000 can implement each process implemented by the mobile terminal in the method embodiments of Fig. 1 to Fig. 6; to avoid repetition, details are not described here again.
In the embodiment of the present invention, a gender mark is added to each person's voice data and an association between the gender mark and the voice data is established, so that the voice data can be played back on the target image according to the gender of the persons, reproducing by gender the voice data captured at shooting time.
Further, an age mark is added to each person's voice data and an association between the age mark and the voice data is established, with people in the same age bracket sharing an identical age mark, so that the voice data can be played back on the target image according to the age of the persons, reproducing by age the voice data captured at shooting time.
Further, according to the recorded speaking order of each person, the voice data of each person is played in sequence, which reproduces the scene at shooting time more faithfully.
Further, each person's name information in the target image is obtained and used as that person's name mark in the target image, and an association between the name mark and the voice data is established. When new audio data is added on the mobile terminal, the speaker name of the audio data is matched against the name marks, and the newly added audio data is used to update the corresponding voice data in the target image, thereby realizing real-time updating of the voice data in the target image.
The mobile terminal 1000 of the embodiment of the present invention records, during image shooting, the voice data of each person speaking in the shooting preview picture, while also recording each person's speaking order and each person's biometric information; it establishes associations between each person's biometric information, the speaking order and the voice data, and after generating the target image, stores the target image in association with the voice data of each person. With the embodiment of the present invention, by establishing associations between each person's biometric information, the speaking order and the voice data, the voice data associated with a speaking order or with biometric information can be played while browsing the target image. This solves the problem that an image records only a still picture, through which the scene at shooting time cannot be experienced: the user can play back, through the image, the voice data of each person speaking at shooting time, thereby reproducing the scene at shooting time.
Embodiment eight
Fig. 9 is a structural schematic diagram of a mobile terminal provided in Embodiment Eight of the present invention. Specifically, the mobile terminal 1100 in Fig. 9 may be a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a vehicle-mounted computer, or the like.
The mobile terminal 1100 in Fig. 9 includes a radio frequency (Radio Frequency, RF) circuit 1101, a memory 1102, an input unit 1103, a display unit 1104, a camera 1105, a microphone 1110, a processor 1106, an audio circuit 1107, a WiFi (Wireless Fidelity) module 1108 and a power supply 1109.
The input unit 1103 may be used to receive number or character information input by the user, and to generate signal input related to user settings and function control of the mobile terminal 1100. Specifically, in the embodiment of the present invention, the input unit 1103 may include a touch panel 11031. The touch panel 11031, also referred to as a touch screen, may collect the user's touch operations on or near it (for example, operations performed by the user on the touch panel 11031 with a finger, a stylus or any other suitable object or accessory), and drive the corresponding connection device according to a preset program. Optionally, the touch panel 11031 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1106, and can receive and execute commands sent by the processor 1106. In addition, the touch panel 11031 may be realized in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 11031, the input unit 1103 may also include other input devices 11032, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick and so on.
The display unit 1104 may be used to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal 1100. The display unit 1104 may include a display panel 11041; optionally, the display panel 11041 may be configured in the form of an LCD, an organic light-emitting diode (Organic Light-Emitting Diode, OLED) or the like.
It should be noted that the touch panel 11031 may cover the display panel 11041 to form a touch display screen. After the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 1106 to determine the type of the touch event, and the processor 1106 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application program interface display area and a common control display area. The arrangement of the application program interface display area and the common control display area is not limited; they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application program interface display area may be used to display the interfaces of application programs. Each interface may contain interface elements such as icons of at least one application program and/or widget desktop controls. The application program interface display area may also be an empty interface containing no content. The common control display area is used to display controls with a high usage rate, for example, application icons such as setting buttons, interface numbers, scroll bars and phone directory icons.
The processor 1106 is the control center of the mobile terminal 1100. It connects the various parts of the whole mobile phone through various interfaces and lines, and executes the various functions of the mobile terminal 1100 and processes data by running or executing the software programs and/or modules stored in the first memory 11021 and calling the data stored in the second memory 11022, thereby performing overall monitoring of the mobile terminal 1100. Optionally, the processor 1106 may include one or more processing units.
The camera 1105 is used to shoot and generate the target image, and the microphone 1110 is used to record the voice data of each person speaking in the shooting preview picture.
In the embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 11021 and/or the data stored in the second memory 11022, the processor 1106 is configured to: during image shooting, record the voice data of each person speaking in the shooting preview picture; during voice data recording, record each person's speaking order and each person's biometric information; establish associations between each person's biometric information, the speaking order and the voice data; after the image shooting is completed, generate a target image; and store the target image in association with the voice data of each person.
Preferably, the processor 1106 is further configured to: extract the gender information from each person's biometric information; according to each person's gender information, add a gender mark to the voice data of each person; and establish an association between each person's voice data and the gender mark; wherein the gender marks include male and female.
Preferably, the processor 1106 is further configured to: receive a first voice play instruction input by the mobile terminal user for the target image; extract the target gender mark in the first voice play instruction; and, according to the target gender mark, play all voice data associated with the target gender mark.
Preferably, the processor 1106 is further configured to: extract the age information from each person's biometric information; according to each person's age information and preset age-bracket information, add an age mark to the voice data of each person; and establish an association between each person's voice data and the age mark; wherein the age marks of people belonging to the same age bracket are identical.
Preferably, the processor 1106 is further configured to: receive a second voice play instruction input by the mobile terminal user for the target image; extract the target age mark in the second voice play instruction; and, according to the target age mark, play all voice data associated with the target age mark.
Preferably, the processor 1106 is further configured to: receive a third voice play instruction input by the mobile terminal user for the target image; and, according to the recorded speaking order of each person, play the voice data of each person in sequence.
Preferably, the processor 1106 is further configured to: obtain each person's name information in the target image; according to each person's name information, add a name mark for each person in the target image; and establish an association between each person's voice data and the name mark.
Preferably, the processor 1106 is further configured to: obtain audio data newly added on the mobile terminal; extract the data source or voiceprint information of the audio data; and, according to the data source or voiceprint information of the audio data, update in real time each item of voice data stored in association with the target image.
Preferably, the processor 1106 is further configured to: according to the data source or voiceprint information of the audio data, determine the speaker's name corresponding to the audio data; according to the speaker's name corresponding to the audio data, determine the target name mark corresponding to the speaker's name; and replace the voice data associated with the target name mark with the audio data.
The mobile terminal 1100 can implement each process implemented by the mobile terminal in the foregoing embodiments; to avoid repetition, details are not described here again.
In the embodiment of the present invention, a gender mark is added to each person's voice data and an association between the gender mark and the voice data is established, so that the voice data can be played back on the target image according to the gender of the persons, reproducing by gender the voice data captured at shooting time.
Further, an age mark is added to each person's voice data and an association between the age mark and the voice data is established, with people in the same age bracket sharing an identical age mark, so that the voice data can be played back on the target image according to the age of the persons, reproducing by age the voice data captured at shooting time.
Further, according to the recorded speaking order of each person, the voice data of each person is played in sequence, which reproduces the scene at shooting time more faithfully.
Further, each person's name information in the target image is obtained and used as that person's name mark in the target image, and an association between the name mark and the voice data is established. When new audio data is added on the mobile terminal, the speaker name of the audio data is matched against the name marks, and the newly added audio data is used to update the corresponding voice data in the target image, thereby realizing real-time updating of the voice data in the target image.
The mobile terminal 1100 of the embodiment of the present invention records, during image shooting, the voice data of each person speaking in the shooting preview picture, while also recording each person's speaking order and each person's biometric information; it establishes associations between each person's biometric information, the speaking order and the voice data, and after generating the target image, stores the target image in association with the voice data of each person. With the embodiment of the present invention, by establishing associations between each person's biometric information, the speaking order and the voice data, the voice data associated with a speaking order or with biometric information can be played while browsing the target image. This solves the problem that an image records only a still picture, through which the scene at shooting time cannot be experienced: the user can play back, through the image, the voice data of each person speaking at shooting time, thereby reproducing the scene at shooting time.
Those of ordinary skill in the art may realize that the units and algorithm steps of each example described in connection with the embodiments disclosed herein can be implemented with electronic hardware, or with a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a logical function division, and there may be other division manners in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Further, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place, or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are realized in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disk.
The above are only specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be easily conceived by those familiar with the art within the technical scope disclosed by the present invention shall be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (16)
1. An image pickup method applied to a mobile terminal with a camera, characterized in that the method includes:
during image shooting, recording the voice data of each person speaking in a shooting preview picture;
during voice data recording, recording each person's speaking order and each person's biometric information;
establishing associations between each person's biometric information, the speaking order and the voice data;
after the image shooting is completed, generating a target image;
storing the target image in association with the voice data of each person;
wherein the step of establishing associations between each person's biometric information, the speaking order and the voice data includes:
extracting the gender information from each person's biometric information;
according to each person's gender information, adding a gender mark to the voice data of each person;
establishing associations between each person's voice data, the speaking order and the gender mark;
wherein the gender marks include male and female.
2. The method according to claim 1, characterized in that after the step of storing the target image in association with the voice data of each person, the method further includes:
receiving a first voice play instruction input by the mobile terminal user for the target image;
extracting a target gender mark in the first voice play instruction;
according to the target gender mark, playing all voice data associated with the target gender mark.
3. The method according to claim 1, characterized in that the step of establishing associations between each person's biometric information, the speaking order and the voice data further includes:
extracting the age information from each person's biometric information;
according to each person's age information and preset age-bracket information, adding an age mark to the voice data of each person;
establishing an association between each person's voice data and the age mark;
wherein the age marks of people belonging to the same age bracket are identical.
4. The method according to claim 3, characterized in that after the step of storing the target image in association with the voice data of each person, the method further includes:
receiving a second voice play instruction input by the mobile terminal user for the target image;
extracting a target age mark in the second voice play instruction;
according to the target age mark, playing all voice data associated with the target age mark.
5. The method according to claim 1, characterized in that after the step of storing the target image in association with the voice data of each person, the method further includes:
receiving a third voice play instruction input by the mobile terminal user for the target image;
according to the recorded speaking order of each person, playing the voice data of each person in sequence.
6. The method according to claim 1, characterized in that after the step of storing the target image in association with the voice data of each person, the method further includes:
obtaining each person's name information in the target image;
according to each person's name information, adding a name mark for each person in the target image;
establishing an association between each person's voice data and the name mark.
7. The method according to claim 6, characterized in that after the step of establishing an association between each person's voice data and the name mark, the method further includes:
obtaining audio data newly added on the mobile terminal;
extracting the data source or voiceprint information of the audio data;
according to the data source or voiceprint information of the audio data, updating in real time each item of voice data stored in association with the target image.
8. The method according to claim 7, characterized in that the step of updating in real time, according to the data source or voiceprint information of the audio data, each item of voice data stored in association with the target image includes:
according to the data source or voiceprint information of the audio data, determining the speaker's name corresponding to the audio data;
according to the speaker's name corresponding to the audio data, determining the target name mark corresponding to the speaker's name;
replacing the voice data associated with the target name mark with the audio data.
9. A mobile terminal comprising a camera, wherein the mobile terminal further comprises:
A voice data recording module, configured to record, during image shooting, the voice data of each person speaking in the shooting preview picture;
A logging module, configured to record, during voice data recording, each person's speaking order and biological feature information;
A first association establishing module, configured to establish an association among each person's biological feature information, the speaking order, and the voice data;
A target image generating module, configured to generate a target image after the image shooting is completed;
A storage module, configured to store in association the target image generated by the target image generating module and the voice data of each person speaking recorded by the voice data recording module;
Wherein the first association establishing module comprises:
A gender information extracting submodule, configured to extract the gender information from each person's biological feature information;
A gender mark adding submodule, configured to add a gender mark to each person's voice data according to the gender information extracted by the gender information extracting submodule;
A first association establishing submodule, configured to establish an association among each person's voice data, the speaking order, and the gender mark added by the gender mark adding submodule;
Wherein the gender marks comprise male and female.
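The associations claim 9 describes (and the gender-mark playback of claim 10) can be sketched with a simple record type. The field names and list-based storage are illustrative assumptions, not the patent's data model.

```python
# Hypothetical sketch of the claim-9 association: each recorded clip is
# linked to the speaker's speaking order and gender mark.
from dataclasses import dataclass

@dataclass
class SpeakerRecord:
    speaking_order: int   # order in which the person spoke
    gender_mark: str      # "male" or "female" per claim 9
    voice_data: bytes     # the recorded speech clip

def build_associations(people):
    """people: list of (gender_mark, voice_data) pairs, in speaking order."""
    return [
        SpeakerRecord(speaking_order=i, gender_mark=g, voice_data=v)
        for i, (g, v) in enumerate(people, start=1)
    ]

def play_by_gender(records, target_gender):
    """Claim-10 style playback: all clips whose gender mark matches."""
    return [r.voice_data for r in records if r.gender_mark == target_gender]
```

Here `play_by_gender` stands in for the first playing module: given the target gender mark extracted from a voice play instruction, it selects every associated clip.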
10. The mobile terminal according to claim 9, wherein the mobile terminal further comprises:
A first play instruction receiving module, configured to receive a first voice play instruction that the mobile terminal user inputs for the target image;
A target gender mark extracting module, configured to extract the target gender mark from the first voice play instruction received by the first play instruction receiving module;
A first playing module, configured to play, according to the target gender mark extracted by the target gender mark extracting module, all voice data associated with the target gender mark.
11. The mobile terminal according to claim 9, wherein the first association establishing module comprises:
An age information extracting submodule, configured to extract the age information from each person's biological feature information;
An age mark adding submodule, configured to add an age mark to each person's voice data according to the age information extracted by the age information extracting submodule and preset age bracket information;
A second association establishing submodule, configured to establish an association between each person's voice data and the age mark added by the age mark adding submodule;
Wherein the age marks of persons belonging to the same age bracket are identical.
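Claim 11's age-mark assignment can be sketched as a lookup against preset brackets, so that everyone in the same bracket receives the same mark. The bracket boundaries and mark names below are illustrative assumptions; the patent only requires that such preset bracket information exist.

```python
# Hypothetical age brackets: (low, high, mark). Persons whose ages fall
# in the same bracket get identical marks, as claim 11 requires.
AGE_BRACKETS = [
    (0, 12, "child"),
    (13, 17, "teen"),
    (18, 59, "adult"),
    (60, 200, "senior"),
]

def age_mark(age):
    """Map an extracted age to its preset age-bracket mark."""
    for low, high, mark in AGE_BRACKETS:
        if low <= age <= high:
            return mark
    raise ValueError(f"age out of range: {age}")

def add_age_marks(voices):
    """voices: list of (age, voice_data) -> list of (age mark, voice_data)."""
    return [(age_mark(age), v) for age, v in voices]
```

The second playing module of claim 12 would then filter stored clips by a target age mark exactly as the gender case does.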
12. The mobile terminal according to claim 11, wherein the mobile terminal further comprises:
A second play instruction receiving module, configured to receive a second voice play instruction that the mobile terminal user inputs for the target image;
A target age mark extracting module, configured to extract the target age mark from the second voice play instruction received by the second play instruction receiving module;
A second playing module, configured to play, according to the target age mark extracted by the target age mark extracting module, all voice data associated with the target age mark.
13. The mobile terminal according to claim 9, wherein the mobile terminal further comprises:
A third play instruction receiving module, configured to receive a third voice play instruction that the mobile terminal user inputs for the target image;
A third playing module, configured to play, in sequence according to the recorded speaking order of each person, the voice data of each person speaking.
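The sequential playback of claim 13 reduces to sorting the stored clips by their recorded speaking order. A minimal sketch, assuming a dict-per-record layout that the patent itself does not specify:

```python
# Hypothetical sketch of claim-13 playback: replay clips in the order
# in which each person spoke during shooting.

def play_in_speaking_order(records):
    """records: list of dicts with 'speaking_order' and 'voice_data' keys.

    Returns the clips sorted by the recorded speaking order, i.e. the
    sequence in which the third playing module would play them back.
    """
    ordered = sorted(records, key=lambda r: r["speaking_order"])
    return [r["voice_data"] for r in ordered]
```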
14. The mobile terminal according to claim 9, further comprising:
A name information obtaining module, configured to obtain the name information of each person in the target image;
A name mark adding module, configured to add a name mark to each person in the target image according to the name information obtained by the name information obtaining module;
A second association establishing module, configured to establish an association between each person's voice data and the name mark added by the name mark adding module.
15. The mobile terminal according to claim 14, further comprising:
An audio data obtaining module, configured to obtain audio data newly added to the mobile terminal;
An audio data information extracting module, configured to extract the data source or voiceprint information of the audio data obtained by the audio data obtaining module;
A voice data updating module, configured to update in real time, according to the data source or voiceprint information extracted by the audio data information extracting module, each piece of voice data stored in association with the target image.
16. The mobile terminal according to claim 15, wherein the voice data updating module comprises:
A name determining submodule, configured to determine, according to the data source or voiceprint information of the audio data, the speaker name corresponding to the audio data;
A name mark determining submodule, configured to determine, according to the speaker name determined by the name determining submodule, the target name mark corresponding to that speaker name;
A voice data replacing submodule, configured to replace the voice data associated with the target name mark determined by the name mark determining submodule with the audio data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710052115.9A CN106791442B (en) | 2017-01-20 | 2017-01-20 | A kind of image pickup method and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106791442A CN106791442A (en) | 2017-05-31 |
CN106791442B true CN106791442B (en) | 2019-11-15 |
Family
ID=58942427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710052115.9A Active CN106791442B (en) | 2017-01-20 | 2017-01-20 | A kind of image pickup method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106791442B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107452409B (en) * | 2017-08-16 | 2024-04-26 | 柳州桂通科技股份有限公司 | Information recording method, apparatus, system, storage medium, and processor |
CN110740259B (en) * | 2019-10-21 | 2021-06-25 | 维沃移动通信有限公司 | Video processing method and electronic equipment |
CN112584225A (en) * | 2020-12-03 | 2021-03-30 | 维沃移动通信有限公司 | Video recording processing method, video playing control method and electronic equipment |
CN113506578A (en) * | 2021-06-30 | 2021-10-15 | 中汽创智科技有限公司 | Voice and image matching method and device, storage medium and equipment |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102289468A (en) * | 2011-07-22 | 2011-12-21 | 北京航空航天大学 | Method for acquiring and recording photo information in camera |
CN103595925A (en) * | 2013-11-15 | 2014-02-19 | 深圳市中兴移动通信有限公司 | Method and device for synthesizing video with photos |
CN104199876B (en) * | 2014-08-20 | 2018-03-02 | 广州三星通信技术研究有限公司 | The method and apparatus for associating melody and picture |
US20160209733A1 (en) * | 2015-01-19 | 2016-07-21 | Yoshiko Akai | Holder for mobile devices to take self-portraits or selfies remotely |
CN104916298B (en) * | 2015-05-28 | 2018-05-11 | 努比亚技术有限公司 | Decoding method, coding and decoding device, electronic equipment and talking picture generation method |
CN104951549A (en) * | 2015-06-24 | 2015-09-30 | 努比亚技术有限公司 | Mobile terminal and photo/video sort management method thereof |
CN104991950A (en) * | 2015-07-16 | 2015-10-21 | 百度在线网络技术(北京)有限公司 | Picture generating method, display method and corresponding devices |
CN105261054A (en) * | 2015-08-27 | 2016-01-20 | 努比亚技术有限公司 | Device and method for compositing audio GIF image |
CN105653154B (en) * | 2015-12-23 | 2020-02-28 | 广州三星通信技术研究有限公司 | Method and equipment for setting label for resource in terminal |
CN105677799A (en) * | 2015-12-31 | 2016-06-15 | 宇龙计算机通信科技(深圳)有限公司 | Picture retrieval method and system |
- 2017-01-20: application CN201710052115.9A filed in China (CN); granted as patent CN106791442B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN106791442A (en) | 2017-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106024009B (en) | Audio processing method and device | |
CN107172497B (en) | Live broadcasting method, apparatus and system | |
CN106791442B (en) | A kind of image pickup method and mobile terminal | |
CN104427047B (en) | Mobile terminal and its control method | |
CN104104768B (en) | The device and method of additional information are provided by using calling party telephone number | |
CN106575361B (en) | Method for providing visual sound image and electronic equipment for implementing the method | |
CN103685728B (en) | Mobile terminal and its control method | |
CN106776890A (en) | The method of adjustment and device of video playback progress | |
CN105302315A (en) | Image processing method and device | |
CN105447150B (en) | Method for playing music, device and terminal device based on face photograph album | |
CN106792004A (en) | Content item method for pushing, apparatus and system | |
CN110147467A (en) | A kind of generation method, device, mobile terminal and the storage medium of text description | |
WO2016197708A1 (en) | Recording method and terminal | |
CN110334352A (en) | Guidance information display methods, device, terminal and storage medium | |
CN109729372A (en) | Method for switching between, device, terminal, server and storage medium is broadcast live | |
CN111625682B (en) | Video generation method, device, computer equipment and storage medium | |
CN104850238A (en) | Method and device for sorting candidate items generated by input method | |
CN109168062A (en) | Methods of exhibiting, device, terminal device and the storage medium of video playing | |
CN110121083A (en) | The generation method and device of barrage | |
CN109413478A (en) | Video editing method, device, electronic equipment and storage medium | |
CN109819288A (en) | Determination method, apparatus, electronic equipment and the storage medium of advertisement dispensing video | |
CN110401801A (en) | Video generation method, device, electronic equipment and storage medium | |
CN107181849A (en) | The way of recording and device | |
CN109257649A (en) | A kind of multimedia file producting method and terminal device | |
CN113411516A (en) | Video processing method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||