CN103517020A - Data processing method and device and electronic equipment - Google Patents

Data processing method and device and electronic equipment

Info

Publication number
CN103517020A
Authority
CN
China
Prior art keywords
image
audio data
text information
corresponding relation
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210226733.8A
Other languages
Chinese (zh)
Inventor
雷钢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201210226733.8A priority Critical patent/CN103517020A/en
Publication of CN103517020A publication Critical patent/CN103517020A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a data processing method, a data processing device, and electronic equipment. The data processing method comprises the steps of acquiring a first image and audio data, establishing a corresponding relation between the first image and the information carried by the audio data, and saving the first image and the information carried by the audio data according to the corresponding relation. Because the audio data can record background information about how the image was acquired, that background information is saved together with the first image, so that when the user later browses the image the user can conveniently learn the background of the first image's acquisition, which improves the user experience.

Description

Data processing method and device, and electronic equipment
Technical field
The present invention relates to the field of multimedia processing, and in particular to a data processing method, a data processing device, and electronic equipment.
Background art
Conventionally, an electronic device needs to save an image after acquiring it. In general, the user saves the image under a default file name generated by the system, or under a user-defined file name. However, whether the file name is system-generated or user-defined, its length is limited, and a file name is usually not enough to make the background of the image acquisition clear. For example, after a user takes a photo with a mobile phone or a camera, the photo is saved under a system-default or user-defined file name; with the file-name length restricted, background information such as the time, place, and mood of the photo usually cannot all be included. When the user browses the photo again later, the user may no longer remember the background in which the photo was acquired, which results in a poor user experience.
Summary of the invention
In view of this, the present invention provides a data processing method, a data processing device, and electronic equipment, with the object of solving the problem that the file name used for saving an image is limited in length, so that the acquisition background of the image remains unclear and the user experience is poor.
To achieve this object, the embodiments of the present invention provide the following technical solutions:
A data processing method, comprising:
Acquiring a first image and audio data;
Establishing a corresponding relation between the first image and information carried by the audio data;
Saving, according to the corresponding relation, the first image and the information carried by the audio data.
Preferably, the acquiring of the first image comprises:
Detecting whether a preset first trigger condition is met;
When the first trigger condition is met, acquiring the first image.
Preferably, the first trigger condition comprises:
Receiving an audio data collection end instruction, or receiving an audio data saving instruction, or receiving a first image acquisition instruction, or receiving an audio data collection instruction.
Preferably, the acquiring of the audio data comprises:
Detecting whether a preset second trigger condition is met;
When the second trigger condition is met, acquiring the audio data.
Preferably, the second trigger condition comprises:
Receiving a first image acquisition end instruction, or receiving a first image saving instruction, or receiving an audio data collection instruction, or receiving a first image acquisition instruction.
Preferably, the establishing of the corresponding relation between the first image and the information carried by the audio data comprises:
Converting the audio data into text information;
Establishing a corresponding relation between the text information and the first image.
Preferably, the format of the text information is:
Plain text or an image.
Preferably, when the format of the text information is plain text, the saving, according to the corresponding relation, of the first image and the information carried by the audio data comprises:
Saving the first image;
Saving the plain text as descriptive information of the first image.
Preferably, when the format of the text information is an image, the saving, according to the corresponding relation, of the first image and the information carried by the audio data comprises:
Merging the text information and the first image into a second image;
Saving the second image.
Preferably, the merging of the text information and the first image into the second image comprises:
Determining, according to the text information, a position at which the text information is merged into the first image;
Merging the text information into the first image at that position to obtain the second image.
Preferably, the merging of the text information and the first image into the second image comprises:
Determining a format of the text information according to the first image;
Merging the text information into the first image in that format to obtain the second image.
Preferably, when the format of the text information is an image, the saving, according to the corresponding relation, of the first image and the information carried by the audio data comprises:
Combining the text information and the first image into a first multimedia file;
Saving the first multimedia file;
Wherein the first multimedia file has at least a first display mode and a second display mode, only the first image being shown in the first display mode and only the text information being shown in the second display mode; and the first display mode and the second display mode can be switched according to a detected predetermined operation.
Preferably, the saving, according to the corresponding relation, of the first image and the information carried by the audio data comprises:
Saving the audio data and the first image in the same folder.
Preferably, the saving, according to the corresponding relation, of the first image and the information carried by the audio data comprises:
Combining the audio data and the first image into a second multimedia file;
Saving the second multimedia file.
A data processing device, comprising:
An acquisition module, configured to acquire a first image and audio data;
A correspondence module, configured to establish a corresponding relation between the first image and information carried by the audio data;
A saving module, configured to save, according to the corresponding relation, the first image and the information carried by the audio data.
Preferably, the acquisition module comprises:
A detection unit, configured to detect whether a preset first trigger condition and a preset second trigger condition are met;
An acquisition unit, configured to acquire the first image when the first trigger condition is met, and to acquire the audio data when the second trigger condition is met.
Preferably, the correspondence module comprises:
A conversion unit, configured to convert the audio data into text information, the format of the text information comprising plain text or an image;
A correspondence unit, configured to establish a corresponding relation between the text information and the first image.
Preferably, when the format of the text information is plain text, the saving module comprises:
A first image saving unit, configured to save the first image;
A descriptive information creating unit, configured to save the plain text as descriptive information of the first image.
Preferably, when the format of the text information is an image, the saving module comprises:
A merging unit, configured to merge the text information and the first image into a second image;
A second image saving unit, configured to save the second image.
Preferably, the merging unit comprises:
A position determining subunit, configured to determine, according to the text information, a position at which the text information is merged into the first image;
A format determining subunit, configured to determine a format of the text information according to the first image;
A second image obtaining subunit, configured to merge the text information, in the determined format, into the first image at the determined position, to obtain the second image.
Preferably, when the format of the text information is an image, the saving module comprises:
A first multimedia file combining unit, configured to combine the text information and the first image into a first multimedia file;
A first multimedia file saving unit, configured to save the first multimedia file;
Wherein the first multimedia file has at least a first display mode and a second display mode, only the first image being shown in the first display mode and only the text information being shown in the second display mode; and the first display mode and the second display mode can be switched according to a detected predetermined operation.
Preferably, the saving module comprises:
A folder creating unit, configured to create a folder;
A corresponding saving unit, configured to save the audio data and the first image in the same folder.
Preferably, the saving module comprises:
A second multimedia combining unit, configured to combine the audio data and the first image into a second multimedia file;
A second multimedia saving unit, configured to save the second multimedia file.
An electronic device, comprising:
An image acquisition unit, configured to acquire a first image;
An audio data acquisition unit, configured to acquire audio data;
A processor, configured to establish a corresponding relation between the first image and information carried by the audio data, and to save, according to the corresponding relation, the first image and the information carried by the audio data.
With the data processing method and device provided by the embodiments of the present invention, the first image and the audio data input by the user are acquired, a corresponding relation is established between the information carried by the audio data and the first image, and the information carried by the audio data and the first image are saved correspondingly according to the corresponding relation. Because the audio data can record background information about how the image was acquired, the background information of the first image's acquisition is saved along with the first image, so that the user can learn the background of the first image's acquisition when browsing the image later, which improves the user experience.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is a flow chart of a data processing method disclosed in an embodiment of the present invention;
Fig. 2 is a flow chart of another data processing method disclosed in an embodiment of the present invention;
Fig. 3 is a flow chart of another data processing method disclosed in an embodiment of the present invention;
Fig. 4 is a flow chart of another data processing method disclosed in an embodiment of the present invention;
Fig. 5 is a flow chart of another data processing method disclosed in an embodiment of the present invention;
Fig. 6 is a flow chart of another data processing method disclosed in an embodiment of the present invention;
Fig. 7 is a flow chart of another data processing method disclosed in an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a data processing device disclosed in an embodiment of the present invention.
Detailed description of the embodiments
The present invention provides a data processing method, a data processing device, and electronic equipment. The inventive concept is to acquire an image and audio data, establish a corresponding relation between the image and the information carried by the audio data, and save the image and the information carried by the audio data according to the corresponding relation. Because the audio data can record background information about how the image was acquired, it can provide a detailed explanation of the acquisition background for the image; when the user browses the image, that background can be retrieved, which improves the user experience.
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
An embodiment of the present invention discloses a data processing method applied to an electronic device that has image and audio acquisition devices. As shown in Fig. 1, the method comprises:
S101: acquire a first image and audio data;
The first image may be acquired first and, with a signal indicating that the first image acquisition is complete used as the trigger condition, the audio data is then acquired; or the audio data may be acquired first and, with a signal indicating that the audio data acquisition is complete used as the trigger condition, the first image is then acquired; or the same trigger condition may be used to acquire the audio data and the image simultaneously.
S102: establish a corresponding relation between the first image and information carried by the audio data;
The audio data may contain speech data, attribute data, and the like. The information carried by the audio data refers to the data that the user actively recorded, in other words the information the user is interested in, rather than attribute data such as the data header structure.
A corresponding relation may be established directly between the first image and the audio data, or the audio data may first be converted into data of another form, for example text information, and a corresponding relation is then established between that data and the first image.
S103: save, according to the corresponding relation, the first image and the information carried by the audio data.
With the data processing method described in this embodiment, a corresponding relation is established between the image and the information carried by the audio data, and the image and that information are saved according to the corresponding relation. Because the audio data can record the acquisition background of the image in detail, such as the time of acquisition, the mood, and the purpose of the image, the user can obtain the corresponding background when viewing the image, which improves the user experience.
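For illustration only (the disclosure does not prescribe any programming language or file layout), the flow of S101-S103 might be sketched in Python roughly as follows; the file names, the text side-car used for the carried information, and the capture inputs are assumptions:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class CaptureRecord:
    """Pairs the first image with the information carried by its audio note."""
    image_path: Path
    audio_info: str            # e.g. transcribed text describing the acquisition background

def process_capture(image_bytes: bytes, audio_info: str, out_dir: Path) -> CaptureRecord:
    """S101-S103: acquire image and audio information, correspond them, save them together."""
    out_dir.mkdir(parents=True, exist_ok=True)
    image_path = out_dir / "photo_0001.jpg"          # assumed naming scheme
    image_path.write_bytes(image_bytes)              # save the first image
    (out_dir / "photo_0001.txt").write_text(audio_info, encoding="utf-8")
    return CaptureRecord(image_path, audio_info)     # the shared stem realises the correspondence
```

Any persistent pairing of the image with the carried information would realise the corresponding relation equally well.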
The above embodiment is discussed in further detail below.
A data processing method disclosed in an embodiment of the present invention is applied to an electronic device that has an audio data acquisition device and an image acquisition device. As shown in Fig. 2, the method comprises:
S201: detect whether a preset first trigger condition is met; the first trigger condition may be receiving a first image acquisition instruction;
S202: when the preset first trigger condition is met, that is, when the first image acquisition instruction is received, acquire the first image;
The first image may be acquired by image capture, for example by using the camera arrangement on the electronic device.
S203: detect whether a preset second trigger condition is met; the second trigger condition may be receiving a first image acquisition end instruction, or receiving a first image saving instruction;
The first image acquisition end instruction may be issued after the image acquisition is finished.
S204: when the preset second trigger condition is met, that is, when the first image acquisition end instruction or the first image saving instruction is received, acquire the audio data;
S205: establish a corresponding relation between the first image and information carried by the audio data;
S206: save, according to the corresponding relation, the first image and the information carried by the audio data.
The above describes the case in which the first image is acquired first and the audio data afterwards. Besides this case, a data processing method disclosed in an embodiment of the present invention may also, as shown in Fig. 3, comprise:
S301: detect whether a preset second trigger condition is met; here, the second trigger condition may be receiving an audio data collection instruction;
S302: when the preset second trigger condition is met, that is, when the audio data collection instruction is received, acquire the audio data;
The audio data may be collected by an audio collecting device, for example a microphone and recording equipment; the voice input by the user may be collected, and other sounds may be collected as well.
S303: detect whether a preset first trigger condition is met; the preset first trigger condition may be receiving an audio data collection end instruction, or receiving an audio data saving instruction;
S304: when the preset first trigger condition is met, that is, when the audio data collection end instruction or the audio data saving instruction is received, the audio data collection is complete and the first image is acquired;
S305: establish a corresponding relation between the first image and information carried by the audio data;
S306: save, according to the corresponding relation, the first image and the information carried by the audio data.
In addition to the above two cases, the first image and the audio data may also start to be acquired at the same time. Another data processing method disclosed in an embodiment of the present invention is applied to an electronic device and, as shown in Fig. 4, comprises:
S401: detect whether a preset first trigger condition is met; in this embodiment, the preset first trigger condition may preferably be receiving an audio data collection instruction;
S402: when the preset first trigger condition is met, acquire the first image and the audio data;
When the audio data collection instruction is received, the electronic device is about to start collecting audio data; acquiring the first image at this moment ensures that the first image and the audio data start to be acquired at the same time.
The above steps use the audio data collection instruction as the trigger condition for acquiring both the audio data and the first image. Similarly, the first image acquisition instruction may be used as the trigger condition: it is detected whether a preset second trigger condition is met, the second trigger condition may be receiving the first image acquisition instruction, and the audio data is acquired when the first image acquisition instruction is received, which likewise ensures that the audio data and the first image start to be acquired at the same time.
S403: establish a corresponding relation between the first image and information carried by the audio data;
S404: save, according to the corresponding relation, the first image and the information carried by the audio data.
Summarizing the above embodiments, the audio data and the first image may be acquired in three different orders: the first image data is acquired first and, triggered by an instruction indicating that the first image data acquisition is complete, the audio data acquisition starts; or the audio data is acquired first and, triggered by an instruction indicating that the audio data acquisition is complete, the first image data acquisition starts; or, triggered by the audio data collection instruction or the first image acquisition instruction, the first image data and the audio data are acquired at the same time. In all of these cases the first image and the audio data can be acquired continuously, without the user having to manually trigger the acquisition of the audio data (or the first image data) after the first image (or the audio data) has been acquired.
For example, in an existing process of taking a photo and recording audio, the user first takes the photo (or records the audio) and, after finishing, has to exit the current photographing program (or recording program) and enter the recording program (or photographing program) before recording (or photographing) can begin. In the embodiments of the present invention, after the photo is taken (or the audio recorded), the electronic device or system can, according to the preset first trigger condition (or the preset second trigger condition), directly enter the subsequent audio recording (or photographing) process without the user exiting the current program and starting a new one.
Therefore, the data processing method described in the embodiments of the present invention has the advantages of reducing the user's manual operations and improving the efficiency of acquiring images or audio data.
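As a sketch of how such trigger-chained acquisition might look in code (the `capture_image` and `record_audio` device hooks are hypothetical placeholders, not part of the disclosure):

```python
from typing import Callable, Tuple

class CaptureController:
    """Chains image capture and audio capture on preset trigger conditions, so the user
    does not have to leave one program and manually start the other."""

    def __init__(self, capture_image: Callable[[], bytes], record_audio: Callable[[], bytes]):
        self._capture_image = capture_image      # hypothetical camera hook
        self._record_audio = record_audio        # hypothetical microphone hook

    def photo_first(self) -> Tuple[bytes, bytes]:
        image = self._capture_image()
        # The "image acquisition end" event acts as the preset trigger for recording:
        audio = self._record_audio()
        return image, audio

    def audio_first(self) -> Tuple[bytes, bytes]:
        audio = self._record_audio()
        # The "audio collection end" event acts as the preset trigger for photographing:
        image = self._capture_image()
        return image, audio
```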
It should be noted that the process of acquiring an image and audio data in the data processing method described in this embodiment is not the same as the process of capturing video in the prior art. First, an image is different from a video: an image is limited to a single frame, whereas a video is composed of multiple frames. Second, in the embodiments of the present invention the corresponding relation between the image and the audio data is established on the basis of the substantive content they reflect, whereas in video recording the correspondence between each frame and the audio is established on the basis of time. Third, in the embodiments of the present invention an image can correspond to audio data of arbitrary length, whereas in existing video recording the acquired sequence of image frames and the audio data have the same length.
Another data processing method disclosed in an embodiment of the present invention is applied to an electronic device that has image and audio acquisition devices. As shown in Fig. 5, the method comprises:
S501: acquire a first image and audio data; the specific acquisition methods are as described in the above embodiments and are not repeated here;
S502: convert the audio data into text information; in this embodiment the format of the text information is preferably, but not limited to, an image;
That is, speech recognition is performed on the audio data, and the recognized text is then converted into an image format, for example the .jpg format; the image in this format contains the words recognized from the speech.
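A minimal sketch of the speech-recognition part of S502, assuming the third-party SpeechRecognition package as the recognition engine (the disclosure does not name a particular engine):

```python
import speech_recognition as sr   # third-party SpeechRecognition package (assumed engine)

def audio_to_text(wav_path: str) -> str:
    """S502: transcribe the recorded audio note into text information."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    # Any recogniser could be substituted; Google's free web API is used here only as an example.
    return recognizer.recognize_google(audio, language="zh-CN")
```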
S503: establish a corresponding relation between the text information and the first image;
S504: merge, according to the corresponding relation, the text information and the first image into a second image;
In the actual merging, the position at which the text information is merged into the first image may be determined according to the text information, that is, according to the amount of content in the text information. For example, if the text information contains more content, it may be merged into the top or bottom of the first image; if it contains less content, it may be merged into the lower-left or lower-right corner of the first image, so as to achieve a harmonious and attractive picture.
In addition, the format of the text information may be determined according to the first image, for example its color may be determined according to the background color of the first image: if the background of the first image is white, the color of the text information may be set to black, and the text information is merged into the first image in that format so that it is easy to read.
The second image obtained with this merging method contains both the content of the text information and the first image, so the text information and the first image can be shown to the user at the same time; when browsing images the user conveniently sees the text information related to the image. Moreover, compared with audio data, text information is more intuitive, so the user can understand the acquisition background of the image intuitively and quickly.
S505: save the second image.
The data processing method described in this embodiment enables the user to understand the acquisition background of an image intuitively and quickly, which improves the user experience.
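The position-by-content-length and colour-by-background logic of S504-S505 could be sketched, for example, with the Pillow imaging library; the thresholds, coordinates, and file paths below are illustrative assumptions rather than part of the disclosure:

```python
from PIL import Image, ImageDraw, ImageStat

def merge_text_into_image(first_image: str, text: str, second_image: str) -> None:
    """S504-S505: render the text information onto the first image and save the second image."""
    img = Image.open(first_image).convert("RGB")
    draw = ImageDraw.Draw(img)

    # Position chosen from the amount of text: longer notes run along the bottom,
    # shorter notes sit in the lower-right corner (illustrative threshold and offsets).
    if len(text) > 40:
        xy = (10, img.height - 40)
    else:
        xy = (max(10, img.width - 8 * len(text) - 20), img.height - 30)

    # Text colour chosen from the image: dark text on a light picture and vice versa.
    brightness = ImageStat.Stat(img.convert("L")).mean[0]
    colour = (0, 0, 0) if brightness > 128 else (255, 255, 255)

    draw.text(xy, text, fill=colour)    # Pillow's default font keeps the sketch self-contained
    img.save(second_image)
```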
Besides merging the text information into the first image in image format, the first image and the text information may also be saved by the method described in the following embodiment. Another data processing method disclosed in an embodiment of the present invention is applied to an electronic device that has image and audio acquisition devices. As shown in Fig. 6, the method comprises:
S601: acquire a first image and audio data; the specific acquisition methods are as described in the above embodiments and are not repeated here;
S602: convert the audio data into text information; in this embodiment the format of the text information is preferably, but not limited to, an image;
S603: establish a corresponding relation between the text information and the first image;
S604: combine the text information and the first image into a first multimedia file;
Wherein the first multimedia file has at least a first display mode and a second display mode, only the first image being shown in the first display mode and only the text information being shown in the second display mode; and the first display mode and the second display mode can be switched according to a detected predetermined operation.
For example, the first multimedia file is an electronic photo: the first display mode is the front of the photo, whose displayed content is the first image, and the second display mode is the back of the photo, whose displayed content is the text information; the user can switch between the front and the back with a preset shortcut.
S605: save the first multimedia file.
In the data processing method described in this embodiment, the text information converted from the audio data and the first image are combined into a first multimedia file. Such a file is convenient for the user to interact with, fits the user's habits, and meets the user's expectations during use, thereby improving the user experience.
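A minimal sketch of such a two-sided first multimedia file, with the shortcut-driven switching reduced to a toggle method (the in-memory representation is an assumption):

```python
from dataclasses import dataclass

@dataclass
class ElectronicPhoto:
    """First multimedia file: the first image on the 'front', the text information on the 'back'."""
    image_path: str
    text_info: str
    showing_front: bool = True       # first display mode by default

    def toggle(self) -> None:
        """Switch between the first and second display mode, e.g. bound to a preset shortcut."""
        self.showing_front = not self.showing_front

    def displayed_content(self) -> str:
        # Only one side is presented at a time, as required by the embodiment.
        return self.image_path if self.showing_front else self.text_info
```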
Further, besides the image format, the text information converted from the audio data may also be plain text. When the format of the text information is plain text, the saving, according to the corresponding relation, of the first image and the information carried by the audio data may comprise:
Saving the first image;
Saving the plain text as descriptive information of the first image, which simplifies the processing flow and preserves the acquisition background of the image with a relatively concise procedure.
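One possible realisation of saving the plain text as descriptive information of the image is to write it into the JPEG's EXIF ImageDescription field, sketched here with the piexif package (an assumed choice, not specified by the disclosure):

```python
import piexif   # assumed choice of library for writing JPEG metadata

def save_text_as_description(jpeg_path: str, text: str) -> None:
    """Store the recognised words as the image's descriptive information (EXIF ImageDescription)."""
    exif_dict = piexif.load(jpeg_path)
    exif_dict["0th"][piexif.ImageIFD.ImageDescription] = text.encode("utf-8")
    piexif.insert(piexif.dump(exif_dict), jpeg_path)
```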
Besides converting the audio data into text information, a corresponding relation may also be established directly between the audio data and the first image. Another data processing method disclosed in an embodiment of the present invention is applied to an electronic device that has image and audio acquisition devices. As shown in Fig. 7, the method comprises:
S701: acquire a first image and audio data; the specific acquisition methods are as described in the above embodiments and are not repeated here;
S702: establish a corresponding relation between the audio data and the first image;
S703: save the audio data and the first image in the same folder.
In the data processing method of the present invention, the audio data, serving as the background information of the image acquisition, is saved in the same folder as the first image, which is convenient for the user to view, and the user can choose whether to view it.
Further, after S702, the audio data and the first image may also be combined into a second multimedia file, and the second multimedia file is saved. The second multimedia file may be a media file that can play the audio while the first image is being displayed.
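A minimal sketch of S703, keeping the pair together in one folder under a shared file stem (the naming scheme is an assumption); the synthesis of the second multimedia file is not shown:

```python
import shutil
from pathlib import Path

def save_pair_in_folder(image_src: str, audio_src: str, folder: str) -> None:
    """S703: keep the first image and its audio note together in one folder,
    using a shared stem so the pairing survives later browsing."""
    target = Path(folder)
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy(image_src, target / "capture_0001.jpg")   # illustrative shared naming
    shutil.copy(audio_src, target / "capture_0001.wav")
```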
Corresponding to the above method embodiments, an embodiment of the present invention also discloses a data processing device which, as shown in Fig. 8, comprises:
An acquisition module 801, configured to acquire a first image and audio data;
A correspondence module 802, configured to establish a corresponding relation between the first image and information carried by the audio data;
A saving module 803, configured to save, according to the corresponding relation, the first image and the information carried by the audio data.
The data processing device described in the embodiment of the present invention comprises a correspondence module and a saving module, and can therefore establish a corresponding relation between the image and the information carried by the audio data and save the image and that information according to the corresponding relation. Because the audio data can record the acquisition background of the image in detail, such as the time of acquisition, the mood, and the purpose of the image, the user can obtain the corresponding background when viewing the image, which improves the user experience.
Further, the acquisition module comprises:
A detection unit, configured to detect whether a preset first trigger condition and a preset second trigger condition are met;
An acquisition unit, configured to acquire the first image when the first trigger condition is met, and to acquire the audio data when the second trigger condition is met.
Further, the correspondence module comprises:
A conversion unit, configured to convert the audio data into text information, the format of the text information comprising plain text or an image;
A correspondence unit, configured to establish a corresponding relation between the text information and the first image.
Further, when the format of the text information is plain text, the saving module comprises:
A first image saving unit, configured to save the first image;
A descriptive information creating unit, configured to save the plain text as descriptive information of the first image.
When the format of the text information is an image, the saving module comprises:
A merging unit, configured to merge the text information and the first image into a second image;
Further, the merging unit may specifically comprise:
A position determining subunit, configured to determine, according to the text information, a position at which the text information is merged into the first image;
A format determining subunit, configured to determine a format of the text information according to the first image;
A second image obtaining subunit, configured to merge the text information, in the determined format, into the first image at the determined position, to obtain the second image.
A second image saving unit, configured to save the second image.
When the format of the text information is an image, the saving module may also comprise:
A first multimedia file combining unit, configured to combine the text information and the first image into a first multimedia file;
A first multimedia file saving unit, configured to save the first multimedia file;
Wherein the first multimedia file has at least a first display mode and a second display mode, only the first image being shown in the first display mode and only the text information being shown in the second display mode; and the first display mode and the second display mode can be switched according to a detected predetermined operation.
The above saving modules may be provided separately or may be integrated.
Further, the saving module may also comprise:
A folder creating unit, configured to create a folder;
A corresponding saving unit, configured to save the audio data and the first image in the same folder.
Alternatively, the saving module comprises:
A second multimedia combining unit, configured to combine the audio data and the first image into a second multimedia file;
A second multimedia saving unit, configured to save the second multimedia file.
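Purely as an illustration of the module decomposition above (the camera, microphone, and transcription hooks are hypothetical placeholders, not part of the disclosure):

```python
class AcquisitionModule:
    """Acquires the first image and the audio data when the preset trigger conditions are met."""
    def acquire(self, camera, microphone):
        return camera.capture(), microphone.record()   # hypothetical device hooks

class CorrespondenceModule:
    """Converts the audio data into text information and ties it to the first image."""
    def correspond(self, image: bytes, audio: bytes, transcribe) -> tuple:
        return image, transcribe(audio)

class SaveModule:
    """Saves the first image together with the information carried by the audio data."""
    def save(self, image: bytes, info: str, stem: str) -> None:
        with open(stem + ".jpg", "wb") as f:
            f.write(image)
        with open(stem + ".txt", "w", encoding="utf-8") as f:
            f.write(info)
```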
An embodiment of the present invention also discloses an electronic device, comprising:
An image acquisition unit, configured to acquire a first image;
An audio data acquisition unit, configured to acquire audio data;
A processor, configured to establish a corresponding relation between the first image and information carried by the audio data, and to save, according to the corresponding relation, the first image and the information carried by the audio data.
The electronic device can be used to acquire an image and audio data and, through the established corresponding relation between the image and the audio data, save the image and the background information carried in the audio data correspondingly, so that the user can learn the background information when browsing the image, which improves the user experience.
If the functions described in the methods of the embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the part of the embodiments of the present invention that contributes to the prior art, or the technical solution itself, may be embodied in the form of a software product. The software product is stored in a storage medium and comprises instructions that cause a computing device (which may be a personal computer, a server, a mobile computing device, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The storage medium includes media that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to one another.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (24)

1. A data processing method, characterized by comprising:
Acquiring a first image and audio data;
Establishing a corresponding relation between the first image and information carried by the audio data;
Saving, according to the corresponding relation, the first image and the information carried by the audio data.
2. The method according to claim 1, characterized in that the acquiring of the first image comprises:
Detecting whether a preset first trigger condition is met;
When the first trigger condition is met, acquiring the first image.
3. The method according to claim 2, characterized in that the first trigger condition comprises:
Receiving an audio data collection end instruction, or receiving an audio data saving instruction, or receiving a first image acquisition instruction, or receiving an audio data collection instruction.
4. The method according to claim 1, characterized in that the acquiring of the audio data comprises:
Detecting whether a preset second trigger condition is met;
When the second trigger condition is met, acquiring the audio data.
5. The method according to claim 4, characterized in that the second trigger condition comprises:
Receiving a first image acquisition end instruction, or receiving a first image saving instruction, or receiving an audio data collection instruction, or receiving a first image acquisition instruction.
6. The method according to claim 1, characterized in that the establishing of the corresponding relation between the first image and the information carried by the audio data comprises:
Converting the audio data into text information;
Establishing a corresponding relation between the text information and the first image.
7. The method according to claim 6, characterized in that the format of the text information is:
Plain text or an image.
8. The method according to claim 7, characterized in that, when the format of the text information is plain text, the saving, according to the corresponding relation, of the first image and the information carried by the audio data comprises:
Saving the first image;
Saving the plain text as descriptive information of the first image.
9. The method according to claim 7, characterized in that, when the format of the text information is an image, the saving, according to the corresponding relation, of the first image and the information carried by the audio data comprises:
Merging the text information and the first image into a second image;
Saving the second image.
10. The method according to claim 9, characterized in that the merging of the text information and the first image into the second image comprises:
Determining, according to the text information, a position at which the text information is merged into the first image;
Merging the text information into the first image at that position to obtain the second image.
11. The method according to claim 9, characterized in that the merging of the text information and the first image into the second image comprises:
Determining a format of the text information according to the first image;
Merging the text information into the first image in that format to obtain the second image.
12. The method according to claim 7, characterized in that, when the format of the text information is an image, the saving, according to the corresponding relation, of the first image and the information carried by the audio data comprises:
Combining the text information and the first image into a first multimedia file;
Saving the first multimedia file;
Wherein the first multimedia file has at least a first display mode and a second display mode, only the first image being shown in the first display mode and only the text information being shown in the second display mode; and the first display mode and the second display mode can be switched according to a detected predetermined operation.
13. The method according to claim 1, characterized in that the saving, according to the corresponding relation, of the first image and the information carried by the audio data comprises:
Saving the audio data and the first image in the same folder.
14. The method according to claim 1, characterized in that the saving, according to the corresponding relation, of the first image and the information carried by the audio data comprises:
Combining the audio data and the first image into a second multimedia file;
Saving the second multimedia file.
15. A data processing device, characterized by comprising:
An acquisition module, configured to acquire a first image and audio data;
A correspondence module, configured to establish a corresponding relation between the first image and information carried by the audio data;
A saving module, configured to save, according to the corresponding relation, the first image and the information carried by the audio data.
16. The device according to claim 15, characterized in that the acquisition module comprises:
A detection unit, configured to detect whether a preset first trigger condition and a preset second trigger condition are met;
An acquisition unit, configured to acquire the first image when the first trigger condition is met, and to acquire the audio data when the second trigger condition is met.
17. The device according to claim 15, characterized in that the correspondence module comprises:
A conversion unit, configured to convert the audio data into text information, the format of the text information comprising plain text or an image;
A correspondence unit, configured to establish a corresponding relation between the text information and the first image.
18. The device according to claim 17, characterized in that, when the format of the text information is plain text, the saving module comprises:
A first image saving unit, configured to save the first image;
A descriptive information creating unit, configured to save the plain text as descriptive information of the first image.
19. The device according to claim 17, characterized in that, when the format of the text information is an image, the saving module comprises:
A merging unit, configured to merge the text information and the first image into a second image;
A second image saving unit, configured to save the second image.
20. The device according to claim 19, characterized in that the merging unit comprises:
A position determining subunit, configured to determine, according to the text information, a position at which the text information is merged into the first image;
A format determining subunit, configured to determine a format of the text information according to the first image;
A second image obtaining subunit, configured to merge the text information, in the determined format, into the first image at the determined position, to obtain the second image.
21. The device according to claim 17, characterized in that, when the format of the text information is an image, the saving module comprises:
A first multimedia file combining unit, configured to combine the text information and the first image into a first multimedia file;
A first multimedia file saving unit, configured to save the first multimedia file;
Wherein the first multimedia file has at least a first display mode and a second display mode, only the first image being shown in the first display mode and only the text information being shown in the second display mode; and the first display mode and the second display mode can be switched according to a detected predetermined operation.
22. The device according to claim 15, characterized in that the saving module comprises:
A folder creating unit, configured to create a folder;
A corresponding saving unit, configured to save the audio data and the first image in the same folder.
23. The device according to claim 15, characterized in that the saving module comprises:
A second multimedia combining unit, configured to combine the audio data and the first image into a second multimedia file;
A second multimedia saving unit, configured to save the second multimedia file.
24. An electronic device, characterized by comprising:
An image acquisition unit, configured to acquire a first image;
An audio data acquisition unit, configured to acquire audio data;
A processor, configured to establish a corresponding relation between the first image and information carried by the audio data, and to save, according to the corresponding relation, the first image and the information carried by the audio data.
CN201210226733.8A 2012-06-29 2012-06-29 Data processing method and device and electronic equipment Pending CN103517020A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210226733.8A CN103517020A (en) 2012-06-29 2012-06-29 Data processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN103517020A true CN103517020A (en) 2014-01-15

Family

ID=49898950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210226733.8A Pending CN103517020A (en) 2012-06-29 2012-06-29 Data processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN103517020A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580888A (en) * 2014-12-17 2015-04-29 广东欧珀移动通信有限公司 Picture processing method and terminal
CN106033421A (en) * 2015-03-10 2016-10-19 中兴通讯股份有限公司 A file output method and a terminal
CN107005629A (en) * 2014-12-02 2017-08-01 索尼公司 Information processor, information processing method and program
CN112087653A (en) * 2020-09-18 2020-12-15 北京搜狗科技发展有限公司 Data processing method and device and electronic equipment


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1477590A (en) * 2002-06-19 2004-02-25 微软公司 System and method for white writing board and voice frequency catching
CN101443730A (en) * 2006-05-11 2009-05-27 松下电器产业株式会社 Display object layout changing device
CN101527772A (en) * 2008-03-07 2009-09-09 鸿富锦精密工业(深圳)有限公司 Digital camera and information recording method
CN101895694A (en) * 2010-07-23 2010-11-24 中兴通讯股份有限公司 Subtitle superimposition method and device
CN102495518A (en) * 2011-12-12 2012-06-13 高原 Camera with audio recording function



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140115