CN103838723A - Data association method and electronic device - Google Patents

Data association method and electronic device

Info

Publication number
CN103838723A
CN103838723A
Authority
CN
China
Prior art keywords
data
input
input data
user
speech
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210473371.2A
Other languages
Chinese (zh)
Other versions
CN103838723B (en)
Inventor
赵谦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201210473371.2A priority Critical patent/CN103838723B/en
Publication of CN103838723A publication Critical patent/CN103838723A/en
Application granted granted Critical
Publication of CN103838723B publication Critical patent/CN103838723B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a data association method and an electronic device. The data association method is applied to an electronic device that comprises an audio acquisition unit and a touch display unit, and comprises the following steps: starting the audio acquisition unit to acquire audio data according to a first operation; acquiring a user input through the touch display unit according to a second operation and recognizing the user input to obtain input data; acquiring time information of the user input; and storing the audio data and the input data in association with each other, with the time information serving as an association attribute.

Description

Data association method and electronic device
Technical field
The present invention relates to the field of electronic devices, and more specifically to a data association method and an electronic device.
Background art
At present, various electronic devices, such as tablet computers, provide both audio recording and handwriting input functions. For example, in class a user can use a tablet computer to record the lecture and also to take notes. However, a class lasts 45 minutes, so the recording is also 45 minutes long, whereas notes are written down only when the user hears a key point. When reviewing after class, it is therefore difficult to find the part of the recording that corresponds to a given note.
It is therefore desirable to provide a data association method and an electronic device that can associate recorded content with input content, thereby making both kinds of content more convenient to use.
Summary of the invention
An embodiment of the present invention provides a data association method applied to an electronic device comprising an audio acquisition unit and a touch display unit, the method comprising:
according to a first operation, starting the audio acquisition unit to acquire audio data;
according to a second operation, acquiring a user input through the touch display unit and recognizing the user input to obtain input data;
acquiring time information of the user input; and
storing the audio data and the input data in association with each other, with the time information serving as an association attribute.
Preferably, storing the audio data and the input data in association with each other, with the time information serving as an association attribute, comprises:
storing each item of the input data in association with the corresponding position in the audio data, based on the time information of the user input.
Preferably, when the audio data is played back, the audio data at the position corresponding to a selected item of the input data is output through a loudspeaker of the electronic device according to the selection of that item.
Preferably, when the audio data is played back, the item of the input data corresponding to the current playback position is displayed on the touch display unit according to the playback position of the audio data.
Preferably, when the input data is displayed on the touch display unit, the corresponding item of the input data is highlighted according to the playback position of the audio data.
Preferably, both the input data and the audio data are stored in a storage unit of the electronic device.
Preferably, the input data is stored in a storage unit of the electronic device, and the audio data is stored in a remote storage unit.
Preferably, the input data and the audio data are encrypted when they are stored.
Preferably, the input data comprises image data and/or text data, the image data being the content originally input by the user, and the text data being obtained by recognizing the content originally input by the user.
According to another embodiment of the present invention, an electronic device is provided, comprising:
an audio acquisition unit configured to acquire audio data when started according to a first operation;
a touch display unit configured, when started according to a second operation, to acquire a user input and recognize the user input to obtain input data;
a time information acquiring unit configured to acquire time information of the user input;
a storage unit configured to store various kinds of data; and
a control unit configured to store the audio data and the input data in the storage unit in association with each other, with the time information serving as an association attribute.
Preferably, the control unit is further configured to store each item of the input data in association with the corresponding position in the audio data, based on the time information of the user input.
Preferably, the control unit is further configured to, when the audio data is played back, output the audio data at the position corresponding to a selected item of the input data through a loudspeaker of the electronic device according to the selection of that item.
Preferably, the control unit is further configured to, when the audio data is played back, display the item of the input data corresponding to the current playback position on the touch display unit according to the playback position of the audio data.
Preferably, the touch display unit is further configured to, when displaying the input data, highlight the corresponding item of the input data according to the playback position of the audio data.
Preferably, both the input data and the audio data are stored in the storage unit of the electronic device.
Preferably, the input data is stored in the storage unit of the electronic device, and the audio data is stored in a remote storage unit.
Preferably, the input data and the audio data are encrypted when they are stored.
Preferably, the input data comprises image data and/or text data, the image data being the content originally input by the user, and the text data being obtained by recognizing the content originally input by the user.
Therefore, according to the data association method and the electronic device of the present invention, recorded content and input content can be associated with each other, making both kinds of content more convenient to use.
Brief description of the drawings
Fig. 1 is a functional block diagram of an electronic device according to a first embodiment of the present invention;
Fig. 2 is a diagram illustrating the operation of the electronic device according to the first embodiment of the present invention;
Fig. 3 is another diagram illustrating the operation of the electronic device according to the first embodiment of the present invention;
Fig. 4 is a flowchart of a data association method according to a second embodiment of the present invention.
Detailed description of the embodiments
Preferred embodiments are described in detail below with reference to the accompanying drawings.
<First embodiment>
The first embodiment of the present invention is described below taking an electronic device having an audio acquisition unit and a touch display unit as an example. Such electronic devices include, for example, tablet computers, desktop computers and personal digital assistants equipped with a microphone and a touch screen. The present embodiment is described in detail using a tablet computer as an example.
Fig. 1 is a functional block diagram of an electronic device 100 according to the first embodiment of the present invention. As shown in Fig. 1, the electronic device 100 includes an audio acquisition unit 101, a touch display unit 102, a time information acquiring unit 103, a storage unit 104 and a control unit 105.
The audio acquisition unit 101 may be configured to acquire audio data when started according to a first operation. Taking classroom learning as an example, when the teacher starts the lesson the user can start the audio acquisition unit 101 to begin acquiring audio data, that is, to record the teacher's voice. Such an operation can be realized through, for example, a voice input application on the tablet computer. The user can start the application by clicking, touching, mouse input, keyboard input, voice input, and so on.
The touch display unit 102 may be configured, when started according to a second operation, to acquire a user input and recognize the user input to obtain input data. Specifically, when the user hears the teacher explain a key point in class, the key point is usually written down in a paper notebook or textbook, which may be inconvenient for later review. The tablet computer can instead be used to record it. Moreover, typing a key point on the tablet computer is often faster than writing it on paper, which also helps the user capture the content to be recorded. Therefore, when the user wants to record a key point, the user can start a corresponding note-taking application (such as Word, a sketchpad, etc.) and begin entering content. The user can start the application and perform the input by, for example, stylus input, keyboard input or voice input.
The time information acquiring unit 103 may be configured to acquire time information of the user input. For example, the time information acquiring unit 103 may use the system time of the tablet computer as a reference and detect the time at which the user performs the input operation as the time information of the user input. That is, the system time serves as absolute time information of the user input.
In another embodiment, the time at which the voice input application is started may instead be used as the reference, and the time at which the user performs the input operation is detected as the time information of the user input. That is, the time elapsed since the voice input application was started serves as relative time information of the user input.
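As an illustrative sketch only (not part of the patent text; the class and method names are assumptions), the two timing modes described above could be captured as follows:

```python
import time


class TimeInfoAcquirer:
    """Hypothetical sketch of the time information acquiring unit 103.

    Supports the two modes described above: absolute system time, or
    time relative to the moment the voice input application was started.
    """

    def __init__(self, mode: str = "absolute"):
        self.mode = mode             # "absolute" or "relative"
        self.recording_start = None  # set when recording begins

    def mark_recording_start(self) -> None:
        # Called when the voice input application is started.
        self.recording_start = time.time()

    def timestamp(self) -> float:
        now = time.time()
        if self.mode == "relative" and self.recording_start is not None:
            # Seconds elapsed since recording started (relative time information).
            return now - self.recording_start
        # System time of the device (absolute time information).
        return now
```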
The storage unit 104 is configured to store various kinds of data. The storage unit 104 may be, for example, a hard disk drive (HDD), an optical disc (such as a CD/DVD), a magnetic disk (such as a floppy disk), a magneto-optical disk or a semiconductor memory.
The control unit 105 is configured to store the audio data and the input data in the storage unit in association with each other, with the time information serving as an association attribute. The control unit may be realized by, for example, a CPU (central processing unit) and may perform overall control of the whole electronic device.
In one embodiment, the control unit 105 may store each item of the input data in association with the corresponding position in the audio data, based on the time information of the user input. Specifically, if the user starts an input operation on the touch display unit while audio acquisition is in progress, the control unit determines, from the time information obtained by the time information acquiring unit, the position in the audio data that corresponds to the time at which the input operation was performed on the touch display unit. In other words, a time tag corresponding to the time of the input operation is attached to the audio data.
The operation of this embodiment of the present invention is described below with reference to Fig. 2. Fig. 2 is a diagram illustrating the operation of the electronic device according to the first embodiment of the present invention. Assume that in the first embodiment the system time is used as the reference time. Assume that the class starts at 9:00, that is, voice input starts at 9:00. Further assume that at 9:10 a.m. the user starts the text input application and enters the content "Key point 1 ...". Then, at 9:15, the user enters the content "Key point 2 ...".
In this case, based on the system time information, the control unit associates input data 1 (i.e., key point 1) with the audio data at 9:10 and input data 2 (i.e., key point 2) with the audio data at 9:15, and stores them in the storage unit 104.
Fig. 3 is another diagram illustrating the operation of the electronic device according to the first embodiment of the present invention. Here the start time of the voice input application is used as the reference time. Again assume that the class starts at 9:00; the application is started at that moment, so the voice input start time is set to 0. Further assume that the first item is entered at 9:10 with the content "Key point 1 ...", i.e., at time 0:10, and that at 9:15 the content "Key point 2 ..." is entered, i.e., at time 0:15.
In this case, based on the relative time information, the control unit associates input data 1 (i.e., key point 1) with the audio data at the 10-minute mark and input data 2 (i.e., key point 2) with the audio data at the 15-minute mark, and stores them in the storage unit 104. For simplicity, only two items of input data are described here, but one or more items of input data may be recorded as needed.
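A minimal sketch of the association built in the two examples above (the data structure and names are assumptions, not the patent's implementation):

```python
import bisect


class NoteAudioIndex:
    """Hypothetical sketch of the association kept by the control unit 105.

    Each item of input data (a note) is stored together with the offset,
    in seconds, of the corresponding position in the audio data.
    """

    def __init__(self):
        self.entries = []  # sorted list of (offset_seconds, note_text)

    def add_note(self, offset_seconds: float, note_text: str) -> None:
        bisect.insort(self.entries, (offset_seconds, note_text))


# Worked example from Fig. 2 / Fig. 3: class starts at 9:00,
# "Key point 1" is entered at 9:10 and "Key point 2" at 9:15.
index = NoteAudioIndex()
index.add_note(10 * 60, "Key point 1 ...")  # 10 minutes into the recording
index.add_note(15 * 60, "Key point 2 ...")  # 15 minutes into the recording
```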
Note that the audio data and the input data may be stored as two separate files, or may be saved as a single file in the form of a stream file.
Subsequently, when the user wants to review the lesson, the tablet computer makes this convenient.
Specifically, when the user plays back the audio data recorded in class, the audio data at the position corresponding to a selected item of the input data can be output through the loudspeaker of the electronic device according to the selection of that item. For example, during playback of the classroom audio data, if the user selects key point 1, the audio data at 9:10 is output through the loudspeaker of the tablet computer; if the user selects key point 2, the audio data at 9:15 is output.
In this way, instead of the user having to fast-forward or drag through the recording to find the content corresponding to a note, associating the input data with the audio data on the basis of the time information makes it easy to find the audio data corresponding to the note content, so that the input information and the audio data can be compared conveniently and the user experience is improved.
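A sketch of this note-to-audio lookup, assuming a sorted list of (offset, note) pairs as built above and a hypothetical player object with seek/play methods:

```python
def play_from_note(entries, note_text, player):
    """entries: sorted list of (offset_seconds, note_text) pairs.

    Seek the audio player to the position whose time tag was stored
    for the selected note, then output it through the loudspeaker.
    """
    for offset_seconds, text in entries:
        if text == note_text:
            player.seek(offset_seconds)  # jump to the associated position
            player.play()
            return
    raise ValueError("note not found in the association index")
```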
Alternatively, when the user plays back the audio data recorded in class, the item of the input data corresponding to the current playback position can be displayed on the touch display unit according to the playback position of the audio data. For example, during playback of the classroom audio data, when playback reaches 9:10 the content of key point 1 can be displayed automatically on the touch display unit, and when playback reaches 9:15 the content of key point 2 can be displayed automatically.
In this way, instead of the user having to fast-forward or drag through the recording to find the content corresponding to a note, associating the input data with the audio data on the basis of the time information makes it easy to display the note content corresponding to the audio data, so that the input information and the audio data can be compared conveniently and the user experience is improved.
In addition, when the input data is displayed, the corresponding item of the input data can also be highlighted according to the playback position of the audio data. For example, during playback of the classroom audio data, when playback reaches 9:10 the content of key point 1 can be highlighted automatically on the touch display unit.
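Conversely, a sketch of the playback-position-to-note lookup (again an assumed structure, not the patent's implementation): the latest note whose time tag does not exceed the current playback position is the one to display or highlight.

```python
def note_for_position(entries, position_seconds):
    """entries: sorted list of (offset_seconds, note_text) pairs.

    Return the note whose time tag is the latest one not after the
    current playback position, or None before the first note.
    """
    latest = None
    for offset_seconds, text in entries:
        if offset_seconds <= position_seconds:
            latest = text
        else:
            break
    return latest


# Example from Fig. 2: at the 12-minute mark, key point 1 is the current note.
entries = [(10 * 60, "Key point 1 ..."), (15 * 60, "Key point 2 ...")]
print(note_for_position(entries, 12 * 60))  # -> Key point 1 ...
```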
In addition, in one embodiment, both the input data and the audio data are stored in the storage unit of the electronic device. In another embodiment, the input data is stored in the storage unit of the electronic device, and the audio data is stored in a remote storage unit. That is, the user can freely choose where each kind of data is stored according to the storage capacity, network communication capability, processing capability and so on of the electronic device.
In addition, the input data and the audio data may be encrypted when they are stored, which provides better data security. Data that the user does not want other users to see can be protected with various encryption schemes.
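As an illustration only (the patent does not specify an encryption scheme), a symmetric scheme could be applied before the data is written to local or remote storage; the use of the third-party `cryptography` package and the key handling shown here are assumptions:

```python
from cryptography.fernet import Fernet  # assumed third-party dependency


def encrypt_before_storing(note_bytes: bytes, audio_bytes: bytes, key: bytes):
    """Encrypt both the input data and the audio data before storage.

    Returns the two ciphertexts; whether they are then written to the
    local storage unit or a remote one is the user's choice, as above.
    """
    f = Fernet(key)
    return f.encrypt(note_bytes), f.encrypt(audio_bytes)


key = Fernet.generate_key()  # kept by the user / device
enc_note, enc_audio = encrypt_before_storing(b"Key point 1 ...", b"<pcm audio>", key)
```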
Note that the input data is not limited to text entered through a text input application. For example, the input data may comprise image data and/or text data, the image data being the content originally input by the user, and the text data being obtained by recognizing the content originally input by the user. That is, an image input application (for example a drawing board) may be used to record the content the user enters on the touch display unit as an image, or the content originally input by the user may be recognized to obtain the recognized text data. In addition, a photo taken with a camera application or the like may also be recorded as input data.
Therefore, according to the electronic device of the first embodiment of the present invention, recorded content and input content can be associated with each other, making both kinds of content more convenient to use.
<Second embodiment>
A data association method according to a second embodiment of the present invention is described below with reference to Fig. 4. The data association method is applied to an electronic device comprising an audio acquisition unit and a touch display unit.
The data association method 400 comprises:
Step S401: according to a first operation, starting the audio acquisition unit to acquire audio data.
In this step, for example, the user can start a voice input application by clicking, touching, mouse input, keyboard input, voice input, and so on, and begin acquiring audio data.
Step S402: according to a second operation, acquiring a user input through the touch display unit and recognizing the user input to obtain input data.
In this step, for example, the user can start an application and perform the input by stylus input, keyboard input, voice input, and so on. For example, a text input application, an image input application or a text recognition application can be started to input data.
Step S403: acquiring time information of the user input.
In this step, the time at which the user performs the input operation is acquired, for example as time information referenced to the system time of the electronic device, or as time information referenced to the start time of the voice input application.
Step S404: storing the audio data and the input data in association with each other, with the time information serving as an association attribute.
For example, each item of the input data may be stored in association with the corresponding position in the audio data, based on the time information of the user input. Specifically, if the user starts an input operation on the touch display unit while audio acquisition is in progress, the position in the audio data corresponding to the time at which the input operation was performed on the touch display unit is determined from the time information obtained in step S403. In other words, a time tag corresponding to the time of the input operation is attached to the audio data.
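Putting steps S401 to S404 together, a minimal end-to-end sketch (all object and method names here are assumptions used only for illustration, not the patent's implementation):

```python
import time


def data_association_method_400(recorder, touch_display, storage):
    """Sketch of method 400: record audio, capture timestamped notes, store both.

    recorder, touch_display and storage are hypothetical device abstractions.
    """
    # S401: according to the first operation, start audio acquisition.
    recorder.start()
    start_time = time.time()

    entries = []  # (offset_seconds, input_data) pairs
    while recorder.is_running():
        # S402: according to the second operation, acquire and recognize user input.
        note = touch_display.poll_input()  # returns None if nothing was entered
        if note is not None:
            # S403: acquire the time information of the user input
            # (relative to the start of the recording in this sketch).
            entries.append((time.time() - start_time, note))

    # S404: store the audio data and the input data in association with each
    # other, with the time information serving as the association attribute.
    storage.save(audio=recorder.stop(), notes=entries)
```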
Preferably, when the audio data is played back, the audio data at the position corresponding to a selected item of the input data is output through a loudspeaker of the electronic device according to the selection of that item.
Preferably, when the audio data is played back, the item of the input data corresponding to the current playback position is displayed on the touch display unit according to the playback position of the audio data.
Preferably, when the input data is displayed on the touch display unit, the corresponding item of the input data is highlighted according to the playback position of the audio data.
Preferably, both the input data and the audio data are stored in a storage unit of the electronic device.
Preferably, the input data is stored in a storage unit of the electronic device, and the audio data is stored in a remote storage unit.
Preferably, the input data and the audio data are encrypted when they are stored.
Preferably, the input data comprises image data and/or text data, the image data being the content originally input by the user, and the text data being obtained by recognizing the content originally input by the user.
Therefore, according to the data association method of the second embodiment of the present invention, recorded content and input content can be associated with each other, making both kinds of content more convenient to use.
Note that the embodiments above are only examples; the present invention is not limited to them, and various changes can be made.
Note that, in this specification, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the statement "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that comprises that element.
Finally, it should also be noted that the series of processes above includes not only processes performed in time series in the order described here, but also processes performed in parallel or individually rather than in chronological order.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention can be implemented by software plus the necessary hardware platform, or entirely in hardware. Based on this understanding, all or part of the contribution of the technical solution of the present invention over the background art can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as a ROM (read-only memory)/RAM (random access memory), a magnetic disk or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the method described in the embodiments, or in parts of the embodiments, of the present invention.
The present invention has been described in detail above. Specific examples have been used herein to explain its principles and embodiments, and the description of the embodiments is intended only to help understand the method of the present invention and its core idea. At the same time, those of ordinary skill in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (18)

1. A data association method applied to an electronic device comprising an audio acquisition unit and a touch display unit, the method comprising:
according to a first operation, starting the audio acquisition unit to acquire audio data;
according to a second operation, acquiring a user input through the touch display unit and recognizing the user input to obtain input data;
acquiring time information of the user input; and
storing the audio data and the input data in association with each other, with the time information serving as an association attribute.
2. The data association method of claim 1, wherein storing the audio data and the input data in association with each other, with the time information serving as an association attribute, comprises:
storing each item of the input data in association with the corresponding position in the audio data, based on the time information of the user input.
3. The data association method of claim 2, wherein, when the audio data is played back, the audio data at the position corresponding to a selected item of the input data is output through a loudspeaker of the electronic device according to the selection of that item.
4. The data association method of claim 2, wherein, when the audio data is played back, the item of the input data corresponding to the current playback position is displayed on the touch display unit according to the playback position of the audio data.
5. The data association method of claim 2, wherein, when the input data is displayed on the touch display unit, the corresponding item of the input data is highlighted according to the playback position of the audio data.
6. The data association method of claim 1, wherein both the input data and the audio data are stored in a storage unit of the electronic device.
7. The data association method of claim 1, wherein the input data is stored in a storage unit of the electronic device, and the audio data is stored in a remote storage unit.
8. The data association method of claim 1, wherein the input data and the audio data are encrypted when they are stored.
9. The data association method of claim 1, wherein the input data comprises image data and/or text data, the image data being the content originally input by the user, and the text data being obtained by recognizing the content originally input by the user.
10. An electronic device, comprising:
an audio acquisition unit configured to acquire audio data when started according to a first operation;
a touch display unit configured, when started according to a second operation, to acquire a user input and recognize the user input to obtain input data;
a time information acquiring unit configured to acquire time information of the user input;
a storage unit configured to store various kinds of data; and
a control unit configured to store the audio data and the input data in the storage unit in association with each other, with the time information serving as an association attribute.
11. The electronic device of claim 10, wherein the control unit is further configured to store each item of the input data in association with the corresponding position in the audio data, based on the time information of the user input.
12. The electronic device of claim 11, wherein the control unit is further configured to, when the audio data is played back, output the audio data at the position corresponding to a selected item of the input data through a loudspeaker of the electronic device according to the selection of that item.
13. The electronic device of claim 11, wherein the control unit is further configured to, when the audio data is played back, display the item of the input data corresponding to the current playback position on the touch display unit according to the playback position of the audio data.
14. The electronic device of claim 10, wherein the touch display unit is further configured to, when displaying the input data, highlight the corresponding item of the input data according to the playback position of the audio data.
15. The electronic device of claim 10, wherein both the input data and the audio data are stored in the storage unit of the electronic device.
16. The electronic device of claim 10, wherein the input data is stored in the storage unit of the electronic device, and the audio data is stored in a remote storage unit.
17. The electronic device of claim 10, wherein the input data and the audio data are encrypted when they are stored.
18. The electronic device of claim 10, wherein the input data comprises image data and/or text data, the image data being the content originally input by the user, and the text data being obtained by recognizing the content originally input by the user.
CN201210473371.2A 2012-11-20 2012-11-20 Data association method and electronic device Active CN103838723B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210473371.2A CN103838723B (en) 2012-11-20 2012-11-20 Data association method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210473371.2A CN103838723B (en) 2012-11-20 2012-11-20 Data association method and electronic device

Publications (2)

Publication Number Publication Date
CN103838723A true CN103838723A (en) 2014-06-04
CN103838723B CN103838723B (en) 2017-04-19

Family

ID=50802239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210473371.2A Active CN103838723B (en) 2012-11-20 2012-11-20 Data association method and electronic device

Country Status (1)

Country Link
CN (1) CN103838723B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040027372A1 (en) * 2002-04-03 2004-02-12 Cheng-Shing Lai Method and electronic apparatus capable of synchronously playing the related voice and words
CN101253549A (en) * 2005-08-26 2008-08-27 皇家飞利浦电子股份有限公司 System and method for synchronizing sound and manually transcribed text
CN101640058A (en) * 2009-07-24 2010-02-03 王祐凡 Multimedia synchronization method, player and multimedia data making device
CN102393854A (en) * 2011-09-09 2012-03-28 杭州海康威视数字技术股份有限公司 Method and device obtaining audio/video data
CN102592628A (en) * 2012-02-15 2012-07-18 张群 Play control method of audio and video play file

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105280207A (en) * 2014-06-25 2016-01-27 禾瑞亚科技股份有限公司 Playing method, device and system associated with touch information time
CN105280207B (en) * 2014-06-25 2018-12-25 禾瑞亚科技股份有限公司 Playing method, device and system associated with touch information time
US10310657B2 (en) 2014-06-25 2019-06-04 Egalax_Empia Technology Inc. Playback method, apparatus and system related to touch information timing
CN110415569A (en) * 2019-06-29 2019-11-05 嘉兴梦兰电子科技有限公司 Share educational method and system in campus classroom
CN110415569B (en) * 2019-06-29 2021-08-03 嘉兴梦兰电子科技有限公司 Campus classroom sharing education method and system

Also Published As

Publication number Publication date
CN103838723B (en) 2017-04-19

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant