CN108073715A - Dialect investigation method, system - Google Patents

Dialect investigation method, system

Info

Publication number
CN108073715A
CN108073715A (application CN201711435031.XA)
Authority
CN
China
Prior art keywords
video
user
dialect
video source
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711435031.XA
Other languages
Chinese (zh)
Inventor
刘艳平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuncheng University
Original Assignee
Yuncheng University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuncheng University filed Critical Yuncheng University
Priority to CN201711435031.XA
Publication of CN108073715A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/435 Filtering based on additional data, e.g. user or group profiles
    • G06F16/60 Information retrieval of audio data
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686 Retrieval using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention belongs to the field of computer technology and provides a dialect investigation method and system. The method includes: receiving a video recording instruction input by a user; according to the video recording instruction, capturing video within a specified angular range as a video source; receiving annotation information input by the user and associating it with the video source; storing the video source according to a pre-stored naming rule; and, according to a keyword input by the user, querying the stored video sources for a target video matching the keyword. The dialect investigation method and system of the present invention can improve dialect query efficiency, provide users with dialect pronunciation videos, and make annotation convenient for users.

Description

Dialect investigation method and system
Technical field
The present invention relates to the field of computer technology, and in particular to a dialect investigation method and system.
Background technology
With the development of society and the rise of entertainment and value-added services, demand for dialects such as Cantonese, Northeastern Mandarin, and Sichuanese has become increasingly prominent.
However, current dialect investigation systems have the following problems:
First, such systems stay faithful to the written text only; existing systems do not show the lip shape during pronunciation.
Second, such systems provide no annotation function and so cannot meet users' practical application needs.
How to improve dialect query efficiency, provide users with dialect pronunciation videos, and make annotation convenient for users is therefore a problem that those skilled in the art urgently need to solve.
Summary of the invention
In view of the above defects in the prior art, the present invention provides a dialect investigation method and system that can improve dialect query efficiency, provide users with dialect pronunciation videos, and make annotation convenient for users.
In a first aspect, the present invention provides a dialect investigation method, which includes:
a video recording step: receiving a video recording instruction input by a user;
according to the video recording instruction, capturing video within a specified angular range as a video source;
a video editing step: receiving annotation information input by the user and associating it with the video source;
a video storage step: storing the video source according to a pre-stored naming rule;
a video query step: according to a keyword input by the user, querying the stored video sources for a target video matching the keyword.
Further, the dialect investigation method of this embodiment also includes:
a standard pronunciation query step: according to a standard pronunciation query instruction input by the user,
retrieving and displaying the audio and characteristic-part images matching the standard pronunciation query instruction.
Further, retrieving the audio and characteristic-part images matching the standard pronunciation query instruction includes:
retrieving, from the local side or from a cloud platform, the audio and characteristic-part images matching the standard pronunciation query instruction.
Based on any of the above dialect investigation method embodiments, further, capturing video within a specified angular range as a video source according to the video recording instruction includes:
switching the working state of a camera according to the video recording instruction;
capturing, by the camera, images within the specified angular range, the images including a previous frame image and a current frame image;
extracting characteristic parts from the previous frame image and the current frame image according to pre-acquired facial features;
determining a parallax angle according to the positions of the extracted characteristic parts in the previous frame image and the current frame image;
adjusting the angle of the camera according to the parallax angle, and capturing video within the specified angular range as the video source.
Further, after receiving the video recording instruction input by the user, the method also includes:
obtaining video recording parameters set by the user;
adjusting the angle of the camera according to the parallax angle and capturing video within the specified angular range as the video source then specifically includes:
adjusting the angle of the camera according to the parallax angle and, according to the video recording parameters set by the user, capturing video within the specified angular range as the video source.
Based on any of the above dialect investigation method embodiments, further, receiving the annotation information input by the user and associating it with the video source includes:
playing the video source;
receiving the annotation information input by the user and associating the annotation information with a designated position in the video source.
Based on any of the above dialect investigation method embodiments, further, after capturing video within the specified angular range as the video source, the method also includes:
playing the video source;
extracting a target video of designated length from the video source;
converting the target video into a 3D animation according to parameters in a pre-built 3D database.
Further, the method also includes: collecting feature pronunciation data;
updating the feature pronunciation data into the 3D database.
Further, the method also includes: obtaining speaker information corresponding to the feature pronunciation data;
building an expert list according to the speaker information;
determining the messaging address of a target expert according to the user's click position and a chat instruction;
sending text, voice, or video to the messaging address of the target expert.
In a second aspect, the present invention provides a dialect investigation system, which includes a video recording unit, a video editing unit, a video storage unit, and a video query unit. The video recording unit is used to receive a video recording instruction input by a user and, according to the video recording instruction, capture video within a specified angular range as a video source. The video editing unit is used to receive annotation information input by the user and associate it with the video source. The video storage unit is used to store the video source according to a pre-stored naming rule. The video query unit is used to query the stored video sources, according to a keyword input by the user, for a target video matching the keyword.
As can be seen from the above technical solution, the dialect investigation method and system provided by this embodiment can record video according to the user's practical application needs and help collect dialect video sources. Meanwhile, the user can input annotation information to be associated with the video source, which facilitates video storage and later query, improves the query and storage of dialect sound and video, and makes annotation convenient for the user.
Description of the drawings
In order to more clearly illustrate the specific embodiments of the invention or the technical solutions of the prior art, the drawings needed for the description of the specific embodiments or the prior art are briefly introduced below. In all of the figures, similar elements or parts are generally identified by similar reference numerals. In the drawings, elements or parts are not necessarily drawn to scale.
Fig. 1 shows the user interface provided by the present invention;
Fig. 2 shows the flow chart of the dialect investigation method provided by the present invention;
Fig. 3 shows the structural diagram of the dialect investigation system provided by the present invention.
Specific embodiments
Embodiments of the technical solution of the present invention are described in detail below with reference to the drawings. The following embodiments are only used to clearly illustrate the technical solution of the present invention; they are intended merely as examples and cannot be used to limit the protection scope of the present invention.
It should be noted that, unless otherwise indicated, technical or scientific terms used in this application shall have the ordinary meaning understood by those of ordinary skill in the art to which the invention belongs.
The dialect investigation method provided by the embodiment of the present invention is suitable for an intelligent mobile terminal and can be APP software running on the intelligent mobile terminal. Referring to Fig. 1, the user interface of the APP can offer four options, respectively titled "standard pronunciation", "video recording", "contacts", and "me". The "standard pronunciation" option mainly implements the display of standard pronunciations; the "video recording" option mainly implements video recording, editing, and storage; the "contacts" option mainly implements user information storage; and the "me" option mainly implements management of the user's personal information, chiefly including the following tabs: login interface, software tutorial, my annotations, my files, system settings, and recording settings.
The system settings cover the following items: recording settings, check for updates, user feedback, contact us, invite friends, and clear cache.
The recording settings cover the following items: file storage location (supporting both local storage and cloud storage), recording bit rate, and recorded video resolution.
Each time a user logs in to the user interface, the "video recording" option is shown by default; the option shown can also be configured according to the user's practical application needs.
In a first aspect, the dialect investigation method provided by the embodiment of the present invention, referring to Fig. 2, includes:
Video recording step S1: receiving a video recording instruction input by a user.
According to the video recording instruction, video within a specified angular range is captured as a video source. Before recording, the user can adjust the distance and angle of the mobile phone camera so that the best facial video, especially lip-shape video, is recorded during pronunciation.
Video editing step S2: receiving annotation information input by the user and associating it with the video source.
Video storage step S3: storing the video source according to a pre-stored naming rule.
The naming rule is as follows:
Only the first file needs to be named. If the original video is named first, the original audio defaults to the same name as the original video; if the original audio is named first, the original video defaults to the same name as the original audio. The names of all subsequently generated files are then generated by the system according to a fixed default format.
For example, if the original video is named zhangsan, the original audio is also named zhangsan;
after an annotation is added, a file carrying the annotation is automatically generated and named zhangsan-biaozhu;
after the video is sheared, it is automatically saved as a new file named zhangsan-jianqie-020202-020218 (the sheared video covers the timeline from 2:02:02 to 2:02:18);
the video generated by 3D simulation is automatically saved as a new file named zhangsan-3D-020202-020218 (the 3D simulated video covers the timeline from 2:02:02 to 2:02:18);
after a picture is captured from the video, it is automatically saved as a new file named zhangsan-020202 (the image was captured at 2:02:02).
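The naming rule above can be sketched as a few helper functions. The six-digit hhmmss time encoding is inferred from the examples (020202 for 2:02:02), and the function names are illustrative assumptions, not part of the patent.

```python
def _hhmmss(hours: int, minutes: int, seconds: int) -> str:
    """Encode a timeline position as the six-digit string used in file names."""
    return f"{hours:02d}{minutes:02d}{seconds:02d}"

def annotated_name(base: str) -> str:
    """Name of the automatically generated annotated file."""
    return f"{base}-biaozhu"

def sheared_name(base: str, start: tuple, end: tuple) -> str:
    """Name of a sheared clip spanning start..end on the timeline."""
    return f"{base}-jianqie-{_hhmmss(*start)}-{_hhmmss(*end)}"

def simulated_3d_name(base: str, start: tuple, end: tuple) -> str:
    """Name of a clip produced by 3D simulation over start..end."""
    return f"{base}-3D-{_hhmmss(*start)}-{_hhmmss(*end)}"

def screenshot_name(base: str, at: tuple) -> str:
    """Name of a picture captured from the video at a timeline position."""
    return f"{base}-{_hhmmss(*at)}"
```

With an original video named zhangsan, these reproduce the example names given in the text.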
Video query step S4: according to a keyword input by the user, the stored video sources are queried for a target video matching the keyword.
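As a hedged sketch of the query step, the stored video sources can be modeled as a mapping from file name to associated annotation text, with the keyword matched against both. This data layout is an assumption; the patent only specifies keyword matching over stored sources.

```python
def query_videos(store: dict, keyword: str) -> list:
    """Return the names of stored video sources matching the user's keyword.

    `store` maps each video source's file name to its annotation text;
    a video matches if the keyword appears in its name or its annotations.
    """
    return sorted(
        name for name, annotation in store.items()
        if keyword in name or keyword in annotation
    )
```

A query for "zhangsan" would then return every file generated under that base name by the naming rule.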
As can be seen from the above technical solution, the dialect investigation method provided by this embodiment can record video according to the user's practical application needs and helps collect dialect video sources. Meanwhile, the user can input annotation information to be associated with the video source, which facilitates video storage and later query, improves the query and storage of dialect sound and video, and makes annotation convenient for the user.
In order to further improve the accuracy of the dialect investigation method of this embodiment in displaying standard pronunciations, in practical application this function can correspond to the "standard pronunciation" option in the APP user interface. The dialect investigation method of this embodiment can also display the corresponding standard pronunciation; that is, the method also includes:
Standard pronunciation query step: according to a standard pronunciation query instruction input by the user,
the audio and characteristic-part images matching the standard pronunciation query instruction are retrieved and displayed, i.e., the audio of a standard pronunciation and a picture of the corresponding standard mouth shape are shown. This can specifically include three parts: vowel standard pronunciations with standard mouth-shape pictures, consonant standard pronunciations with standard mouth-shape pictures, and tone pronunciation examples.
Here, the dialect investigation method of this embodiment can display standard pronunciations, which is convenient for users to imitate and learn from and improves the user experience.
Moreover, when retrieving the audio and characteristic-part images matching the standard pronunciation query instruction, they can specifically be retrieved from the local side or from a cloud platform. When retrieved from the local side, the installation package of the local side or a file at a designated storage location can be accessed; the user can thus browse locally stored standard pronunciation audio and images, and can also obtain standard pronunciation audio and images from the cloud platform. For users mostly in rural or remote areas, the locally stored standard pronunciation audio and images can still be browsed even when there is no network or the network signal is poor; meanwhile, when the network signal is good, the user can obtain more standard pronunciation audio and images from the cloud platform.
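The local-first, cloud-fallback retrieval described above can be sketched as follows, modeling each source as a simple mapping. In the real APP the local side would be the installation package or a designated storage path; the function and parameter names are assumptions.

```python
def fetch_standard_pronunciation(key: str, local: dict, cloud: dict):
    """Retrieve a standard pronunciation item, preferring local storage.

    Returns (item, origin) where origin records which side supplied it,
    so the app can still serve content when there is no network signal.
    """
    item = local.get(key)
    if item is not None:
        return item, "local"
    item = cloud.get(key)       # only reached when the item is not local
    if item is not None:
        return item, "cloud"
    raise KeyError(key)
```

When the network is poor, only the `local` mapping is consulted successfully; with a good signal the cloud platform supplies items absent locally.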
In terms of video recording, in practical application this function can correspond to the "video recording" option in the APP user interface. Capturing video within a specified angular range as a video source according to the video recording instruction is specifically implemented as follows:
According to the video recording instruction, the working state of the camera is switched.
The camera captures images within the specified angular range; the images include a previous frame image and a current frame image.
According to pre-acquired facial features, characteristic parts are extracted from the previous frame image and the current frame image.
According to the positions of the extracted characteristic parts in the previous frame image and the current frame image, a parallax angle is determined.
The angle of the camera is adjusted according to the parallax angle, and video within the specified angular range is captured as the video source.
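The parallax-angle step can be sketched with simple pinhole-camera geometry: the horizontal drift of a facial feature between the previous and current frames is converted into the angle the camera should rotate to keep the speaker in view. The pinhole model, the field-of-view default, and all names here are illustrative assumptions; the patent does not specify a formula.

```python
import math

def parallax_angle(prev_x: float, curr_x: float, frame_width: int,
                   horizontal_fov_deg: float = 60.0) -> float:
    """Estimate the camera rotation (degrees) from a feature's horizontal drift.

    prev_x / curr_x are pixel x-positions of the same characteristic part
    (e.g. a lip corner) in the previous and current frame images.
    """
    # Focal length in pixels for an assumed horizontal field of view.
    focal_px = (frame_width / 2) / math.tan(math.radians(horizontal_fov_deg / 2))
    drift_px = curr_x - prev_x  # how far the feature moved between frames
    return math.degrees(math.atan2(drift_px, focal_px))
```

A positive result rotates the camera in the direction the feature moved; a tracking loop would apply this correction each frame.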
Here, the dialect investigation method of this embodiment can also adjust the camera angle according to the facial features in each frame image, which is convenient for video acquisition and especially for collecting video sources focused on facial features. During actual recording, automatic tracking and capture of the image is realized whenever the user's position or angle shifts.
Moreover, after receiving the video recording instruction input by the user, the method also includes:
obtaining the video recording parameters set by the user. The video recording parameters can be the file storage location, the recording bit rate, and the recorded video resolution, so as to meet diverse business needs.
Adjusting the angle of the camera according to the parallax angle and capturing video within the specified angular range as the video source is then specifically implemented as:
adjusting the angle of the camera according to the parallax angle and, according to the video recording parameters set by the user, capturing video within the specified angular range as the video source.
Here, the dialect investigation method of this embodiment can also grant the user permission to set parameters, which is convenient for configuring parameters according to, and thereby meeting, the user's practical application needs.
In practical application, recording can be set to either single-person recording or two-person chat.
In single-person recording, the dialect speaker mainly records the video alone.
In two-person chat, the dialect speaker and the researcher mainly conduct a video chat through the mobile phone APP. During this process, the software automatically records the entire video and audio; by default, in the landscape state of the mobile phone screen, the chat window is evenly divided into two columns showing the facial video of the dialect speaker and of the researcher respectively, and clicking either screen freely switches the chat window between single-column and two-column display.
The above video recording process provides data support for the 3D simulation reconstruction of video: it mainly collects the facial features and lip-shape features of the speaker reading aloud word lists, vocabulary, paragraphs, and so on in a natural state, providing data support for 3D simulation.
In terms of information annotation, receiving the annotation information input by the user and associating it with the video source is specifically implemented as follows:
The video source is played.
The annotation information input by the user is received, and the annotation information is associated with a designated position in the video source.
In practical application, playback starts after a video is added; during playback, the user can pause at any time and add an annotation at the pause point, with both text and voice annotation forms supported. The video can be sheared; when the sheared video is saved, the user is prompted whether to save it together with the annotations.
The dialect investigation method of the embodiment of the present invention can also provide a screenshot annotation function. For example, after the video source is played, the method also includes: capturing an image of the video source.
The annotation information input by the user is received, and the annotation information is associated with the captured image.
The dialect investigation method of the embodiment of the present invention supports annotating pictures and supports both text and voice annotation forms.
Here, the dialect investigation method of this embodiment can provide the user with an information annotation function, meeting the user's annotation needs; moreover, the method can annotate a designated position in the video source or realize screenshot annotation, which is convenient for users to study and browse.
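A minimal sketch of associating annotations with a designated position in the video source, supporting the two annotation forms named above (text and voice). The field and class names are assumptions chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    position_s: float   # playback position the user paused at, in seconds
    kind: str           # "text" or "voice"
    payload: str        # the annotation text, or a path to the voice clip

@dataclass
class VideoSource:
    name: str
    annotations: list = field(default_factory=list)

    def annotate(self, position_s: float, kind: str, payload: str) -> None:
        """Attach an annotation to a designated position in this source."""
        if kind not in ("text", "voice"):
            raise ValueError("only text and voice annotations are supported")
        self.annotations.append(Annotation(position_s, kind, payload))
```

Saving a sheared clip "together with the annotations" would then amount to filtering this list by the clip's time range.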
Moreover, the dialect investigation method of this embodiment is also equipped with a 3D simulation function; that is, after video within the specified angular range is captured as the video source, the method also includes:
The video source is played.
A target video of designated length is extracted from the video source.
According to the parameters in the pre-built 3D database, the target video is converted into a 3D animation so as to provide the user with 3D animation. In other words, the method provides 3D simulation reconstruction of video: a segment of video is selected, 3D simulation is chosen, and the facial features, lip-shape features, and so on of the speaker in that segment are reproduced in the form of a 3D animation clip. The background of this function is that, when a researcher has difficulty deciding on a speaker's specific pronunciation during later study, the rounding and spreading of the lip shape and the degree of mouth opening shown in the 3D simulation animation can be consulted to determine the specific pronunciation. This both supports the user's study of paragraph reading and enables imitation of the pronunciation of specific vocabulary.
In terms of pronunciation feature collection, the dialect investigation method of the embodiment of the present invention can also collect feature pronunciation data and update it into the 3D database, so as to keep the database current and provide data support for 3D simulation. The feature pronunciation data can be the facial features, lip-shape features, and so on of the speaker reading aloud word lists, vocabulary, paragraphs, and the like in the most natural state.
In terms of contacts, the dialect investigation method of this embodiment can also provide the user with an information interaction function for designated users; the user can interact with any target user in the contacts. The contacts include an expert list and a friend list.
For the expert list, its construction is specifically implemented as follows: speaker information corresponding to the feature pronunciation data is obtained, and the expert list is built according to the speaker information. In the expert list, each expert's name is associated with that expert's pronunciation feature data and is preset inside the APP software. When the expert list is updated, user terminals with the corresponding APP software installed can synchronize the updated expert list.
When interacting with a target expert in the expert list, the specific implementation is as follows:
According to the user's click position and a chat instruction, the messaging address of the target expert is determined;
text, voice, or video is sent to the messaging address of the target expert.
For example, clicking a name directly enters chat mode, in which files can be transmitted and text chat, voice chat, and video chat are available; video chat directly enters two-person mode.
For the friend list, friends can be added according to the user's needs, which is more personalized and meets the user's individual needs; for example, a friend the user often chats with can be added to the friend list for convenient daily use.
When interacting with a target user in the friend list, the specific implementation is as follows:
According to the user's click position and a chat instruction, the messaging address of the target user is determined;
text, voice, or video is sent to the messaging address of the target user, entering chat mode, in which files can be transmitted and text chat, voice chat, and video chat are available; video chat directly enters two-person mode.
Here, the dialect investigation method of this embodiment can provide the user with contacts, making it convenient for the user to add multiple friends and for information to be exchanged between different users.
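A hedged sketch of determining a target expert's messaging address from the clicked name and dispatching a message. The address format, the record shape, and all names here are illustrative assumptions; the patent does not define a message protocol.

```python
def build_expert_list(speaker_records: list) -> dict:
    """Build the expert list from (name, messaging_address) speaker records."""
    return {name: address for name, address in speaker_records}

def send_message(expert_list: dict, clicked_name: str, payload: str) -> dict:
    """Resolve the clicked expert's address and return the dispatched message.

    `clicked_name` stands in for the user's click position; `payload` can be
    text, a voice clip reference, or a video reference.
    """
    address = expert_list[clicked_name]
    return {"to": address, "payload": payload}
```

A friend list would work the same way, with the user rather than the APP preset supplying the records.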
In a second aspect, the dialect investigation system provided by the embodiment of the present invention, referring to Fig. 3, includes a video recording unit 1, a video editing unit 2, a video storage unit 3, and a video query unit 4. The video recording unit 1 is used to receive a video recording instruction input by a user and, according to the video recording instruction, capture video within a specified angular range as a video source. The video editing unit 2 is used to receive annotation information input by the user and associate it with the video source. The video storage unit 3 is used to store the video source according to a pre-stored naming rule. The video query unit 4 is used to query the stored video sources, according to a keyword input by the user, for a target video matching the keyword.
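The four units can be sketched structurally as one class whose methods mirror the recording, editing, and query units; all names and the in-memory store are assumptions made for illustration, not the patent's implementation.

```python
class DialectSurveySystem:
    """Structural sketch of the recording / editing / storage / query units."""

    def __init__(self):
        self._store = {}  # storage unit: video source name -> annotation list

    def record(self, name: str) -> str:
        """Video recording unit: accept an instruction, register a new source."""
        self._store[name] = []
        return name

    def annotate(self, name: str, text: str) -> None:
        """Video editing unit: associate user annotation with a source."""
        self._store[name].append(text)

    def query(self, keyword: str) -> list:
        """Video query unit: keyword match over names and annotations."""
        return sorted(
            n for n, notes in self._store.items()
            if keyword in n or any(keyword in t for t in notes)
        )
```

In the real system the store would be backed by named files (local or cloud) under the naming rule rather than a dict.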
As can be seen from the above technical solution, the dialect investigation system provided by this embodiment can record video according to the user's practical application needs and helps collect dialect video sources. Meanwhile, the user can input annotation information to be associated with the video source, which facilitates video storage and later query, improves the query and storage of dialect sound and video, and makes annotation convenient for the user.
In the specification of the present invention, numerous specific details are set forth. It is to be understood, however, that the embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this description.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that the specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described can be combined in an appropriate manner in any one or more embodiments or examples. In addition, provided they do not contradict each other, those skilled in the art can combine the different embodiments or examples described in this specification and the features of different embodiments or examples.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some or all of their technical features can be equivalently replaced; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention, and they shall all be covered by the scope of the claims and specification of the present invention.

Claims (10)

1. a kind of dialect investigation method, which is characterized in that including:
Video record step:Receive video record instruction input by user;
It is instructed according to the video record, the video in the range of specified angle is gathered, as video source;
Video editing step:The markup information that user is inputted is received, and is associated to the video source;
Video storing step:According to pre-stored nomenclature principle, the video source is stored;
Query video step:According to keyword input by user, in the video source of storage, inquiry and the Keywords matching Target video.
2. dialect investigation method according to claim 1, which is characterized in that this method further includes:
Standard pronunciation query steps:According to standard pronunciation inquiry instruction input by user;
It transfers and shows the audio of instructions match and characteristic portion image with the standard pronunciation, be shown.
3. The dialect investigation method according to claim 2, characterized in that
retrieving and displaying the audio and characteristic-part images matching the standard pronunciation query instruction comprises:
retrieving, from a local side or a cloud platform, and displaying the audio and characteristic-part images matching the standard pronunciation query instruction.
4. The dialect investigation method according to claim 1, characterized in that
capturing, according to the video recording instruction, video within a specified angle range as a video source comprises:
switching the working state of a camera according to the video recording instruction;
capturing, by the camera, images within the specified angle range, the images including a previous frame image and a current frame image;
extracting a characteristic part from the previous frame image and the current frame image according to pre-acquired facial features;
determining a parallax angle according to the positions of the extracted characteristic part in the previous frame image and the current frame image;
adjusting the angle of the camera according to the parallax angle, and capturing the video within the specified angle range as the video source.
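The parallax-angle step of claim 4 can be illustrated with a pinhole-camera approximation: the angular displacement of a tracked facial feature between the previous and current frames gives the pan correction. The focal length in pixels and the one-axis (horizontal) model are assumptions for the sketch; the patent does not specify the camera geometry.

```python
import math

def parallax_angle(prev_xy, curr_xy, focal_px):
    """Degrees the tracked facial feature moved between two frames,
    computed from its horizontal pixel displacement under a pinhole model."""
    dx = curr_xy[0] - prev_xy[0]
    return math.degrees(math.atan2(dx, focal_px))

def adjust_camera(pan_deg, prev_xy, curr_xy, focal_px=800.0):
    """Pan the camera by the parallax angle so the speaker stays in frame."""
    return pan_deg + parallax_angle(prev_xy, curr_xy, focal_px)

# The feature (e.g. a mouth corner) drifted 80 px to the right between frames,
# so the camera pans right by roughly atan(80/800) ~= 5.7 degrees:
new_pan = adjust_camera(0.0, (320, 240), (400, 240))
```

A stationary feature yields a zero parallax angle, so the camera angle is left unchanged between such frames.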
5. The dialect investigation method according to claim 4, characterized in that
after receiving the video recording instruction input by the user, the method further comprises:
obtaining video recording parameters set by the user;
and adjusting the angle of the camera according to the parallax angle and capturing the video within the specified angle range as the video source specifically comprises:
adjusting the angle of the camera according to the parallax angle, and capturing, according to the video recording parameters set by the user, the video within the specified angle range as the video source.
6. The dialect investigation method according to claim 1, characterized in that
receiving annotation information input by the user and associating it with the video source comprises:
playing the video source;
receiving the annotation information input by the user, and associating the annotation information with a specified position in the video source.
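One plausible reading of "a specified position in the video source" in claim 6 is a playback timestamp. The sketch below attaches each annotation to a timestamp and keeps the list in playback order; the dictionary fields are assumed, not taken from the patent.

```python
def annotate(video_annotations, timestamp_s, text):
    """Associate annotation text with a specified position (a timestamp,
    in seconds) in the playing video, keeping entries in playback order."""
    video_annotations.append({"t": timestamp_s, "text": text})
    video_annotations.sort(key=lambda a: a["t"])
    return video_annotations

notes = []
annotate(notes, 12.5, "entering tone preserved")
annotate(notes, 3.0, "initial consonant differs from standard")
# The 3.0 s note now precedes the 12.5 s note regardless of input order.
```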
7. The dialect investigation method according to claim 1, characterized in that
after capturing the video within the specified angle range as the video source, the method further comprises:
playing the video source;
extracting a target video of a specified length from the video source;
converting the target video into a 3D three-dimensional animation according to parameters in a pre-built 3D database.
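The extraction and 3D-conversion steps of claim 7 can be sketched as a clip slice followed by a per-frame lookup of articulator parameters in the pre-built 3D database. The phone labels, the `jaw`/`tongue` parameters, and the `rest` fallback key are all illustrative assumptions; the patent does not describe the database schema.

```python
def extract_target(frames, start, length):
    """Extract a target clip of the specified length from the video source."""
    return frames[start:start + length]

def to_3d_animation(clip, db):
    """Map each frame label of the clip to articulator parameters from the
    pre-built 3D database, falling back to a rest pose for unknown labels."""
    return [db.get(phone, db["rest"]) for phone in clip]

# Assumed database: per-phone jaw opening and tongue height in [0, 1].
db = {"m": {"jaw": 0.1, "tongue": 0.5},
      "a": {"jaw": 0.8, "tongue": 0.2},
      "rest": {"jaw": 0.0, "tongue": 0.0}}

clip = extract_target(["m", "a", "a", "x"], start=0, length=3)
animation = to_3d_animation(clip, db)  # one parameter set per frame
```

Updating the database with newly collected characteristic pronunciation data, as claim 8 adds, would amount to inserting or overwriting entries in `db`.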
8. The dialect investigation method according to claim 7, characterized in that the method further comprises:
collecting characteristic pronunciation data;
updating the characteristic pronunciation data into the 3D database.
9. The dialect investigation method according to claim 8, characterized in that the method further comprises:
obtaining speaker information corresponding to the characteristic pronunciation data;
building an expert list according to the speaker information;
determining an information delivery address of a target expert according to the user's click position and a chat instruction;
sending text, voice, or video to the information delivery address of the target expert.
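Claim 9's expert-contact flow can be sketched as building a list from speaker records and resolving the user's click position to one list entry's delivery address. The record fields and the index-based click model are assumptions made for illustration; actual delivery of text, voice, or video is represented by a stand-in return value.

```python
class ExpertDirectory:
    """Builds an expert list from speaker information and routes a message
    to the expert selected by the user's click position (a sketch)."""
    def __init__(self, speakers):
        # One list entry per speaker who contributed characteristic pronunciations.
        self.experts = [{"name": s["name"], "address": s["address"]}
                        for s in speakers]

    def target_address(self, click_index):
        # The click position selects an entry in the displayed expert list.
        return self.experts[click_index]["address"]

    def send(self, click_index, payload):
        # Stand-in for actual text/voice/video delivery to the target expert.
        return {"to": self.target_address(click_index), "body": payload}

directory = ExpertDirectory([{"name": "Prof. Li", "address": "li@example.edu"}])
msg = directory.send(0, "Is this an entering-tone syllable?")
```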
10. A dialect investigation system, characterized by comprising:
a video recording unit, configured to receive a video recording instruction input by a user,
and capture, according to the video recording instruction, video within a specified angle range as a video source;
a video editing unit, configured to receive annotation information input by the user and associate it with the video source;
a video storage unit, configured to store the video source according to a pre-stored naming convention;
a video query unit, configured to query, according to a keyword input by the user, the stored video sources for a target video matching the keyword.
CN201711435031.XA 2017-12-26 2017-12-26 Dialect investigation method, system Pending CN108073715A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711435031.XA CN108073715A (en) 2017-12-26 2017-12-26 Dialect investigation method, system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711435031.XA CN108073715A (en) 2017-12-26 2017-12-26 Dialect investigation method, system

Publications (1)

Publication Number Publication Date
CN108073715A true CN108073715A (en) 2018-05-25

Family

ID=62155289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711435031.XA Pending CN108073715A (en) 2017-12-26 2017-12-26 Dialect investigation method, system

Country Status (1)

Country Link
CN (1) CN108073715A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101382937A (en) * 2008-07-01 2009-03-11 深圳先进技术研究院 Multimedia resource processing method based on speech recognition and on-line teaching system thereof
CN101631257A (en) * 2009-08-06 2010-01-20 中兴通讯股份有限公司 Method and device for realizing three-dimensional playing of two-dimensional video code stream
CN104581062A (en) * 2014-12-26 2015-04-29 中通服公众信息产业股份有限公司 Video monitoring method and system capable of realizing identity information and video linkage
CN106409030A (en) * 2016-12-08 2017-02-15 河南牧业经济学院 Customized foreign spoken language learning system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101382937A (en) * 2008-07-01 2009-03-11 深圳先进技术研究院 Multimedia resource processing method based on speech recognition and on-line teaching system thereof
CN101631257A (en) * 2009-08-06 2010-01-20 中兴通讯股份有限公司 Method and device for realizing three-dimensional playing of two-dimensional video code stream
CN104581062A (en) * 2014-12-26 2015-04-29 中通服公众信息产业股份有限公司 Video monitoring method and system capable of realizing identity information and video linkage
CN106409030A (en) * 2016-12-08 2017-02-15 河南牧业经济学院 Customized foreign spoken language learning system

Similar Documents

Publication Publication Date Title
US9363360B1 (en) Text message definition and control of multimedia
US20130262127A1 (en) Content Customization
US20130257871A1 (en) Content Customization
Metcalf mLearning: Mobile learning and performance in the palm of your hand
US7308479B2 (en) Mail server, program and mobile terminal synthesizing animation images of selected animation character and feeling expression information
CN1672178B (en) Method and device for instant motion picture communication
US20090164916A1 (en) Method and system for creating mixed world that reflects real state
US20080228480A1 (en) Speech recognition method, speech recognition system, and server thereof
CN107220228A (en) One kind teaching recorded broadcast data correction device
CN107562195A (en) Man-machine interaction method and system
CN115494941A (en) Meta-universe emotion accompanying virtual human realization method and system based on neural network
US8074176B2 (en) Electronic communications dialog using sequenced digital images stored in an image dictionary
JP2002288213A (en) Data-forwarding device, data two-way transmission device, data exchange system, data-forwarding method, data-forwarding program, and data two-way transmission program
CN108073715A (en) Dialect investigation method, system
CN108055192A (en) Group's generation method, apparatus and system
CN116954437A (en) Information interaction processing method, device, equipment and computer storage medium
KR101165300B1 (en) UCC service system based on pattern-animation
KR20200068512A (en) Portal service object system for providing custom VR contents and Drive method of the Same
CN115629666A (en) Intelligent tourism platform system and method based on VR/AR virtual immersion technology
WO2021229692A1 (en) Avatar control program, avatar control method and information processing device
KR20100012525A (en) Method and system for generating ria based character movie clip
JPWO2021229692A5 (en)
JP2011519079A (en) Photorealistic talking head creation, content creation, and distribution system and method
Spadoni et al. A Personalized Expert Guide for the Hybrid Museums of the Future
Wark et al. The FOCAL point-multimodal dialogue with virtual geospatial displays

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180525