CN107071554A - Method for recognizing semantics and device - Google Patents

Method for recognizing semantics and device

Info

Publication number
CN107071554A
Authority
CN
China
Prior art keywords
identified
language
represented
video segment
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710038438.2A
Other languages
Chinese (zh)
Other versions
CN107071554B (en)
Inventor
李钟伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710038438.2A priority Critical patent/CN107071554B/en
Publication of CN107071554A publication Critical patent/CN107071554A/en
Application granted granted Critical
Publication of CN107071554B publication Critical patent/CN107071554B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N 21/4753 End-user interface for inputting end-user data for user identification, e.g. by entering a PIN or password
    • H04N 21/488 Data services, e.g. news ticker
    • H04N 21/4884 Data services for displaying subtitles
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Abstract

The invention discloses a semantic recognition method and device. The method includes: receiving an identification instruction, where the identification instruction requests recognition of an object to be identified in the subtitles played on the playback interface of a video, the object to be identified being expressed in a first language; obtaining the object to be identified from the subtitles; looking up a recognition result, expressed in a second language, that matches the object to be identified; saving the object to be identified expressed in the first language, the recognition result expressed in the second language, and information on the video segment corresponding to the object to be identified, where the information on the video segment is used to play the video segment after a play instruction is received; and outputting the recognition result expressed in the second language. The invention solves the technical problem that it is inconvenient for users to learn new words while watching videos.

Description

Method for recognizing semantics and device
Technical field
The present invention relates to the video field, and in particular to a semantic recognition method and device.
Background technology
Watching foreign-language films is an important way of learning a language, but current video players do not provide convenient functions to support it. While watching a foreign-language film, viewers frequently encounter unfamiliar words. In the prior art, someone who wants to learn a foreign language by watching videos can only record new words manually, or look them up with a dictionary or other tool and save the meanings for later review. The combination of sound and picture that the video already provides cannot be used effectively to help the user memorize words. Recording words manually is inconvenient, and when reviewing, the user can only study the bare words and cannot find the video segment in which each word appeared, which makes word study dull. Words and video are thus separated and not combined effectively.
No effective solution to the above problem has yet been proposed.
Summary of the invention
Embodiments of the invention provide a semantic recognition method and device, to at least solve the technical problem that it is inconvenient for users to learn new words while watching videos.
According to one aspect of the embodiments of the invention, a semantic recognition method is provided, including: receiving an identification instruction, where the identification instruction requests recognition of an object to be identified in the subtitles played on the playback interface of a video, the object to be identified being expressed in a first language; obtaining the object to be identified from the subtitles; looking up a recognition result, expressed in a second language, that matches the object to be identified; saving the object to be identified expressed in the first language, the recognition result expressed in the second language, and information on the video segment corresponding to the object to be identified, where the information on the video segment is used to play the video segment after a play instruction is received; and outputting the recognition result expressed in the second language.
According to another aspect of the embodiments of the invention, a semantic recognition device is also provided, including: a first receiving unit, configured to receive an identification instruction, where the identification instruction requests recognition of an object to be identified in the subtitles played on the playback interface of a video, the object to be identified being expressed in a first language; a first acquisition unit, configured to obtain the object to be identified from the subtitles; a first searching unit, configured to look up a recognition result, expressed in a second language, that matches the object to be identified; a first storage unit, configured to save the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified, where the information on the video segment is used to play the video segment after a play instruction is received; and a first output unit, configured to output the recognition result expressed in the second language.
In the embodiments of the invention, after an identification instruction is received, the object to be identified is obtained from the subtitles, a recognition result expressed in the second language and matching the object to be identified is looked up, the object to be identified, the recognition result, and the information on the corresponding video segment are saved, and the recognition result expressed in the second language is output. The object to be identified, the recognition result, and the corresponding video segment are thus saved together, so that the user can view the corresponding video segment when reviewing a new word. This achieves the technical effect of allowing the user to learn new words together with the corresponding video segments, and thereby solves the technical problem that it is inconvenient for users to learn new words while watching videos.
Brief description of the drawings
The accompanying drawings described here are provided to give a further understanding of the invention and constitute a part of this application. The schematic embodiments of the invention and their description are used to explain the invention and do not constitute an improper limitation of the invention. In the drawings:
Fig. 1 is a schematic diagram of the hardware environment of a semantic recognition method according to an embodiment of the invention;
Fig. 2 is a flow chart of an optional semantic recognition method according to an embodiment of the invention;
Fig. 3 is a schematic diagram of an exported document according to an embodiment of the invention;
Fig. 4 is a flow chart of a semantic recognition method according to an embodiment of the invention;
Fig. 5 is a flow chart of another semantic recognition method according to an embodiment of the invention;
Fig. 6 is a schematic diagram of an optional semantic recognition device according to an embodiment of the invention; and
Fig. 7 is a structural block diagram of a terminal according to an embodiment of the invention.
Detailed description of the embodiments
To enable those skilled in the art to better understand the solutions of the invention, the technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the invention without creative effort shall fall within the scope of protection of the invention.
It should be noted that the terms "first", "second", and the like in the description, the claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the invention described here can be implemented in an order other than those illustrated or described here. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product, or device.
Embodiment 1
According to an embodiment of the invention, a method embodiment of semantic recognition is provided.
Optionally, in this embodiment, the above semantic recognition method can be applied to the hardware environment, shown in Fig. 1, formed by a server 102 and a terminal 104. As shown in Fig. 1, the server 102 is connected to the terminal 104 through a network. The network includes, but is not limited to, a wide area network, a metropolitan area network, or a local area network, and the terminal 104 is not limited to a PC, a mobile phone, a tablet computer, or the like. The semantic recognition method of the embodiments of the invention may be performed by the server 102, by the terminal 104, or jointly by the server 102 and the terminal 104. When performed by the terminal 104, the method may also be performed by a client installed on the terminal.
In an optional application scenario, when a user encounters an unfamiliar word, phrase, or sentence while watching a video, the user clicks, with the mouse, the object to be identified displayed on the screen in the first language. After the object to be identified is obtained, the recognition result of the object, expressed in the second language, is looked up in a backend database or a network database, and the object to be identified, the recognition result, and a video segment containing the object are saved. For example, the video segment may be the segment extending 5 seconds before and 5 seconds after the appearance of the object. The recognition result expressed in the second language is then output, so that the user can immediately see the semantic recognition result of the object to be identified.
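To make the scenario concrete, the following is a minimal Python sketch of the kind of record such a save could produce; the SavedEntry type, the field names, and the 5-second padding constant are illustrative assumptions rather than part of the claimed method.

```python
from dataclasses import dataclass

CLIP_PADDING_SECONDS = 5.0  # assumed window around the clicked word


@dataclass
class SavedEntry:
    """One saved semantic record: word, definition, and clip location."""
    word: str          # object to be identified, in the first language
    definition: str    # recognition result, in the second language
    video_title: str   # e.g. the film name shown in the word folder
    video_url: str     # position link used to replay the segment
    clip_start: float  # seconds into the video
    clip_end: float


def make_entry(word: str, definition: str, video_title: str,
               video_url: str, click_time: float) -> SavedEntry:
    """Build an entry whose clip spans a few seconds around the click."""
    return SavedEntry(
        word=word,
        definition=definition,
        video_title=video_title,
        video_url=video_url,
        clip_start=max(0.0, click_time - CLIP_PADDING_SECONDS),
        clip_end=click_time + CLIP_PADDING_SECONDS,
    )


entry = make_entry("apple", "苹果", "Film A",
                   "https://example.com/film-a", click_time=125.0)
print(entry)
```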
Fig. 2 is a flow chart of an optional semantic recognition method according to an embodiment of the invention. As shown in Fig. 2, the method may include the following steps:
Step S202: receive an identification instruction, where the identification instruction requests recognition of an object to be identified in the subtitles played on the playback interface of a video, the object to be identified being expressed in a first language.
Step S204: obtain the object to be identified from the subtitles.
Step S206: look up a recognition result, expressed in a second language, that matches the object to be identified.
Step S208: save the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified, where the information on the video segment is used to play the video segment after a play instruction is received.
Step S210: output the recognition result expressed in the second language.
Through steps S202 to S210, after the identification instruction is received, the object to be identified is obtained from the subtitles, a recognition result expressed in the second language and matching the object to be identified is looked up, the object to be identified, the recognition result, and the information on the corresponding video segment are saved, and the recognition result expressed in the second language is output. The object to be identified, the recognition result, and the corresponding video segment are thus saved together, so that the user can view the corresponding video segment when reviewing a new word. This achieves the technical effect of allowing the user to learn new words together with the corresponding video segments, solves the technical problem that it is inconvenient for users to learn new words while watching videos, and improves the convenience of operation when learning new words while watching videos.
In the technical solution provided by step S202, receiving the identification instruction may be performed by a terminal. The terminal may be of various types, such as a mobile phone, a computer, or a tablet computer. The identification instruction may be an instruction issued when the terminal detects a mouse click by the user. For example, during video playback, the user clicks the subtitle display area with the mouse to issue an identification instruction requesting recognition of the object to be identified in the subtitles played on the playback interface of the video. The object to be identified is expressed in the first language, which may be English, Chinese, or any other natural language. The identification instruction may also be issued in other ways, for example by clicking the subtitles on the screen. In general, the subtitles are displayed in the lower part of the screen; if the user clicks a position on the screen where there is no text, the click is not treated as an identification instruction, whereas clicking the subtitle text is treated as receiving an identification instruction. When the mouse pointer is dragged over the display area and points at a word in the subtitles, the pointer shape may be changed to indicate to the user that the word at the current position can be selected and recognized.
In the technical solution provided by step S204, the object to be identified may be determined from the identification instruction. For example, if the identification instruction is an instruction to recognize a certain word, that word is obtained from the subtitles according to the instruction. The object to be identified may be a word, a phrase, or a sentence.
In the technical solution provided by step S206, after the object to be identified is obtained from the subtitles, a recognition result expressed in the second language and matching the object to be identified is looked up. The first language and the second language may be two different languages: for example, the first language is Chinese and the second language is English, or the first language is English and the second language is Chinese, or the first language is French and the second language is Japanese. The first and second languages may also be different forms of the same language, for example classical Chinese as the first language and modern Chinese as the second language. The recognition result expressed in the second language may be looked up in a dictionary in a preset local database, or retrieved from a network server.
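A lookup of this kind, trying a preset local dictionary first and falling back to a network server, could be sketched as follows; the local dictionary contents and the remote JSON service are assumptions made only for illustration.

```python
import json
import urllib.parse
import urllib.request
from typing import Optional

LOCAL_DICTIONARY = {"apple": "苹果", "river": "河流"}  # assumed preset local dictionary


def look_up(word: str, remote_url: Optional[str] = None) -> Optional[str]:
    """Return the recognition result (second-language definition) for `word`,
    checking the local dictionary first and a network server as a fallback."""
    result = LOCAL_DICTIONARY.get(word.lower())
    if result is not None:
        return result
    if remote_url:  # hypothetical dictionary web service returning JSON
        query = urllib.parse.urlencode({"q": word})
        with urllib.request.urlopen(f"{remote_url}?{query}") as resp:
            return json.loads(resp.read()).get("definition")
    return None


print(look_up("apple"))  # served from the local dictionary; prints 苹果
```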
In the technical solution provided by step S208, after the recognition result is found, the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved, where the information on the video segment is used to play the video segment after a play instruction is received. When saved, the object to be identified, the recognition result, and the corresponding video segment may be stored together as one semantic entry. The information on the video segment may include the content of the segment, a link to the segment, the title of the video, and so on. In some cases only a link to the video segment is saved rather than its content, to save storage space on the terminal; in other cases the content of the video segment is saved to save the terminal's data traffic, so that the user can view the segment directly on later review instead of downloading it from the server each time. The semantic entry may be stored in the storage space corresponding to the word folder. The length of the video segment may be modified according to the user's needs, for example to 10 seconds, or 2 seconds, before and after the appearance of the object to be identified.
In the technical solution provided by step S210, outputting the recognition result expressed in the second language may mean displaying the recognition result on the terminal. When the recognition result is output, video playback may be paused so that the user can view it conveniently, or playback may continue while the recognition result is displayed in a corner of the playback interface.
As an optional embodiment, after the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved, a play instruction is received; the information on the video segment is looked up in response to the play instruction; and the video segment is played according to the information on the video segment.
After the information on the video segment corresponding to the object to be identified is saved, the play instruction issued by the user is received. The play instruction is used to play the video segment; the information on the video segment is looked up according to the play instruction, and after the information is found, the video segment is played.
As an optional embodiment, the information on the video segment may be a position link or the content of the video segment. Playing the video segment according to the information includes one of the following: when the information is a position link, the link is accessed to obtain the video segment and the video segment is then played, where the position link indicates the position of the video segment within the video, and the content of the segment can be viewed by accessing the link; when the information indicates the content of the video segment, that content is played.
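The two cases could be dispatched as in the sketch below, assuming a hypothetical player interface with one method for streaming from a position link and another for playing locally stored content.

```python
from typing import Union


class Player:
    """Stand-in for the terminal's video player (hypothetical interface)."""

    def play_from_link(self, link: str, start: float, end: float) -> None:
        print(f"streaming {link} from {start:.1f}s to {end:.1f}s")

    def play_local(self, content: bytes) -> None:
        print(f"playing {len(content)} bytes of locally saved clip")


def play_segment(player: Player, info: Union[str, bytes],
                 start: float = 0.0, end: float = 0.0) -> None:
    """Play a saved segment from either a position link or stored content."""
    if isinstance(info, str):   # position link: fetch the clip from the video
        player.play_from_link(info, start, end)
    else:                       # saved content: play directly, no data traffic
        player.play_local(info)


player = Player()
play_segment(player, "https://example.com/film-a", 120.0, 130.0)
play_segment(player, b"\x00" * 1024)
```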
As an optional embodiment, before the object to be identified is obtained from the subtitles, the subtitles played on the playback interface of the video are processed by a predetermined component to obtain subtitle text corresponding to the video, and the subtitle text is played at a predetermined position on the playback interface of the video.
The predetermined component may be a background processing component used to process the subtitles on the playback interface of the video. The processing may convert the subtitles on the playback interface into subtitle text that can be selected, for example by word selection on the screen. After the subtitle text is obtained, it is played at a predetermined position on the playback interface; it may be overlaid on the display area of the original subtitles or displayed elsewhere. Optionally, the subtitle text is played in synchronization with the video so that the user can watch and read at the same time.
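One way such a background component might expose selectable subtitle text is sketched below; the cue structure and the click-to-word mapping are assumptions, since the embodiment does not prescribe a particular subtitle format.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Cue:
    start: float  # seconds from the beginning of the video
    end: float
    text: str     # subtitle text shown during this interval


def current_cue(cues: List[Cue], playback_time: float) -> Optional[Cue]:
    """Return the subtitle text that should be overlaid at this moment."""
    for cue in cues:
        if cue.start <= playback_time <= cue.end:
            return cue
    return None


def word_at(cue: Cue, char_index: int) -> str:
    """Map a click position inside the overlaid text to a single word,
    so that the click can be turned into an identification instruction."""
    offset = 0
    for word in cue.text.split():
        if offset <= char_index < offset + len(word):
            return word
        offset += len(word) + 1  # skip the separating space
    return ""


cues = [Cue(12.0, 15.5, "An apple a day keeps the doctor away")]
cue = current_cue(cues, 13.2)
if cue:
    print(word_at(cue, 3))  # a click on character 3 selects "apple"
```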
As an optional embodiment, before the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved, prompt information is output, where the prompt information asks whether to save the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified; when a save operation responding to the prompt information is received, the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved.
While the user is watching the video, if there is an object to be identified, the matching recognition result expressed in the second language is looked up. This information may be saved to a predetermined location according to the user's needs, for example to the word folder. Before saving, prompt information is output asking the user whether to save the object to be identified, the recognition result, and the information on the corresponding video segment; after a save operation issued by the user in response to the prompt is received, the object to be identified, the recognition result, and the video segment are saved.
As an optional embodiment, after the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved, a query instruction is received, where the query instruction is used to query the saved object to be identified expressed in the first language; in response to the query instruction, the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are queried to obtain a query result; and the video segment corresponding to the object to be identified in the query result is played.
After saving, all saved semantic entries can be queried according to a query instruction. For example, all semantic entries are stored in the word folder; when the user clicks the object to be identified in one of the entries, the query result (the object to be identified, the recognition result, and the information on the corresponding video segment) is obtained, and the video segment corresponding to the object to be identified in the query result is played. Because the corresponding video segment can be viewed when browsing the word folder, studying the words in the word folder together with the video segments makes it easier for the user to learn new words in combination with the video, improves the flexibility of operation, and makes learning more convenient.
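Querying the word folder and replaying the stored segment could look like the following sketch; the in-memory folder layout and entry fields are assumed for illustration.

```python
from typing import Dict, Optional

# assumed in-memory word folder: word -> saved entry
WordFolder = Dict[str, dict]


def query(folder: WordFolder, word: str) -> Optional[dict]:
    """Look up a saved entry (word, definition, clip info) by word."""
    return folder.get(word)


def on_word_clicked(folder: WordFolder, word: str) -> None:
    """Handle a click on a word in the word folder: show the definition
    and replay the stored clip around the word's position."""
    entry = query(folder, word)
    if entry is None:
        print(f"'{word}' is not in the word folder")
        return
    print(f"{word}: {entry['definition']} (from {entry['video_title']})")
    print(f"replaying {entry['video_url']} "
          f"from {entry['clip_start']}s to {entry['clip_end']}s")


folder: WordFolder = {
    "apple": {"definition": "苹果", "video_title": "Film A",
              "video_url": "https://example.com/film-a",
              "clip_start": 120.0, "clip_end": 130.0},
}
on_word_clicked(folder, "apple")
```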
As an optional embodiment, before the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved, the current login account is obtained. If the current login account is obtained, the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved into the storage region corresponding to the current login account. If no current login account is obtained, a login prompt interface is output, login information entered on the login prompt interface is received, and the account is logged in according to the login information.
Before the information on the object to be identified, the recognition result, and the corresponding video segment is saved, an account must first be logged in and the current login account obtained. If the current login account can be obtained, the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the corresponding video segment are saved into the word folder corresponding to the login account. If the current login account cannot be obtained, a login prompt interface is output; the user enters login information on the login prompt interface, and after the login information is received, the account is logged in according to it. Storing the entries under the login account makes later review convenient: when switching terminals, the user only needs to log in to the account to view all the semantic entries stored under it.
As an optional embodiment, after the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved, an export instruction is received, where the export instruction indicates that the saved object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are to be exported to a predetermined path; in response to the export instruction, the saved object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified indicated by the export instruction are exported to the predetermined path in a predetermined format.
After the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are stored, the stored information can be exported so that the user can review or share it conveniently. After the export instruction is received, the saved object to be identified, the recognition result, and the information on the corresponding video segment are exported to the predetermined path in a predetermined format. The predetermined format may be a document format such as Word or Excel; when exporting, the information on the video segment may include the video title, the link, and so on.
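A minimal sketch of such an export follows, writing the saved words, definitions, film names, and links to a CSV document at a chosen path; the column layout is an assumption.

```python
import csv
from pathlib import Path
from typing import Iterable


def export_word_folder(entries: Iterable[dict], path: str) -> Path:
    """Export saved words, definitions, film names, and links to a CSV file
    that can be opened in Excel or shared with friends."""
    out = Path(path)
    with out.open("w", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerow(["word", "definition", "film", "position link"])
        for e in entries:
            writer.writerow([e["word"], e["definition"],
                             e["video_title"], e["video_url"]])
    return out


# Example: export two entries to the user's chosen path.
entries = [
    {"word": "apple", "definition": "苹果",
     "video_title": "Film A", "video_url": "https://example.com/film-a#t=120"},
    {"word": "river", "definition": "河流",
     "video_title": "Film B", "video_url": "https://example.com/film-b#t=300"},
]
export_word_folder(entries, "word_folder_export.csv")
```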
The technical solution of the embodiments of the invention can increase users' enthusiasm for learning a foreign language with viewing software and raise user activity in the video software, while also attracting foreign-language learners to use video software with this function, bringing in new users. The object to be identified is not limited to a word; it may also be a phrase or a sentence. Since replaying the video may sometimes consume the user's data traffic, the video segment containing the word may also be saved directly, so that when the user clicks the word the saved segment is played directly without consuming traffic.
The invention also provides a preferred embodiment. The technical solution of the embodiments of the invention is described below with reference to this preferred embodiment.
The embodiments of the invention can be applied to video software (such as Tencent Video), with a word-folder function added to the personal center. Thereafter, while watching a foreign-language video, clicking the subtitles pops up a dialog asking whether to save the word; confirming automatically stores the word in the word folder together with a position link. When the word folder is opened, clicking a word in it returns to the video position where the word appears and plays the video segment at that position, for example the segment 5 seconds before and after the word. After the word folder is bound to a personal account, it can be viewed interchangeably on PC and mobile, allowing the user to review anytime and anywhere. In addition, the user can export the words, definitions, and film names into a document through the export button in the word folder, making it easy to share with friends.
Fig. 3 is a schematic diagram of an exported document according to an embodiment of the invention. As shown in Fig. 3, the word folder in the personal center displays the words saved by the user (each English word carries a link, and clicking a word jumps to the corresponding position of the video it came from). For example, film A contains two words, English X and English Y; for each word the corresponding definition, definition X and definition Y, is displayed on the right. Film B contains one word, English Z, whose corresponding definition is definition Z.
Fig. 4 is a flow chart of a semantic recognition method according to an embodiment of the invention. As shown in Fig. 4, the method includes the following steps:
Step S401: the user clicks word A in the video subtitles.
Step S402: check whether the user is logged in to an account.
Step S403: guide the user to register or log in. If the user is not logged in to an account, a login interface is output to guide the user to register or log in.
Step S404: the system saves the account information, the link to the current video position, and word A to the backend database, and sends word A to the backend word library for matching. If the user is logged in, the above information is saved into the backend database, and the definition corresponding to word A can be looked up in the backend word library.
Step S405: return the matched definition of word A and save it into the database. After the definition corresponding to word A is found, the definition is returned to the user and saved in the database.
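The flow of steps S401 to S405 could be sketched as a single backend handler; the word library and database objects below are stand-ins assumed purely for illustration.

```python
from typing import Dict, List, Optional

WORD_LIBRARY: Dict[str, str] = {"apple": "苹果"}   # assumed backend word library
DATABASE: List[dict] = []                          # assumed backend database


def handle_subtitle_click(account: Optional[str], word: str,
                          position_link: str) -> Optional[str]:
    """S401-S405: on a subtitle click, require login, save the record,
    match the word against the library, and return the definition."""
    if account is None:
        return None  # S403: guide the user to register or log in
    definition = WORD_LIBRARY.get(word.lower())     # S404: match in the library
    DATABASE.append({                               # S404: save to the database
        "account": account,
        "word": word,
        "position_link": position_link,
        "definition": definition,
    })
    return definition                               # S405: return the definition


print(handle_subtitle_click("user-1", "apple",
                            "https://example.com/film-a#t=120"))
```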
The embodiments of the invention can be applied to ordinary PC or mobile video players, the hardware environment being a computer or a touch-screen phone. On the front end, a click function is added to the subtitles during video playback, and a word folder that can display words, definitions, and film names is added to the personal center. Clicking a word in the word folder automatically jumps to the position in the film where the word appears. The word folder also has an export button; clicking it exports all the words, definitions, and film names in the folder to a document and saves it, and the user can then share the document with friends.
Fig. 5 is a flow chart of another semantic recognition method according to an embodiment of the invention. As shown in Fig. 5, the method includes the following steps:
Step S501: the user opens the word folder.
Step S502: check whether the user is logged in to an account.
Step S503: guide the user to register or log in. If the user is not logged in, prompt information is output asking the user to log in or register an account.
Step S504: if the user is logged in, the account information is sent to the backend database, the corresponding word records are matched and all displayed (except the position links); when the user clicks a word, the matched position is retrieved from the backend and returned.
The backend may be provided with a foreign-language word library. When the user clicks a word in the subtitles, the backend obtains the word and its position link in the film (that is, which frame of which film) and searches the word library; after the matching definition is found, the word, the definition, and the film name are returned to the word folder in the user's personal center. At the same time, the user information, the word, and the position in the film where it appears are saved into the backend database. When the user clicks a word in the word folder, the backend matches the user and word information in the user library, finds the film position where the word appears, and automatically plays back the segment containing the word.
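Resolving a later click on a word in the word folder back to the saved film position, against records of the shape saved above, might look like the following sketch; the record layout is again an assumption.

```python
from typing import List, Optional


def find_position(records: List[dict], account: str, word: str) -> Optional[str]:
    """Match the user and word in the saved records and return the position
    link of the film segment where the word appeared."""
    for record in records:
        if record["account"] == account and record["word"] == word:
            return record["position_link"]
    return None


records = [{"account": "user-1", "word": "apple",
            "position_link": "https://example.com/film-a#t=120",
            "definition": "苹果"}]
link = find_position(records, "user-1", "apple")
if link:
    print(f"replaying segment at {link}")  # client plays back the saved clip
```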
This function is bound to the user's account and can be shared synchronously between the PC client and the mobile client.
The semantic recognition method of the embodiments of the invention is implemented through a video word folder embedded in the toolbar of the video software. When the user clicks a word appearing in the subtitles of a foreign-language video, the word is put into the user's own word folder, together with an automatically attached link to the position where the word appears in the film. The word folder can be bound to the user's account; the next time the user logs in, clicking a word in the folder directly plays back the segment of the film where it appears (for example, five seconds before and after the word's position), which is convenient for review.
The video word folder provides a way for users to record new words while watching videos, with the segment and position link where each word appears attached, so the video segment can be replayed and reviewed at any time. This improves the efficiency with which users memorize words and effectively combines video watching with foreign-language learning.
It should be noted that, for brevity, the foregoing method embodiments are all described as a series of combined actions, but those skilled in the art should understand that the invention is not limited by the order of actions described, because according to the invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the invention.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the invention, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the invention.
Embodiment 2
According to an embodiment of the invention, a semantic recognition device for implementing the above semantic recognition method is also provided. Fig. 6 is a schematic diagram of an optional semantic recognition device according to an embodiment of the invention. As shown in Fig. 6, the device may include:
a first receiving unit 10, configured to receive an identification instruction, where the identification instruction requests recognition of an object to be identified in the subtitles played on the playback interface of a video, the object to be identified being expressed in a first language;
a first acquisition unit 20, configured to obtain the object to be identified from the subtitles;
a first searching unit 30, configured to look up a recognition result, expressed in a second language, that matches the object to be identified;
a first storage unit 40, configured to save the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified, where the information on the video segment is used to play the video segment after a play instruction is received; and
a first output unit 50, configured to output the recognition result expressed in the second language.
It should be noted that, in this embodiment, the first receiving unit 10 may be configured to perform step S202 in Embodiment 1 of the present application, the first acquisition unit 20 may be configured to perform step S204, the first searching unit 30 may be configured to perform step S206, the first storage unit 40 may be configured to perform step S208, and the first output unit 50 may be configured to perform step S210.
It should be noted here that the examples and application scenarios implemented by the above modules are the same as those of the corresponding steps, but are not limited to the disclosure of Embodiment 1. It should be noted that the above modules, as part of the device, may run in the hardware environment shown in Fig. 1 and may be implemented by software or by hardware.
Through the above modules, the technical problem that it is inconvenient for users to learn new words while watching videos can be solved, achieving the technical effect of improving the flexibility of operation when users learn new words while watching videos.
Optionally, the device further includes: a second receiving unit, configured to receive a play instruction after the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved; a second searching unit, configured to look up the information on the video segment in response to the play instruction; and a first playback unit, configured to play the video segment according to the information on the video segment.
Optionally, the first playback unit is configured to perform one of the following: when the information on the video segment is a position link, access the position link to obtain the video segment and play it, where the position link indicates the position of the video segment within the video; when the information on the video segment indicates the content of the video segment, play that content.
Optionally, the device further includes: a processing unit, configured to process, by a predetermined component, the subtitles played on the playback interface of the video before the object to be identified is obtained from the subtitles, to obtain subtitle text corresponding to the video; and a second playback unit, configured to play the subtitle text at a predetermined position on the playback interface of the video.
Optionally, the device further includes: a second output unit, configured to output prompt information before the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved, where the prompt information asks whether to save the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified; and a second storage unit, configured to save, when a save operation responding to the prompt information is received, the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified.
Optionally, the device further includes: a third receiving unit, configured to receive a query instruction after the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved, where the query instruction is used to query the saved object to be identified expressed in the first language; a query unit, configured to query, in response to the query instruction, the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified, to obtain a query result; and a third playback unit, configured to play the video segment corresponding to the object to be identified in the query result.
Optionally, the device further includes: a second acquisition unit, configured to obtain the current login account before the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved; a third storage unit, configured to save, when the current login account is obtained, the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified into the storage region corresponding to the current login account; a third output unit, configured to output a login prompt interface when no current login account is obtained; a fourth receiving unit, configured to receive login information entered on the login prompt interface; and a login unit, configured to log in to the account according to the login information.
Optionally, the device further includes: a fifth receiving unit, configured to receive an export instruction after the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are saved, where the export instruction indicates that the saved object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are to be exported to a predetermined path; and another storage unit, configured to export, in response to the export instruction, the saved object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified indicated by the export instruction to the predetermined path in a predetermined format.
It should be noted here that the examples and application scenarios implemented by the above modules are the same as those of the corresponding steps, but are not limited to the disclosure of Embodiment 1. It should be noted that the above modules, as part of the device, may run in the hardware environment shown in Fig. 1 and may be implemented by software or by hardware, where the hardware environment includes a network environment.
Embodiment 3
According to an embodiment of the invention, a server or terminal for implementing the above semantic recognition method is also provided.
Fig. 7 is a structural block diagram of a terminal according to an embodiment of the invention. As shown in Fig. 7, the terminal may include: one or more processors 201 (only one is shown in the figure), a memory 203, and a transmission device 205 (such as the sending device in the above embodiment). As shown in Fig. 7, the terminal may further include an input/output device 207.
The memory 203 may be used to store software programs and modules, such as the program instructions/modules corresponding to the semantic recognition method and device in the embodiments of the invention. The processor 201 runs the software programs and modules stored in the memory 203 to perform various functional applications and data processing, that is, to implement the above semantic recognition method. The memory 203 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 203 may further include memory set remotely relative to the processor 201, and the remote memory may be connected to the terminal through a network. Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission device 205 is used to receive or send data via a network and may also be used for data transfer between the processor and the memory. Specific examples of the network may include a wired network and a wireless network. In one example, the transmission device 205 includes a network interface controller (NIC), which can be connected to other network devices and a router through a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device 205 is a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
Specifically, the memory 203 is used to store an application program.
The processor 201 may call, through the transmission device 205, the application program stored in the memory 203 to perform the following steps: receive an identification instruction, where the identification instruction requests recognition of an object to be identified in the subtitles played on the playback interface of a video, the object to be identified being expressed in a first language; obtain the object to be identified from the subtitles; look up a recognition result, expressed in a second language, that matches the object to be identified; save the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified, where the information on the video segment is used to play the video segment after a play instruction is received; and output the recognition result expressed in the second language.
The processor 201 is further configured to perform the following steps: receive a play instruction; look up the information on the video segment in response to the play instruction; and play the video segment according to the information on the video segment.
The processor 201 is further configured to perform the following steps: when the information on the video segment is a position link, access the position link to obtain the video segment and play it, where the position link indicates the position of the video segment within the video; when the information on the video segment indicates the content of the video segment, play that content.
The processor 201 is further configured to perform the following steps: process, by a predetermined component, the subtitles played on the playback interface of the video to obtain subtitle text corresponding to the video; and play the subtitle text at a predetermined position on the playback interface of the video.
The processor 201 is further configured to perform the following steps: output prompt information, where the prompt information asks whether to save the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified; and, when a save operation responding to the prompt information is received, save the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified.
The processor 201 is further configured to perform the following steps: receive a query instruction, where the query instruction is used to query the saved object to be identified expressed in the first language; query, in response to the query instruction, the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified, to obtain a query result; and play the video segment corresponding to the object to be identified in the query result.
The processor 201 is further configured to perform the following steps: obtain the current login account; when the current login account is obtained, save the object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified into the storage region corresponding to the current login account; when no current login account is obtained, output a login prompt interface; receive login information entered on the login prompt interface; and log in to the account according to the login information.
The processor 201 is further configured to perform the following steps: receive an export instruction, where the export instruction indicates that the saved object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified are to be exported to a predetermined path; and, in response to the export instruction, export the saved object to be identified expressed in the first language, the recognition result expressed in the second language, and the information on the video segment corresponding to the object to be identified indicated by the export instruction to the predetermined path in a predetermined format.
With the embodiments of the invention, after an identification instruction is received, the object to be identified is obtained from the subtitles, a recognition result expressed in the second language and matching the object to be identified is looked up, the object to be identified, the recognition result, and the information on the corresponding video segment are saved, and the recognition result expressed in the second language is output. The object to be identified, the recognition result, and the corresponding video segment are thus saved together, so that when reviewing a new word the user can view the corresponding video segment. This achieves the technical effect of allowing the user to learn new words together with the corresponding video segments, and thereby solves the technical problem that it is inconvenient for users to learn new words while watching videos.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in Embodiment 1 and Embodiment 2 above, and details are not repeated here.
Those of ordinary skill in the art can understand that the structure shown in Fig. 7 is only illustrative. The terminal may be a terminal device such as a smartphone (for example an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD. Fig. 7 does not limit the structure of the above electronic device. For example, the terminal may further include more or fewer components than shown in Fig. 7 (such as a network interface or a display device), or have a configuration different from that shown in Fig. 7.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by instructing hardware related to the terminal device through a program. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and the like.
Embodiment 4
An embodiment of the present invention further provides a storage medium. Optionally, in this embodiment, the storage medium may be used to store program code for performing the semantic recognition method.
Optionally, in this embodiment, the storage medium may be located on at least one of multiple network devices in the network shown in the above embodiments.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps (a brief illustrative sketch follows the list of steps):
S1: receiving an identification instruction, where the identification instruction is used to request identification of an object to be identified in the subtitles played on a playback interface of a video, and the object to be identified is represented in a first language;
S2: obtaining the object to be identified in the subtitles;
S3: searching for a recognition result represented in a second language that matches the object to be identified;
S4: saving the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified, where the information of the video segment is used to play the video segment after a play instruction is received;
S5: outputting the recognition result represented in the second language.
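As a minimal Python sketch of steps S1-S5, one possible control flow is shown below; the `instruction`, `subtitles`, `dictionary`, and `store` objects and their methods are assumptions introduced for the example and are not defined by this embodiment.

# Minimal sketch of steps S1-S5; the objects and their methods are illustrative assumptions.
def handle_identification(instruction, subtitles, dictionary, store):
    # S1: the identification instruction carries the word selected in the playing subtitles.
    word = instruction["selected_text"]               # object to be identified (first language)
    # S2: obtain the object to be identified from the current subtitle text.
    if word not in subtitles.current_text():
        return None
    # S3: search for the matching recognition result represented in the second language.
    translation = dictionary.lookup(word)
    # S4: save the word, its translation, and the info needed to replay this video segment.
    segment_info = subtitles.current_segment_info()   # e.g. a position link or the segment content
    store.save(word=word, translation=translation, segment_info=segment_info)
    # S5: output the recognition result represented in the second language.
    return translation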
Optionally, the storage medium is further configured to store program code for performing the following steps: receiving the play instruction; searching for the information of the video segment in response to the play instruction; and playing the video segment according to the information of the video segment.
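A possible shape of this play path, sketched under the assumption of a simple `store` with a `find` method and a `player` with a `play_segment` method (both names are illustrative):

# Illustrative only: store.find() and player.play_segment() are assumed helpers.
def handle_play(play_instruction, store, player):
    """Search for the saved segment info of the requested word and play the segment."""
    entry = store.find(play_instruction["word"])      # look up the info of the video segment
    if entry is not None:
        player.play_segment(entry["segment_info"])    # play according to that info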
Optionally, the storage medium is further configured to store program code for performing the following steps: when the information of the video segment is a position link, accessing the position link to obtain the video segment and playing the video segment, where the position link is used to indicate the position of the video segment in the video; and when the information of the video segment is used to indicate the content of the video segment, playing the content of the video segment.
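The two branches might be sketched as follows; the field names ("type", "url", "start", "end", "content") are assumed names for the segment information, not terms defined by the embodiment.

# Illustrative only: the segment-info field names are assumptions.
def play_segment(player, segment_info):
    if segment_info["type"] == "position_link":
        # The link indicates where the segment sits in the full video: fetch it, then play it.
        segment = player.fetch(segment_info["url"],
                               start=segment_info["start"],
                               end=segment_info["end"])
        player.play(segment)
    elif segment_info["type"] == "content":
        # The info already carries the segment content, so play it directly.
        player.play(segment_info["content"])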
Optionally, the storage medium is further configured to store program code for performing the following steps: processing, by a predetermined component, the subtitles played on the playback interface of the video to obtain subtitle text corresponding to the video; and playing the subtitle text at a predetermined position on the playback interface of the video.
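A rough sketch of this subtitle-preparation step, where `parse_subtitle_stream` merely stands in for the predetermined component and the rendering position is an assumed example value:

# Illustrative only: parse_subtitle_stream() stands in for the predetermined component.
def parse_subtitle_stream(track):
    # Stand-in processing step: join the cue texts of the subtitle track.
    return "\n".join(cue.text for cue in track.cues)

def show_subtitles(video, player):
    """Obtain subtitle text for the video and render it at a predetermined position."""
    subtitle_text = parse_subtitle_stream(video.subtitle_track)
    player.render_text(subtitle_text, position="bottom_center")  # assumed example position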
Optionally, the storage medium is further configured to store program code for performing the following steps: outputting prompt information, where the prompt information is used to prompt saving of the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified; and, when a save operation responding to the prompt information is received, saving the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified.
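One way this prompt-then-save flow could look, assuming illustrative `ui` and `store` interfaces:

# Illustrative only: ui.show_prompt() and store.save() are assumed interfaces.
def prompt_and_save(ui, store, word, translation, segment_info):
    """Prompt the user to save the new word; save only when the save operation is confirmed."""
    confirmed = ui.show_prompt(f"Save '{word}' together with its translation and video segment?")
    if confirmed:  # a save operation was received in response to the prompt information
        store.save(word=word, translation=translation, segment_info=segment_info)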
Optionally, the storage medium is further configured to store program code for performing the following steps: receiving a query instruction, where the query instruction is used to query the saved object to be identified represented in the first language; querying, in response to the query instruction, the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified, to obtain a query result; and playing the video segment corresponding to the object to be identified in the query result.
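A sketch of the query path under the same assumed `store` and `player` interfaces:

# Illustrative only: store.query() and player.play_segment() are assumed helpers.
def handle_query(query_instruction, store, player):
    """Query a saved word, play its video segment, and return the query result."""
    result = store.query(word=query_instruction["word"])   # word, translation and segment info
    if result is not None:
        player.play_segment(result["segment_info"])        # play the segment from the query result
    return result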
Optionally, the storage medium is further configured to store program code for performing the following steps: obtaining a current login account; in the case where the current login account is obtained, saving the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified into a storage region corresponding to the current login account; in the case where the current login account is not obtained, outputting a login prompt interface; receiving login information input at the login prompt interface; and logging in to an account according to the login information.
Optionally, the storage medium is further configured to store program code for performing the following steps: receiving an export instruction, where the export instruction is used to instruct that the saved object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified be exported to a predefined path; and, in response to the export instruction, exporting the saved object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified, as indicated by the export instruction, to the predefined path in a predetermined format.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in Embodiment 1 and Embodiment 2 above; details are not repeated here.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above embodiments of the present invention are described for illustration only and do not represent the relative merits of the embodiments.
If the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the above computer-readable storage medium. Based on this understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a division of logical functions, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections between units or modules through some interfaces, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above descriptions are merely preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (16)

1. A semantic recognition method, characterized by comprising:
receiving an identification instruction, wherein the identification instruction is used to request identification of an object to be identified in subtitles played on a playback interface of a video, and the object to be identified is represented in a first language;
obtaining the object to be identified in the subtitles;
searching for a recognition result represented in a second language that matches the object to be identified;
saving the object to be identified represented in the first language, the recognition result represented in the second language, and information of a video segment corresponding to the object to be identified, wherein the information of the video segment is used to play the video segment after a play instruction is received; and
outputting the recognition result represented in the second language.
2. The method according to claim 1, characterized in that, after the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified are saved, the method further comprises:
receiving the play instruction;
searching for the information of the video segment in response to the play instruction; and
playing the video segment according to the information of the video segment.
3. The method according to claim 2, characterized in that playing the video segment according to the information of the video segment comprises one of the following:
when the information of the video segment is a position link, accessing the position link to obtain the video segment and playing the video segment, wherein the position link is used to indicate a position of the video segment in the video; and
when the information of the video segment is used to indicate content of the video segment, playing the content of the video segment.
4. The method according to claim 1, characterized in that, before the object to be identified in the subtitles is obtained, the method further comprises:
processing, by a predetermined component, the subtitles played on the playback interface of the video to obtain subtitle text corresponding to the video; and
playing the subtitle text at a predetermined position on the playback interface of the video.
5. The method according to claim 1, characterized in that, before the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified are saved, the method further comprises:
outputting prompt information, wherein the prompt information is used to prompt saving of the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified; and
when a save operation responding to the prompt information is received, saving the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified.
6. The method according to claim 5, characterized in that, after the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified are saved, the method further comprises:
receiving a query instruction, wherein the query instruction is used to query the saved object to be identified represented in the first language;
querying, in response to the query instruction, the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified, to obtain a query result; and
playing the video segment corresponding to the object to be identified in the query result.
7. The method according to claim 1, characterized in that, before the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified are saved, the method further comprises:
obtaining a current login account;
in the case where the current login account is obtained, saving the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified into a storage region corresponding to the current login account;
in the case where the current login account is not obtained, outputting a login prompt interface;
receiving login information input at the login prompt interface; and
logging in to an account according to the login information.
8. The method according to claim 1, characterized in that, after the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified are saved, the method further comprises:
receiving an export instruction, wherein the export instruction is used to instruct that the saved object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified be exported to a predefined path; and
in response to the export instruction, exporting the saved object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified, as indicated by the export instruction, to the predefined path in a predetermined format.
9. A semantic recognition apparatus, characterized by comprising:
a first receiving unit, configured to receive an identification instruction, wherein the identification instruction is used to request identification of an object to be identified in subtitles played on a playback interface of a video, and the object to be identified is represented in a first language;
a first obtaining unit, configured to obtain the object to be identified in the subtitles;
a first searching unit, configured to search for a recognition result represented in a second language that matches the object to be identified;
a first storage unit, configured to save the object to be identified represented in the first language, the recognition result represented in the second language, and information of a video segment corresponding to the object to be identified, wherein the information of the video segment is used to play the video segment after a play instruction is received; and
a first output unit, configured to output the recognition result represented in the second language.
10. The apparatus according to claim 9, characterized in that the apparatus further comprises:
a second receiving unit, configured to receive the play instruction after the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified are saved;
a second searching unit, configured to search for the information of the video segment in response to the play instruction; and
a first playing unit, configured to play the video segment according to the information of the video segment.
11. The apparatus according to claim 10, characterized in that the first playing unit is configured to perform one of the following:
when the information of the video segment is a position link, accessing the position link to obtain the video segment and playing the video segment, wherein the position link is used to indicate a position of the video segment in the video; and
when the information of the video segment is used to indicate content of the video segment, playing the content of the video segment.
12. The apparatus according to claim 9, characterized in that the apparatus further comprises:
a processing unit, configured to process, by a predetermined component, the subtitles played on the playback interface of the video before the object to be identified in the subtitles is obtained, to obtain subtitle text corresponding to the video; and
a second playing unit, configured to play the subtitle text at a predetermined position on the playback interface of the video.
13. The apparatus according to claim 9, characterized in that the apparatus further comprises:
a second output unit, configured to output prompt information before the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified are saved, wherein the prompt information is used to prompt saving of the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified; and
a second storage unit, configured to save, when a save operation responding to the prompt information is received, the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified.
14. The apparatus according to claim 13, characterized in that the apparatus further comprises:
a third receiving unit, configured to receive a query instruction after the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified are saved, wherein the query instruction is used to query the saved object to be identified represented in the first language;
a query unit, configured to query, in response to the query instruction, the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified, to obtain a query result; and
a third playing unit, configured to play the video segment corresponding to the object to be identified in the query result.
15. The apparatus according to claim 9, characterized in that the apparatus further comprises:
a second obtaining unit, configured to obtain a current login account before the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified are saved;
a third storage unit, configured to save, in the case where the current login account is obtained, the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified into a storage region corresponding to the current login account;
a third output unit, configured to output a login prompt interface in the case where the current login account is not obtained;
a fourth receiving unit, configured to receive login information input at the login prompt interface; and
a login unit, configured to log in to an account according to the login information.
16. The apparatus according to claim 9, characterized in that the apparatus further comprises:
a fifth receiving unit, configured to receive an export instruction after the object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified are saved, wherein the export instruction is used to instruct that the saved object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified be exported to a predefined path; and
another storage unit, configured to export, in response to the export instruction, the saved object to be identified represented in the first language, the recognition result represented in the second language, and the information of the video segment corresponding to the object to be identified, as indicated by the export instruction, to the predefined path in a predetermined format.
CN201710038438.2A 2017-01-16 2017-01-16 Method for recognizing semantics and device Active CN107071554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710038438.2A CN107071554B (en) 2017-01-16 2017-01-16 Method for recognizing semantics and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710038438.2A CN107071554B (en) 2017-01-16 2017-01-16 Method for recognizing semantics and device

Publications (2)

Publication Number Publication Date
CN107071554A true CN107071554A (en) 2017-08-18
CN107071554B CN107071554B (en) 2019-02-26

Family

ID=59598754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710038438.2A Active CN107071554B (en) 2017-01-16 2017-01-16 Method for recognizing semantics and device

Country Status (1)

Country Link
CN (1) CN107071554B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040152054A1 (en) * 2003-01-30 2004-08-05 Gleissner Michael J.G. System for learning language through embedded content on a single medium
CN2772159Y (en) * 2005-01-20 2006-04-12 英业达股份有限公司 Caption translating device
CN104081784A (en) * 2012-02-10 2014-10-01 索尼公司 Information processing device, information processing method, and program
CN105190678A (en) * 2013-03-15 2015-12-23 美介摩公司 Language learning environment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110620960A (en) * 2018-06-20 2019-12-27 北京优酷科技有限公司 Video subtitle processing method and device
CN110620960B (en) * 2018-06-20 2022-01-25 阿里巴巴(中国)有限公司 Video subtitle processing method and device
CN109348145A (en) * 2018-09-14 2019-02-15 上海连尚网络科技有限公司 The method and apparatus of association barrage is generated based on subtitle
CN109348145B (en) * 2018-09-14 2020-11-24 上海连尚网络科技有限公司 Method and device for generating associated bullet screen based on subtitle and computer readable medium
WO2021136334A1 (en) * 2019-12-31 2021-07-08 阿里巴巴集团控股有限公司 Video generating method and apparatus, electronic device, and computer readable storage medium
CN112163433A (en) * 2020-09-29 2021-01-01 北京字跳网络技术有限公司 Key vocabulary matching method and device, electronic equipment and storage medium
CN112163103A (en) * 2020-09-29 2021-01-01 北京字跳网络技术有限公司 Method, device, electronic equipment and storage medium for searching target content
CN112163433B (en) * 2020-09-29 2022-04-05 北京字跳网络技术有限公司 Key vocabulary matching method and device, electronic equipment and storage medium
CN115186655A (en) * 2022-07-06 2022-10-14 重庆软江图灵人工智能科技有限公司 Character semantic recognition method, system, medium and device based on deep learning

Also Published As

Publication number Publication date
CN107071554B (en) 2019-02-26

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210924

Address after: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors

Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.

Address before: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

TR01 Transfer of patent right