CN103729476A - Method and system for correlating contents according to environmental state - Google Patents

Method and system for correlating contents according to environmental state

Info

Publication number
CN103729476A
CN103729476A
Authority
CN
China
Prior art keywords
content
ambient condition
module
current
retrieval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410037096.9A
Other languages
Chinese (zh)
Inventor
王玉娇
李则润
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201410037096.9A
Publication of CN103729476A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a method and a system for correlating content with an environmental state. The method comprises the following steps: acquiring the current environmental state through a sensor on an intelligent terminal; analyzing the environmental state to generate semantic information expressing its characteristics; constructing keywords for content retrieval from the semantic information; and finally retrieving content according to the keywords. With the method and system, the intelligent terminal's ability to detect the environmental state is fully utilized, so that the pushed content is correlated with the environmental state and better matches the situation the user is in, the information is more likely to be accepted by the user, and more value is delivered to the user.

Description

Method and system for associating content according to an environmental state
Technical field
The present application relates to a method and system for associating content with an environmental state.
Background art
At present there are various computer-aided learning systems, for example English-learning and spoken-language-learning software, in which scenarios or a predefined hierarchy of teaching content are built into the program; the user selects from the menus provided by the program and thereby obtains different teaching content.
In application stores one can find many language-tutoring applications. Whether they assist with vocabulary, listening or speaking, they all implement a fixed functional logic inside the program, with menus or icons as the entries to the different functions; the user clicks a menu or an icon to start the corresponding function and obtains the corresponding teaching content.
The drawback of this approach is that a function is executed only when the user explicitly selects it, the teaching content is presented to the user entirely passively, the numerous sensors on an intelligent terminal are not used to detect the environment the user is in, and the execution of the functions and the content obtained are disconnected from the environmental state the user is in.
Summary of the invention
The present application provides a method and system for associating content according to an environmental state, which can make full use of the sensors on an intelligent terminal and solve the problem that the content obtained is disconnected from the current environmental state.
According to a first aspect, the present application provides a method for associating content according to an environmental state, comprising: acquiring the current environmental state; analyzing the environmental state, generating semantic information expressing the characteristics of the environmental state, and constructing, from the semantic information, keywords for content retrieval; and retrieving content according to the keywords.
In one embodiment, when the environmental state is analyzed and the semantic information is generated, the expression forms used include a key-value form, a label form and a text-paragraph form.
In one embodiment, the method further comprises outputting the retrieved content.
In one embodiment, while the current content is being output, the current environmental state is detected in real time and it is judged whether the environmental state satisfies a switching condition; if it does, new keywords are generated for retrieving and outputting content related to both the current content and the current environmental state.
In one embodiment, the environmental state comprises one or more of geographic position, temperature, humidity, sound, image, video, orientation, motion state, human physiological parameters, place identifier, object identifier and user behavior characteristics; the content comprises one or more of audio, video, pictures, text, program code and control data.
According to a second aspect, the present application provides a system for associating content according to an environmental state, comprising: an acquisition module for acquiring the current environmental state; an analysis module for receiving the environmental state, analyzing it, generating semantic information expressing the characteristics of the environmental state, and constructing, from the semantic information, keywords for content retrieval; and a retrieval module for receiving the keywords and retrieving content according to them.
In one embodiment, when the analysis module analyzes the environmental state and generates the semantic information, the expression forms it uses include a key-value form, a label form and a text-paragraph form.
In one embodiment, the system further comprises an output module for outputting the content retrieved by the retrieval module.
In one embodiment, the acquisition module is further configured to detect the current environmental state in real time while the output module is outputting the current content; the system further comprises a judging module configured to judge whether the environmental state satisfies a switching condition and, if it does, to generate new keywords for the retrieval module to retrieve content related to both the current content and the current environmental state, and to control the output module to output that content.
In one embodiment, the environmental state comprises one or more of geographic position, temperature, humidity, sound, image, video, orientation, motion state, human physiological parameters, place identifier, object identifier and user behavior characteristics; the content comprises one or more of audio, video, pictures, text, program code and control data.
The present application thus provides a method and system for associating content according to an environmental state: the current environmental state is acquired through the sensors of an intelligent terminal, the environmental state is analyzed, semantic information expressing its characteristics is generated, keywords for content retrieval are constructed from the semantic information, and content is finally retrieved according to the keywords. The intelligent terminal's ability to detect the environmental state is therefore fully utilized, and the pushed content is correlated with the environmental state, so it better matches the situation the user is in, the information is more likely to be accepted by the user, and more value is delivered to the user.
Brief description of the drawings
Fig. 1 is a schematic flow chart of a method for associating content according to an environmental state in an embodiment of the present application;
Fig. 2 is a module diagram of a system for associating content according to an environmental state in an embodiment of the present application;
Fig. 3 is a schematic diagram of the organization of the content library in an embodiment of the present application;
Fig. 4 is a schematic flow chart of the module processing of the system for associating content according to an environmental state in an embodiment of the present application;
Fig. 5 is a module diagram of an online learning system based on smart glasses in one application of the present application;
Fig. 6 is a module diagram of an assisted learning system combining motion state and position in one application of the present application.
Detailed description of the embodiments
With the development of information technology, more and more intelligent terminals have entered people's lives. These terminals carry a rich set of sensor components and can detect the environmental state the device is in. For example, a smartphone has a microphone, a gravity sensor, an acceleration sensor, a position sensor, a camera and so on; a smart wristband has a gravity sensor, an acceleration sensor and so on. While the user uses these intelligent terminals, they can detect a series of environmental states, and this information reflects the user's state, preferences and so on.
If a computer-aided learning system can use the environmental state collected by the intelligent terminal to provide the user with teaching content related to that environment, it can offer a much richer experience: the user can learn content related to the surrounding environment anytime and anywhere, and is spared the trouble of selecting teaching content by hand, which improves learning efficiency.
The embodiments of the present application therefore provide a method and system for associating content according to an environmental state, which make full use of the sensors on the intelligent terminal and make the content obtained relevant to the current environmental state.
Some terms used in the embodiments of the present application are explained below.
Bluetooth: a radio technology supporting short-range communication between devices (generally within tens of meters), allowing wireless information exchange between numerous devices such as mobile phones, PDAs, wireless headsets, notebook computers and related peripherals.
GPS: Global Positioning System.
Wi-Fi: a brand of the Wi-Fi Alliance, used to denote the IEEE 802.11 series of standards.
Wi-Fi Direct: a standard issued by the Wi-Fi Alliance that allows devices in a wireless network to connect to each other directly, in a point-to-point manner, without going through a wireless router.
NFC: Near Field Communication, a contactless identification and interconnection technology that enables short-range wireless communication between mobile devices, consumer electronics, PCs and smart control tools.
RFID: Radio Frequency Identification.
Intelligent terminal: an electronic device that contains software programs and realizes specific functions by executing them. Common intelligent terminals include smartphones, personal digital assistants, tablet computers, PCs, smart glasses, smart watches, wristbands and smart accessories attached to other devices. An intelligent terminal detects the external environmental state through sensors; a sensor may be built into the terminal, or be physically separated from it and pass the collected information to the terminal by wireless or wired communication.
Environmental state: any external condition that the intelligent terminal can perceive, for example geographic position, temperature, humidity, sound, image, video, orientation, motion state, human physiological parameters, place identifiers, object identifiers and user behavior characteristics. The camera may, for instance, detect the user's smile or serious expression, the posture in which the terminal is held, or the sound the user makes; which application programs the user has used on the terminal and what was done in them can also be treated as part of the environmental state.
Short-range communication: communication between devices at a relatively short distance (for example within a few hundred meters), generally realized over a local area network or point to point, e.g. via Wi-Fi, Wi-Fi Direct, Bluetooth, NFC or RFID.
Teaching content: domain knowledge presented as multimedia content and software program code; this multimedia content and program code are called teaching content. After being downloaded to an intelligent terminal, the teaching content is interpreted and executed on the terminal. Teaching content is one concrete embodiment of "content" in the embodiments of the present application.
The present application is described below in further detail through embodiments, with reference to the accompanying drawings.
Referring to Fig. 1, the present embodiment provides a method for associating content according to an environmental state, comprising:
Step 101: acquire the current environmental state. The environmental state comprises one or more of geographic position, temperature, humidity, sound, image, video, orientation, motion state, human physiological parameters, place identifier, object identifier and user behavior characteristics.
Step 102: analyze the acquired environmental state and generate semantic information expressing its characteristics. In specific embodiments, the expression forms used in step 102 include a key-value form, a label form and a text-paragraph form.
Step 103: construct keywords for content retrieval from the generated semantic information.
Step 104: retrieve content according to the constructed keywords. The content comprises one or more of audio, video, pictures, text, program code and control data.
Step 105: output the retrieved content.
Step 106: while the retrieved content is being output in step 105, also detect the current environmental state in real time and judge whether the detected environmental state satisfies a switching condition; if it does, go to step 107, otherwise continue outputting the current content.
Step 107: when step 106 determines that the environmental state detected in real time satisfies the switching condition, generate new keywords for retrieving and outputting content related to both the current content and the current environmental state.
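As an illustration only, and not as the claimed implementation, the following Python sketch walks through steps 101 to 107 with toy stand-ins for the sensors, the analysis, the content library and the switching condition; all names and data in it are invented.

```python
# Toy, illustrative walk-through of steps 101-107; every component is a simplified stand-in.

TOY_LIBRARY = {
    ("dining", "table"): "audio: conversation over a meal",
    ("mall", "running"): "audio: dialogue about sports goods",
}

def acquire_state(frame):
    """Step 101: pretend the sensors produced this raw reading."""
    return frame

def analyze_state(raw):
    """Step 102: turn the raw reading into semantic labels (greatly simplified)."""
    return sorted(raw.get("labels", []))

def build_keywords(labels):
    """Step 103: use the labels directly as retrieval keywords."""
    return tuple(labels)

def retrieve(keywords):
    """Step 104: look the keywords up in the toy content library."""
    return TOY_LIBRARY.get(keywords)

def meets_switching_condition(raw):
    """Step 106: in this toy, switch when the user speaks during playback."""
    return raw.get("user_spoke", False)

# Steps 101-105: acquire, analyze, build keywords, retrieve and output.
current = retrieve(build_keywords(analyze_state(acquire_state({"labels": ["table", "dining"]}))))
print("output:", current)

# Steps 106-107: a new environmental state arrives while the content is playing.
new_raw = {"labels": ["running", "mall"], "user_spoke": True}
if meets_switching_condition(new_raw):
    switched = retrieve(build_keywords(analyze_state(new_raw)))
    print("switched to:", switched)
```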
Referring to Fig. 2, based on the above method the present embodiment also provides a corresponding system for associating content according to an environmental state, comprising an acquisition module 201, an analysis module 202, a retrieval module 203, an output module 204 and a judging module 205.
The acquisition module 201 is used to acquire the current environmental state. The environmental state comprises one or more of geographic position, temperature, humidity, sound, image, video, orientation, motion state, human physiological parameters, place identifier, object identifier and user behavior characteristics. In specific embodiments, the acquisition module 201 can also receive user input. The acquisition module 201 may be a keyboard, a touch screen, a microphone, a camera, a compass, a GPS receiver, an acceleration sensor, a temperature sensor and so on. The acquisition module 201 may comprise sensor hardware and the related software, or it may contain no sensor hardware and be a pure software program (commonly called a virtual sensor); for example, the content a user publishes in a social-network application also expresses the user's state and preferences and can be treated as an environmental state. Different sensors may reside in different physical entities and provide the collected raw environmental-state data to the analysis module 202 by wired or wireless means.
The analysis module 202 receives the environmental state, analyzes it, generates semantic information expressing its characteristics, and constructs, from the generated semantic information, keywords for content retrieval. In the present embodiment, when the analysis module 202 analyzes the environmental state and generates the semantic information, the expression forms it uses include a key-value form, a label form and a text-paragraph form. For example, a camera used as the sensor takes an image of several people having a meal around a dining table, and the image is then analyzed; depending on the expression form, different semantic information can be output. The key-value form might be "tone = warm; content = people; outline = irregular; number of people = 2; object = table; object = cup; place = XX restaurant, XX district, XX city". The label form might be "dining, man, woman, drinking". The text-paragraph form might be "a man and a woman are sitting side by side drinking cola, with a large tree, a lawn and a hedge in the background, at XX restaurant, XX district, XX city".
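Purely as an illustration, the three expression forms named above could be held as plain data structures like the following sketch, which encodes the restaurant-photo example; the field names are invented.

```python
# Illustrative only: the same analysed image expressed in the three forms mentioned above.

key_value_form = {
    "tone": "warm",
    "content": "people",
    "outline": "irregular",
    "number_of_people": 2,
    "objects": ["table", "cup"],
    "place": "XX restaurant, XX district, XX city",
}

label_form = ["dining", "man", "woman", "drinking"]

text_paragraph_form = (
    "A man and a woman are sitting side by side drinking cola; "
    "in the background there are a large tree, a lawn and a hedge; "
    "the place is XX restaurant, XX district, XX city."
)
```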
The retrieval module 203 receives the keywords constructed by the analysis module 202 and retrieves content according to them. Specifically, the retrieval module 203 can retrieve matching content from the inverted index of a content library. The content library has two parts, content and index; the content comprises one or more of audio, video, pictures, text, program code and control data, and the content library builds an inverted index according to the association between content and environmental states. The function of the inverted index is: given an environmental state (keywords) as input, it returns the matching teaching content.
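A minimal sketch of the inverted-index idea just described, with invented content identifiers and labels: the environmental-state keywords are the input, and the identifiers of matching content units come back, ranked by how many keywords they match.

```python
# Illustrative only: a tiny inverted index from environmental-state labels to content units.

content_units = {
    "unit_1": {"labels": {"dining", "table", "drink"}, "media": "1.mp3"},
    "unit_2": {"labels": {"running", "mall"}, "media": "2.mp3"},
}

# Build the inverted index: label -> set of content-unit ids.
inverted_index = {}
for unit_id, unit in content_units.items():
    for label in unit["labels"]:
        inverted_index.setdefault(label, set()).add(unit_id)

def lookup(keywords):
    """Return ids of content units matching at least one keyword, ranked by match count."""
    hits = {}
    for kw in keywords:
        for unit_id in inverted_index.get(kw, ()):
            hits[unit_id] = hits.get(unit_id, 0) + 1
    return sorted(hits, key=hits.get, reverse=True)

print(lookup(["table", "drink", "beverage"]))   # -> ['unit_1']
```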
The output module 204 outputs, i.e. presents, the content retrieved by the retrieval module. Different kinds of content are presented in different ways, for example: video or images are shown on a display, audio is played through a loudspeaker, a received electric signal is output as vibration, and code conforming to Web standards is executed in a browser engine. The output module 204 need not reside in a single physical entity; it may be split across several entities, for example using smart glasses as the display, a smart watch to produce vibration and a Bluetooth headset to output sound.
The acquisition module 201 is also used to detect the current environmental state in real time while the output module 204 is outputting the current content. The judging module 205 judges whether the environmental state detected in real time by the acquisition module 201 satisfies the switching condition; if it does, new keywords are generated for the retrieval module to retrieve content related to both the current content and the current environmental state, and the output module 204 is controlled to output that content.
Content is stored in the content library, which can be a component of the retrieval module 203. The organization of the content library is illustrated below, taking teaching content as an example.
Usually each carrier of teaching content has its own boundary and is isolated from the others: every textbook has its own chapter arrangement, every teaching disc has its own content organization and mode of use, the user can only browse inside one carrier, and different carriers cannot quickly provide related teaching content according to the same condition (an environmental state). With the content organization provided by the present embodiment, the monolithic, coarse-grained teaching content can be fragmented, the barriers between the various content carriers are removed, and the associated pieces of content can be extracted from many sources and presented to the user according to a specific theme (an environmental state).
Referring to Fig. 3, the content unit is the basic unit of content, and each content unit expresses a specific piece of information.
Each content unit can be associated with one or more environmental states, indicating the situations the unit is suitable for. An environmental state may be any information detectable by the sensors of the intelligent terminal: audio received by the microphone, an image or video taken by the camera, the device motion state detected by the acceleration sensor, and so on. When content is produced, the corresponding environmental-state samples are produced by sampling; the environmental states associated with a content unit can later be added, deleted or modified so that the match between content units and environmental states better fits the actual situation. The way a sample is produced differs greatly depending on the environmental state, and the same environmental state can be sampled along several dimensions; an image, for example, can be sampled in the hue dimension, the object-outline dimension, the object-size dimension, the material dimension and so on. The acquired current environmental state and the environmental state associated with content need to be compared in identical dimensions.
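The requirement to compare "in identical dimensions" could be realized along the lines of the following sketch, which compares the current state with a stored sample only in the dimensions both of them carry; the dimension names and the similarity rule are invented for the illustration.

```python
# Illustrative only: compare the current state with a stored sample dimension by dimension,
# using only the dimensions both sides provide.

stored_sample = {"hue": "warm", "outline": "round", "size": "small"}
current_state = {"hue": "warm", "outline": "round", "material": "glass"}

shared_dimensions = stored_sample.keys() & current_state.keys()
matches = sum(stored_sample[d] == current_state[d] for d in shared_dimensions)
similarity = matches / len(shared_dimensions) if shared_dimensions else 0.0

print(shared_dimensions, similarity)   # e.g. {'hue', 'outline'} 1.0
```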
Content unit 1 in Fig. 3 is associated with environmental states 1A and 1B, and content unit 2 with environmental states 2A and 2B. An environmental state can be described in two ways. One is an exact description: the category of the environmental state (which can, as needed, be further divided into subcategories) and attribute-value (key-value) pairs; one environmental state can contain several attribute-value pairs. The other is a fuzzy description: one or more word labels, or even a passage of text, reflecting the characteristics of the environment.
Content units can also be related to each other. As shown in Fig. 3, after content unit 1 finishes it can switch unconditionally to content unit 3, or switch conditionally (environmental state X has been received) to content unit 2. For example, the content is a segment of audio dialogue about having a meal. If, while the terminal is playing this audio, it detects that the user has joined the dialogue, that is, the user has spoken into the microphone, the user is considered interested in this content (the switching condition is met), and the terminal switches, immediately or after the audio finishes, to content unit 2, which is strongly related to the user's speech; otherwise content unit 3 is played after content unit 1 finishes. This is one concrete application of the judging module 205.
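A small sketch of the linkage in the Fig. 3 example (the unit names and the condition label are invented): each unit carries an unconditional successor and, optionally, a conditional one, and the conditional successor is chosen only if the corresponding environmental state was received during playback.

```python
# Illustrative only: choosing the next content unit as in the Fig. 3 example.

units = {
    "unit_1": {"next_unconditional": "unit_3",
               "next_on_condition": ("user_joined_dialogue", "unit_2")},
    "unit_2": {"next_unconditional": None, "next_on_condition": None},
    "unit_3": {"next_unconditional": None, "next_on_condition": None},
}

def next_unit(current_id, received_states):
    unit = units[current_id]
    condition = unit["next_on_condition"]
    if condition and condition[0] in received_states:
        return condition[1]            # switching condition met: play the related unit
    return unit["next_unconditional"]  # otherwise play the default successor

print(next_unit("unit_1", {"user_joined_dialogue"}))  # unit_2
print(next_unit("unit_1", set()))                     # unit_3
```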
Fig. 4 is a schematic diagram of the processing flow among the acquisition module 201, the analysis module 202, the retrieval module 203, the output module 204 and the judging module 205.
1.1 Acquire the environmental state: the acquisition module 201 senses the environmental state with its sensors and obtains the raw environmental state output by the sensors. For example, what the camera sensor detects is an image, and at this point it is not yet known what the image contains; what the microphone detects is a segment of audio, and it is not yet known what the audio contains. Some sensors may perform an initial analysis, so that the raw environmental state they output already carries some semantic information, such as the longitude and latitude output by a GPS module or the unique beacon identifier output by a Bluetooth beacon deployed in the environment; this kind of semantic information can be used directly as retrieval keywords. If the acquisition module 201 is equipped with several sensors, the raw environmental states of all of them can be collected.
1.2 Send: the acquisition module 201 sends the raw environmental state to the analysis module 202.
1.3 Analyze the environmental state: the analysis module 202 analyzes the raw environmental state, obtains the semantized environmental state (semantic information) and constructs keywords for content retrieval. Taking the collection of an image as an example, the semantized environmental state can be: category = image; Key1 = object outline, Value1 = round; Key2 = color, Value2 = orange. Optionally, the analysis module 202 can accumulate previously collected environmental states and send some or all of the accumulated information to the retrieval module 203. In specific embodiments, the priority of the semantic information can be determined by some method, for example by ordering it according to acquisition time, or by judging its priority from the user's feedback on the content output by the output module 204.
1.4 Send the keywords: the analysis module 202 sends the keywords obtained from the analysis to the retrieval module 203.
1.5 Retrieve content: according to the received keywords, the retrieval module 203 compares the current environmental state with the environmental-state samples in the content library and finds the content whose samples are identical or close. Because the message sent by the analysis module 202 can contain the environmental states of several sensors, the retrieval module 203 also supports comparing environmental states along several dimensions.
1.6 Send: the retrieval module 203 sends the retrieved content to the output module 204.
1.7 Output the content: the output module 204 identifies the content it receives and, depending on its type, uses the corresponding output device to output it.
1.8 Acquire the environmental state: while the output module 204 is outputting content, the acquisition module 201 acquires the current environmental state in real time.
1.9 Send: the environmental state acquired by the acquisition module 201 is analyzed by the analysis module 202 and then sent to the judging module 205.
2.0 Judge (the switching condition): the judging module 205 judges whether the current environmental state satisfies the switching condition.
2.1 Control the switch: if the judging module 205 determines that the current environmental state satisfies the switching condition, it produces new retrieval keywords and initiates a retrieval request to the retrieval module 203.
2.2 Retrieve content: according to the new keywords it receives, the retrieval module 203 retrieves content associated with both the current content and the environmental state.
2.3 Send: the retrieval module 203 sends the retrieved content to the output module 204.
2.4 Output the switched content: the output module 204 outputs the switched content. Note that the output module 204 can output the switched content after the current content finishes, or switch immediately after receiving the switching instruction from the judging module 205.
The method and system for associating content according to an environmental state provided by the present embodiment are described below with concrete application examples.
In concrete application examples, the acquisition module 201, the analysis module 202, the retrieval module 203, the output module 204 and the judging module 205 may all reside in the same physical entity or be distributed across several physical entities.
Application example 1: an online learning system based on smart glasses
Smart glasses have multiple sensors such as a camera and a microphone, as well as output devices such as earphones and a display, so an online learning system can be realized on the basis of smart glasses.
The characteristics of this online learning system are: a photo is taken with the camera, the content of the photo is analyzed and, as the environmental state, is used to retrieve associated knowledge, which is played back as speech (possibly in several languages, e.g. English or Chinese); during playback there may be sentences that pose a question for the user to answer, and if the user answers by voice, the knowledge provided by the system changes accordingly.
Referring to Fig. 5, this system comprises the smart glasses and a server located on the network side. The smart glasses comprise an acquisition module 501, an analysis module 502, a first communication module 503 and an output module 504. The server comprises a second communication module 505 and a retrieval module 506.
For the acquisition module 501, the camera can be used as the image-acquisition sensor and the microphone as the audio-acquisition sensor.
For the output module 504, the earphones can be used as the audio output device and the display as the visual output device.
The analysis module 502 comprises image recognition and speech recognition. The input to the image recognition is the detected image; it extracts the tone and the outline features of objects, can also recognize faces and classify objects, and outputs the detection result in the form of labels. The output of the speech recognition is the text converted from speech.
The first communication module 503 allows the smart glasses to connect to the internet, for example by connecting to a Wi-Fi access point via the Wi-Fi protocol and then accessing the internet, or by accessing the internet directly over a cellular network.
The online learning system based on smart glasses in this application example can be realized by a client program. The client program is responsible for coordinating the acquisition module 501, the analysis module 502, the first communication module 503 and the output module 504. After the client program runs, the procedure is as follows:
1. Select sensors and call the acquisition module 501 to collect the environmental state. One or more sensors can be selected here, for example taking a photo with the camera or collecting a segment of audio with the microphone.
2. Pass the collected environmental state to the analysis module 502 for analysis, obtain the semantized environmental state (semantic information) and construct keywords for content retrieval.
3. If the network is available, pass the constructed keywords to the server through the first communication module 503 and the second communication module 505.
4. After the server receives them, the retrieval module 506 retrieves matching content from the content library according to the keywords and returns it to the smart glasses.
5. The client program calls the content output component (the output module 504) to output the received content, for example playing audio in the earphones, or showing an image or the result of running a Web program on the display.
Each content unit of the content library on the server is a segment of audio, and the content is speech material for various scenarios, including single-speaker narration, multi-person conversation and so on. Each content unit is tagged with labels that embody the characteristics of an environmental state, and an inverted index is built over the content according to the associated labels, so the content can be retrieved by label. Table 1 gives an example of the content-library records:
Table 1  Content library
Number | Labels | Content unit address | Associated number | Switching criterion
0001 | two people, bar, a man and a woman chatting | file://a/b/1.mp3 | 0006 | none
0002 | art museum, Shenzhen | http://a.b.com/2.mp3 | 0005 | user speech received
0003 | small tree, lawn, flower | file://a/c/3.mp3 | 0004 |
... | ... | ... | ... | ...
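The rows of Table 1 could be kept as simple in-memory records, as in the sketch below (the field names are invented; the labels, addresses and numbers follow the table), so that a matched unit also exposes its associated unit and the criterion for switching to it.

```python
# Illustrative only: the rows of Table 1 as in-memory records.

catalogue = [
    {"number": "0001", "labels": ["two people", "bar", "man and woman chatting"],
     "address": "file://a/b/1.mp3", "associated": "0006", "criterion": None},
    {"number": "0002", "labels": ["art museum", "Shenzhen"],
     "address": "http://a.b.com/2.mp3", "associated": "0005",
     "criterion": "user speech received"},
    {"number": "0003", "labels": ["small tree", "lawn", "flower"],
     "address": "file://a/c/3.mp3", "associated": "0004", "criterion": None},
]

def find_by_label(keyword):
    return [row for row in catalogue if keyword in row["labels"]]

print(find_by_label("lawn")[0]["address"])   # file://a/c/3.mp3
```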
The user takes an image with the camera of the smart glasses, for example an image of several people having a meal at a dining table, and the image is then analyzed. Depending on the analysis technique used, semantic information of different levels can be output. A lower semantic level might be "tone = warm; content = people; outline = irregular"; a higher level might be "content = people; number of people = 2; object = table; object = cup"; a still higher level might be "a man and a woman are sitting side by side drinking cola, with a large tree, a lawn and a hedge in the background".
The smart glasses construct retrieval keywords from the current environmental state: for the key-value form, both the keys and the values are used as keywords; labels can be used as keywords directly; for a passage of text, word segmentation is performed first, the meaningful words are identified, and those words are selected to form the keywords. The keywords are then passed to the retrieval module 506 of the server, which retrieves suitable content from the content library.
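A sketch of the keyword construction just described (the stop-word list and the segmentation are crude placeholders): keys and values are taken from the key-value form, labels are used as they are, and the text paragraph is split into words and filtered.

```python
# Illustrative only: building retrieval keywords from the three expression forms.

STOP_WORDS = {"a", "and", "are", "the", "is", "in", "of", "to"}  # simplified list

def keywords_from_key_value(kv):
    words = []
    for key, value in kv.items():
        words.append(str(key))
        words.append(str(value))
    return words

def keywords_from_labels(labels):
    return list(labels)                  # labels can be used directly

def keywords_from_text(paragraph):
    tokens = paragraph.lower().replace(",", " ").split()   # crude segmentation
    return [t for t in tokens if t not in STOP_WORDS]       # keep the meaningful words

print(keywords_from_key_value({"object": "table", "tone": "warm"}))
print(keywords_from_labels(["dining", "drinking"]))
print(keywords_from_text("A man and a woman are drinking cola"))
```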
In the content library, a conversation about having a meal has been associated with keywords such as "table, dining, drinking, beverage, wine", so this audio content is matched successfully and returned to the smart glasses.
After receiving the audio content, the smart glasses play it in the earphones.
If, during playback, the user joins the dialogue by voice, the smart glasses analyze what the user said, obtain one or more of the following: text, tone of voice, emotion, and so on, and send it to the server; the server finds new content according to what it receives and sends it to the smart glasses to be played. In this way a kind of environment-aware human-machine dialogue is formed.
This online learning system combined with the environmental state is particularly suitable for spoken language and listening practice in foreign languages, and for the everyday knowledge learning of toddlers, children and students.
Application example 2: an assisted learning system combining motion state and position
Referring to Fig. 6, this system comprises a smart wristband, a smartphone and a server. The smart wristband comprises a first acquisition module 601, a first analysis module 602 and a first communication module 603. The smartphone comprises a second communication module 604, a second acquisition module 605, a second analysis module 606 and an output module 607. The server comprises a third communication module 608 and a retrieval module 609.
The smart wristband contains a motion sensor, the first acquisition module 601, which can detect the motion state of the wearer. After analysis by the first analysis module 602, keywords for retrieval are obtained, such as walking, running, riding a vehicle or climbing stairs. Through the first communication module 603 the wristband sends the keywords to the smartphone using a short-range communication protocol, for example Bluetooth. Depending on the processing power of the wristband, the first analysis module 602 can also be deployed on the smartphone or on the server.
The second acquisition module 605 of the smartphone comprises: the microphone that detects audio input, the touch screen that detects the interaction between the user and the client program and presents visual content, and the GPS or indoor-positioning component that detects the geographic position. The output module 607 of the smartphone comprises the earphones that output audio and the display that shows multimedia pages. The second analysis module 606 of the smartphone maintains in advance a mapping between geographic positions and place names, so it can process the raw position information and supplement it with semantic information; in this way the specific place where the device is located, such as "XX mall", can be determined from the longitude and latitude, and "XX mall" can be passed to the server as a retrieval keyword.
The client program in the smartphone combines the obtained motion information and position information and sends them to the server through the second communication module 604 and the third communication module 608, for example as the keyword "running in XX mall".
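As a small illustration (the place names, coordinates and rounding rule are invented), the position could be resolved to a place name through a pre-maintained mapping and then combined with the motion label into a single query such as "running in XX mall".

```python
# Illustrative only: combine a motion label and a resolved place name into one query.

PLACE_MAP = {                 # pre-maintained mapping from coordinates to place names
    (22.54, 114.05): "XX mall",
}

def resolve_place(lat, lon):
    """Toy lookup: round the coordinates to the mapping's precision."""
    return PLACE_MAP.get((round(lat, 2), round(lon, 2)), "unknown place")

motion_keyword = "running"                      # from the wristband's analysis module
place_keyword = resolve_place(22.5401, 114.0502)

query = f"{motion_keyword} in {place_keyword}"
print(query)                                    # running in XX mall
```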
After receiving the keywords, the server retrieves related content from the content library through the retrieval module 609, for example a segment of English dialogue discussing the sports goods in XX mall, or a multimedia page showing the sports goods being sold in XX mall.
The server sends the retrieved audio content to the smartphone, which plays it in the earphones (output module 607), and sends the above page to the smartphone, which shows it on the display (output module 607). The user can listen to the audio, watch the content shown on the display, and perform interactive operations such as placing an order to buy goods.
Multiple environmental states can be combined further. For example, the smartphone uses the microphone to detect the user's breathing: if the motion state is running, the run has lasted for a fairly long time and the breathing is heavy, this environmental state is converted into new keywords and sent to the server; the server then retrieves matching content again and feeds new content back to the smartphone to be played.
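A sketch of that further combination (the thresholds are invented for the illustration): if the user has been running for a prolonged period and the detected breathing is heavy, an additional keyword is generated and sent to the server.

```python
# Illustrative only: derive a new keyword from motion duration and breathing intensity.

def extra_keywords(motion, minutes_in_motion, breathing_level):
    keywords = []
    # the thresholds below are invented for the illustration
    if motion == "running" and minutes_in_motion > 20 and breathing_level > 0.8:
        keywords.append("heavy breathing after a long run")
    return keywords

print(extra_keywords("running", 35, 0.9))   # ['heavy breathing after a long run']
```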
It should be noted that, according to actual needs, the functional modules can be deployed flexibly between the intelligent terminal and the server.
When the capability of the intelligent terminal is weak, after collecting the environmental state the terminal may not analyze it but send the raw information to the server to be analyzed there; if the terminal is more capable, it can perform the in-depth analysis itself and then send the result to the server.
Alternatively, the content is deployed on the server and all the other modules on the intelligent terminal. According to the environmental state the user is in, including the history of environmental states, the terminal requests part of the teaching content from the server in advance (preferably over a Wi-Fi internet connection, to save cellular data); then, when the teaching service is actually needed, the server does not have to be accessed in real time and the teaching content already on the terminal is used.
Alternatively, all the modules on the server are deployed on the intelligent terminal (there is no server at all), and the terminal can perform all the functions offline, without connecting to the network.
The method and system for associating content according to an environmental state provided by the embodiments of the present application can make full use of the intelligent terminal's ability to detect the environmental state and make the pushed content relevant to the environmental state, so that it better matches the situation the user is in, the information is more likely to be accepted by the user, and more value is delivered to the user.
Those skilled in the art will understand that all or part of the steps of the various methods in the above embodiments can be completed by a program instructing the related hardware; the program can be stored in a computer-readable storage medium, which can include a read-only memory, a random access memory, a magnetic disk, an optical disc, and so on.
The above is a further detailed description of the present application in combination with specific embodiments, and it cannot be concluded that the specific implementation of the present application is limited to these descriptions. Those of ordinary skill in the art to which the present application belongs can also make some simple deductions or substitutions without departing from the concept of the present application.

Claims (10)

1. A method for associating content according to an environmental state, characterized by comprising:
acquiring the current environmental state;
analyzing the environmental state, generating semantic information expressing the characteristics of the environmental state, and constructing, from the semantic information, keywords for content retrieval; and
retrieving content according to the keywords.
2. The method of claim 1, characterized in that, in the step of analyzing the environmental state and generating the semantic information, the expression forms used include a key-value form, a label form and a text-paragraph form.
3. The method of claim 1, characterized by further comprising outputting the retrieved content.
4. The method of claim 3, characterized in that, while the current content is being output, the current environmental state is detected in real time and it is judged whether the environmental state satisfies a switching condition; if it does, new keywords are generated for retrieving and outputting content related to both the current content and the current environmental state.
5. The method of any one of claims 1 to 4, characterized in that the environmental state comprises one or more of geographic position, temperature, humidity, sound, image, video, orientation, motion state, human physiological parameters, place identifier, object identifier and user behavior characteristics; and the content comprises one or more of audio, video, pictures, text, program code and control data.
6. A system for associating content according to an environmental state, characterized by comprising:
an acquisition module, for acquiring the current environmental state;
an analysis module, for receiving the environmental state, analyzing it, generating semantic information expressing the characteristics of the environmental state, and constructing, from the semantic information, keywords for content retrieval; and
a retrieval module, for receiving the keywords and retrieving content according to them.
7. The system of claim 6, characterized in that, when the analysis module analyzes the environmental state and generates the semantic information, the expression forms it uses include a key-value form, a label form and a text-paragraph form.
8. The system of claim 6, characterized by further comprising an output module for outputting the content retrieved by the retrieval module.
9. The system of claim 8, characterized in that the acquisition module is further configured to detect the current environmental state in real time while the output module is outputting the current content; and the system further comprises a judging module configured to judge whether the environmental state satisfies a switching condition and, if it does, to generate new keywords for the retrieval module to retrieve content related to both the current content and the current environmental state, and to control the output module to output that content.
10. The system of any one of claims 6 to 9, characterized in that the environmental state comprises one or more of geographic position, temperature, humidity, sound, image, video, orientation, motion state, human physiological parameters, place identifier, object identifier and user behavior characteristics; and the content comprises one or more of audio, video, pictures, text, program code and control data.
CN201410037096.9A 2014-01-26 2014-01-26 Method and system for correlating contents according to environmental state Pending CN103729476A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410037096.9A CN103729476A (en) 2014-01-26 2014-01-26 Method and system for correlating contents according to environmental state

Publications (1)

Publication Number Publication Date
CN103729476A 2014-04-16

Family

ID=50453550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410037096.9A Pending CN103729476A (en) 2014-01-26 2014-01-26 Method and system for correlating contents according to environmental state

Country Status (1)

Country Link
CN (1) CN103729476A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103038765A (en) * 2010-07-01 2013-04-10 诺基亚公司 Method and apparatus for adapting a context model
CN102404680A (en) * 2010-09-09 2012-04-04 三星电子(中国)研发中心 Method for starting application based on location identification and handheld equipment thereby
CN102254043A (en) * 2011-08-17 2011-11-23 电子科技大学 Semantic mapping-based clothing image retrieving method
CN102622390A (en) * 2011-10-11 2012-08-01 北京掌汇天下科技有限公司 Application recommending method and application recommending server in mobile terminal
WO2013085320A1 (en) * 2011-12-06 2013-06-13 Wee Joon Sung Method for providing foreign language acquirement and studying service based on context recognition using smart device
CN103335644A (en) * 2013-05-31 2013-10-02 王玉娇 Voice broadcast method for street view map, and relevant apparatus
CN103412937A (en) * 2013-08-22 2013-11-27 成都数之联科技有限公司 Searching and shopping method based on handheld terminal

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268154A (en) * 2014-09-02 2015-01-07 百度在线网络技术(北京)有限公司 Recommended information providing method and device
CN106325113B (en) * 2015-06-26 2019-03-19 北京贝虎机器人技术有限公司 Robot controls engine and system
WO2016206644A1 (en) * 2015-06-26 2016-12-29 北京贝虎机器人技术有限公司 Robot control engine and system
CN106325113A (en) * 2015-06-26 2017-01-11 北京贝虎机器人技术有限公司 Robot control engine and system
CN106708836A (en) * 2015-08-17 2017-05-24 重庆物联利浪科技有限公司 Precise pushing platform and method based on Internet of Things
CN105224802A (en) * 2015-10-08 2016-01-06 广东欧珀移动通信有限公司 A kind of based reminding method and mobile terminal
CN108432209A (en) * 2015-11-30 2018-08-21 法拉第未来公司 Infotainment based on automobile navigation data
WO2017121277A1 (en) * 2016-01-13 2017-07-20 阿里巴巴集团控股有限公司 Method and device for retrieval based on wearable device
CN106097793A (en) * 2016-07-21 2016-11-09 北京光年无限科技有限公司 A kind of child teaching method and apparatus towards intelligent robot
CN106850846A (en) * 2017-03-10 2017-06-13 重庆智绘点途科技有限公司 A kind of telelearning system and method
CN106850846B (en) * 2017-03-10 2020-09-29 重庆智绘点途科技有限公司 Remote learning system and method
CN110832477A (en) * 2017-10-24 2020-02-21 谷歌有限责任公司 Sensor-based semantic object generation
CN109344238A (en) * 2018-09-18 2019-02-15 阿里巴巴集团控股有限公司 The benefit word method and apparatus of user's question sentence
CN111223014A (en) * 2018-11-26 2020-06-02 北京派润泽科技有限公司 Method and system for online generating subdivided scene teaching courses from large amount of subdivided teaching contents
CN111223014B (en) * 2018-11-26 2023-11-07 北京派润泽科技有限公司 Method and system for online generation of subdivision scene teaching courses from a large number of subdivision teaching contents

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20140416)