CN105183162A - Information processing method and electronic device - Google Patents

Information processing method and electronic device

Info

Publication number
CN105183162A
CN105183162A (application CN201510556707.5A)
Authority
CN
China
Prior art keywords
scene
information
electronic equipment
full content
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510556707.5A
Other languages
Chinese (zh)
Other versions
CN105183162B (en)
Inventor
张晓海
孙国勇
褚福玺
高飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201510556707.5A priority Critical patent/CN105183162B/en
Publication of CN105183162A publication Critical patent/CN105183162A/en
Application granted granted Critical
Publication of CN105183162B publication Critical patent/CN105183162B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention discloses an information processing method that addresses the technical problem that the reading experience a conventional electronic device can provide is limited. The method comprises: acquiring, through a sensor, the current content a user is paying attention to, the current content being part of a complete content; analyzing the current content to determine scene information of the scene the current content describes; searching, based on the scene information, for at least one piece of data matching the scene; building a scene output model based on the scene information; and outputting the at least one piece of data through an output device based on the scene output model, so that the user perceives the at least one piece of data and thereby has an enhanced experience of the current content. A corresponding electronic device is also disclosed.

Description

Information processing method and electronic device
Technical field
The present invention relates to the field of computer technology, and in particular to an information processing method and an electronic device.
Background technology
In today's society of rapid scientific and technological development, electronic devices have made life more comfortable: people can use them to watch films, play games, listen to music, and so on. Traditional reading, however, remains an indispensable part of daily life.
With existing reading methods, a person typically views the characters or images in a book or on an electronic device directly with the eyes. For example, when reading on an electronic device, a user can only look at the characters or images shown on the screen. Such reading is comparatively dull and tiring for the eyes, and because the information an electronic device outputs is limited, the reading experience it can provide is rather one-dimensional.
Summary of the invention
Embodiments of the present invention provide an information processing method to address the technical problem that the reading experience an electronic device can provide is limited.
In a first aspect, an information processing method is provided, comprising:
acquiring, through a sensor, the current content that a user is paying attention to, wherein the current content is part of a complete content;
analyzing the current content to determine scene information of the scene that the current content describes;
searching, based on the scene information, for at least one piece of data matching the scene;
building a scene output model based on the scene information;
outputting the at least one piece of data through an output device based on the scene output model, so that the user perceives the at least one piece of data and thereby has an enhanced experience of the current content.
Optionally, searching for at least one piece of data matching the scene based on the scene information comprises:
obtaining, based on the scene information, the at least one piece of data from a first database covering all scenes of the complete content;
and building the scene output model based on the scene information comprises:
obtaining, based on the scene information, the scene output model from the first database;
wherein the first database covering all scenes of the complete content is obtained by analyzing the complete content in real time when the electronic device loads the complete content.
Optionally, obtaining the first database covering all scenes of the complete content by analyzing the complete content in real time when the electronic device loads the complete content comprises:
loading a cloud service, wherein the cloud service connects to a cloud server that hosts an intelligent scene recognition engine;
inputting, through the cloud service, the complete content loaded by the electronic device into the intelligent scene recognition engine of the cloud server, so that the engine analyzes the complete content in real time to produce the first database covering all scenes of the complete content;
obtaining, through the cloud service, the first database fed back by the cloud server.
Optionally, analyzing the current content to determine the scene information of the scene that the current content describes comprises:
extracting from the current content keywords that satisfy a predetermined condition as first-type information in the scene information, wherein the predetermined condition includes identification information of at least one output device of the electronic device; and
extracting from the current content the words that describe those keywords as second-type information in the scene information.
Optionally, searching for at least one piece of data matching the scene based on the scene information comprises:
searching locally and/or on a network for the at least one piece of data according to the first-type information in the scene information.
Optionally, building the scene output model based on the scene information comprises:
building the scene output model according to the first-type information and the second-type information in the scene information.
In a second aspect, an electronic device is provided, comprising:
a sensor for acquiring the current content that a user is paying attention to, wherein the current content is part of a complete content;
a processor for analyzing the current content to determine scene information of the scene that the current content describes; searching, based on the scene information, for at least one piece of data matching the scene; building a scene output model based on the scene information; and outputting the at least one piece of data through an output device based on the scene output model, so that the user perceives the at least one piece of data and thereby has an enhanced experience of the current content.
Optionally, the processor is configured to:
obtain, based on the scene information, the at least one piece of data from a first database covering all scenes of the complete content;
obtain, based on the scene information, the scene output model from the first database;
wherein the first database covering all scenes of the complete content is obtained by analyzing the complete content in real time when the electronic device loads the complete content.
Optionally, the processor is configured to:
load a cloud service, wherein the cloud service connects to a cloud server that hosts an intelligent scene recognition engine;
input, through the cloud service, the complete content loaded by the electronic device into the intelligent scene recognition engine of the cloud server, so that the engine analyzes the complete content in real time to produce the first database covering all scenes of the complete content;
obtain, through the cloud service, the first database fed back by the cloud server.
Optionally, the processor is configured to:
extract from the current content keywords that satisfy a predetermined condition as first-type information in the scene information, wherein the predetermined condition includes identification information of at least one output device of the electronic device; and
extract from the current content the words that describe those keywords as second-type information in the scene information.
Optionally, the processor is configured to:
search locally and/or on a network for the at least one piece of data according to the first-type information in the scene information.
Optionally, the processor is configured to:
build the scene output model according to the first-type information and the second-type information in the scene information.
In the embodiments of the present invention, when a user views text or images through an electronic device, the device can analyze the content the user is currently paying attention to, obtain the data corresponding to the scene that content describes, build a scene output model from the scene information corresponding to the scene, and output the data based on that model. In this way, the user can genuinely perceive data consistent with the scene the content being read describes, and the electronic device can output richer information. This enriches the reading experience the electronic device can provide and raises its level of intelligence. For the user, it offers an immersive reading experience and makes reading more engaging.
Brief description of the drawings
Fig. 1 is a flowchart of the information processing method in an embodiment of the present invention;
Fig. 2 is a structural diagram of the electronic device in an embodiment of the present invention;
Fig. 3 is a structural block diagram of the electronic device in an embodiment of the present invention.
Detailed description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
The electronic device in the embodiments of the present invention may be a wearable device, such as smart glasses or a smart helmet, or a device such as a tablet computer (PAD) or a mobile phone; the present invention is not limited in this respect.
In addition, the term "and/or" herein merely describes an association between related objects and indicates that three relations may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" herein, unless otherwise specified, generally indicates an "or" relation between the associated objects.
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, an embodiment of the present invention provides an information processing method that can be applied to an electronic device. The flow of the method is described below.
Step 101: acquire, through a sensor, the current content that the user is paying attention to, wherein the current content is part of a complete content;
Step 102: analyze the current content to determine scene information of the scene that the current content describes;
Step 103: search, based on the scene information, for at least one piece of data matching the scene;
Step 104: build a scene output model based on the scene information;
Step 105: output the at least one piece of data through an output device based on the scene output model, so that the user perceives the at least one piece of data and has an enhanced experience of the current content.
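The five steps above can be sketched end to end as follows. This is a minimal illustrative model, not the patented implementation: the stand-in sensor, keyword-based scene analysis, dictionary lookup, and trivial output model are all assumptions made for the sake of a runnable example.

```python
# Hypothetical sketch of steps 101-105; all names are illustrative.

def acquire_current_content(full_content, gaze_paragraph):
    """Step 101: the sensor reports which paragraph the user is watching."""
    return full_content[gaze_paragraph]

def analyze_scene(current_content, known_keywords):
    """Step 102: scene information = scene keywords found in the content."""
    return [kw for kw in known_keywords if kw in current_content]

def search_data(scene_info, local_db):
    """Step 103: look up one piece of data per scene keyword."""
    return [local_db[kw] for kw in scene_info if kw in local_db]

def build_output_model(scene_info):
    """Step 104: a trivial 'model' -- output everything at full volume."""
    return {kw: {"volume": 1.0} for kw in scene_info}

def output(data, model):
    """Step 105: pretend to drive the output devices."""
    return [f"play {d} at volume {model[k]['volume']}"
            for k, d in zip(model, data)]

full = ["It was a calm day.", "Hoofbeats came from afar in the rain."]
db = {"hoofbeats": "hoofbeats.wav", "rain": "rain.wav"}
content = acquire_current_content(full, 1)
scene = analyze_scene(content.lower(), ["hoofbeats", "rain"])
data = search_data(scene, db)
model = build_output_model(scene)
print(output(data, model))
```

Each stage is expanded with more realistic detail in the embodiments that follow.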
The complete content is the text or image content that the electronic device can obtain — everything the user intends to read. The present invention does not limit the form of the text or images: they may be electronic text or images, or the text or images in a paper book or periodical; any text or image content the electronic device can obtain can serve as the complete content in the embodiments of the present invention.
For example, the complete content may be the content of an e-book stored on the electronic device, in which case the device obtains it directly from local storage. It may be the content of an e-book on a network, in which case the device downloads it from the network. It may be a set of pictures stored on another electronic device, in which case the device obtains it by interacting with that other device. Or it may be the content of a paper book, in which case the device can obtain it through a camera, and so on.
The current content is the part of the complete content that the user is currently paying attention to. The present invention does not limit which part of the complete content the current content covers. For example, if the complete content is all of the text in an e-book, the current content may be the text on the page the user is currently reading, or the text in the paragraph the user is currently reading.
Optionally, in another embodiment of the present invention, acquiring the current content that the user is paying attention to through the sensor comprises:
obtaining, through the sensor and based on eye-tracking technology, position information of the location the user's eyeballs are directed at;
obtaining, according to the position information, the current content that the user is paying attention to.
In the embodiments of the present invention, any device that can obtain position information for the location the user's eyeballs are directed at can serve as the sensor; the present invention is not limited in this respect. For example, the sensor may be an infrared device, which actively emits infrared or other light beams to extract features from the iris of the user's eyeball and thereby obtain the position the eyeball is directed at. Or the sensor may be an image capture device, which obtains that position by capturing features of the user's eyeball and its surroundings.
For example, suppose the complete content consists of three paragraphs numbered 1, 2, and 3, and the infrared device determines that the position the user is paying attention to is a sentence in paragraph 2. Then that sentence may be taken as the current content, or the whole of paragraph 2, or paragraphs 1 and 2 together, and so on. In this way, the content the user is paying attention to can be obtained more accurately, and the electronic device behaves more intelligently.
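The step from a gaze position to the paragraph being read can be illustrated with a small sketch. The vertical-band layout and the function below are assumptions for illustration only; the patent does not specify how position maps to content.

```python
# Illustrative sketch: mapping a gaze y-coordinate reported by an
# eye-tracking sensor to the paragraph being read, assuming each
# paragraph occupies a known vertical band on the screen.

def paragraph_at(gaze_y, paragraph_bands):
    """Return the index of the paragraph whose band contains gaze_y."""
    for idx, (top, bottom) in enumerate(paragraph_bands):
        if top <= gaze_y < bottom:
            return idx
    return None  # gaze is off-screen or between pages

# Three paragraphs stacked vertically: y in [0,100), [100,250), [250,400)
bands = [(0, 100), (100, 250), (250, 400)]
print(paragraph_at(160, bands))  # gaze rests inside paragraph 2 (index 1)
```

Once the paragraph index is known, the device can take that paragraph (or its neighbors as well) as the current content.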
In general, text can describe a scene. The electronic device can therefore analyze the current content to obtain the scene information corresponding to the scene that the current content describes. For example, if the current content includes the text "a light rain is falling from the sky", the device can determine through analysis that the scene information includes "light rain".
Optionally, in another embodiment of the present invention, analyzing the current content to determine the scene information of the scene that the current content describes comprises:
extracting from the current content keywords that satisfy a predetermined condition as first-type information in the scene information, wherein the predetermined condition includes identification information of at least one output device of the electronic device; and
extracting from the current content the words that describe those keywords as second-type information in the scene information.
In the embodiments of the present invention, the predetermined condition can be determined according to the output devices the electronic device includes. For example, if the device includes a temperature output device such as a heating plate, then words describing temperature, such as "hot" or "cold", can be regarded as satisfying the predetermined condition. Or if the device includes an audio output device, then words corresponding to sounds, such as "hoofbeats", "the patter of rain", or "the sound of wind", can be regarded as satisfying it. In other words, a keyword satisfies the predetermined condition if it corresponds to an output device of the electronic device; naturally, different types of output device correspond to different keywords.
In the embodiments of the present invention, the keywords in the current content that satisfy the predetermined condition are called first-type information, and the words describing the first-type information are called second-type information. For example, if the current content is the sentence "the weather is getting hotter and hotter", the corresponding first-type information may include "hot" and the corresponding second-type information may include "more and more", which describes "hot". Or if the current content is "hoofbeats come from afar", the first-type information may include "hoofbeats" and the second-type information may include "from afar", which describes "hoofbeats", and so on.
By obtaining these two types of information, the electronic device can build a more lifelike scene output model, improving its information processing capability.
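The two-type extraction above can be sketched as follows. The keyword table and the "words just before the keyword" heuristic for descriptors are illustrative assumptions; a real implementation would use proper linguistic analysis.

```python
# Sketch: extract first-type information (keywords tied to an available
# output device) and second-type information (words describing each
# keyword) from tokenized current content. All tables are assumptions.

DEVICE_KEYWORDS = {
    "audio": {"hoofbeats", "thunder", "birdsong"},
    "temperature": {"hot", "cold"},
}

def extract_scene_info(words, available_devices):
    """Return (first_type, second_type) for the given device set."""
    usable = set().union(*(DEVICE_KEYWORDS[d] for d in available_devices))
    first, second = [], {}
    for i, w in enumerate(words):
        if w in usable:
            first.append(w)
            second[w] = words[max(0, i - 2):i]  # words just before keyword
    return first, second

words = "the weather is more and more hot".split()
first, second = extract_scene_info(words, ["temperature"])
print(first, second)  # ['hot'] {'hot': ['and', 'more']}
```

Note how the predetermined condition is encoded as the set of keywords reachable through the device's available output devices, matching the claim's wording.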
The at least one piece of data may be any data that matches the scene corresponding to the current content and that an output device of the electronic device can output; the embodiments of the present invention do not limit its type. For example, it may include audio data, which the device can output through an audio output device such as earphones; temperature data, which the device can output through a temperature output device; or odor data, which the device can output through an odor output device, and so on.
Optionally, in another embodiment of the present invention, searching for at least one piece of data matching the scene based on the scene information comprises:
searching locally and/or on a network for the at least one piece of data according to the first-type information in the scene information.
In practice, the electronic device may store multiple pieces of data locally; after determining the scene information, it can then search for the at least one piece of data locally. Or the device may store no data locally, in which case it can search for the at least one piece of data on a network. Or the device may both hold local data and search the network, in which case it finds the data matching the determined scene locally and on the network.
In this way, the electronic device can find the data corresponding to the scene information through multiple channels, improving its information processing capability.
In this embodiment, the electronic device can determine, according to the first-type information, which data it needs to search for. The number of items of first-type information and the number of pieces of data need not be equal. For example, if the first-type information includes "thunder", "birds singing among fragrant flowers", and "hot", the data searched for may include the audio data for "thunder", the audio data for "birdsong", the odor data for "flower fragrance", and the temperature data for "heating".
Because the device searches directly according to the first-type information, which carries less information than the full scene information, its workload is reduced and search efficiency improves.
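The local-and/or-network lookup can be sketched as below. The network is represented by a second dictionary and the prefer-local fallback order is an assumption; the patent allows either or both sources.

```python
# Sketch: collect data for each first-type keyword, preferring a local
# store and falling back to a (simulated) network store. Illustrative.

def find_data(first_type_info, local_store, network_store):
    """Map each keyword to (source, data) from whichever store has it."""
    found = {}
    for kw in first_type_info:
        if kw in local_store:
            found[kw] = ("local", local_store[kw])
        elif kw in network_store:
            found[kw] = ("network", network_store[kw])
    return found

local = {"thunder": "thunder.wav"}
network = {"thunder": "thunder_hd.wav", "flower fragrance": "rose.scent"}
print(find_data(["thunder", "flower fragrance"], local, network))
# {'thunder': ('local', 'thunder.wav'),
#  'flower fragrance': ('network', 'rose.scent')}
```

The keyword count and result count can differ, as in the text: a keyword with no matching data in either store is simply omitted.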
The scene output model in the embodiments of the present invention can be regarded as a control strategy: it controls how the output devices of the electronic device output the at least one piece of data. For example, the scene output model can control the output manner of the data, such as the output effect and the output order.
Optionally, in another embodiment of the present invention, building the scene output model based on the scene information comprises:
building the scene output model according to the first-type information and the second-type information in the scene information.
For example, suppose the current content includes "hoofbeats come from afar as a fast horse gallops nearer". The corresponding first-type information may include "hoofbeats", and the second-type information may include "from afar" and "gallops nearer". A scene output model built from the first-type and second-type information can then control the audio output device of the electronic device to play the audio corresponding to "hoofbeats" with rising volume and increasing frequency. In this way, the user experiences the scene described by the current content more realistically, which enhances the reading experience and also raises the device's level of intelligence.
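The "hoofbeats" example can be sketched as a model-building function: the second-type information selects a volume envelope for the first-type keyword. The descriptor-to-ramp mapping below is purely illustrative.

```python
# Sketch: build a scene output model (a control strategy) from first-type
# and second-type information. Mapping table and ramp shapes are assumed.

def build_model(first_type, second_type):
    """Return per-keyword playback instructions."""
    model = {}
    for kw in first_type:
        descriptors = second_type.get(kw, [])
        if "approaching" in descriptors:      # e.g. "gallops nearer"
            ramp = [0.2, 0.5, 0.8, 1.0]       # volume rises over time
        elif "receding" in descriptors:
            ramp = [1.0, 0.8, 0.5, 0.2]       # volume falls over time
        else:
            ramp = [0.7] * 4                  # steady volume
        model[kw] = {"volume_ramp": ramp}
    return model

m = build_model(["hoofbeats"], {"hoofbeats": ["from afar", "approaching"]})
print(m["hoofbeats"]["volume_ramp"])  # [0.2, 0.5, 0.8, 1.0]
```

An audio output device driven by this model would play the hoofbeats audio at each ramp level in turn, giving the impression of an approaching rider.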
Optionally, in another embodiment of the present invention:
searching for at least one piece of data matching the scene based on the scene information comprises:
obtaining, based on the scene information, the at least one piece of data from a first database covering all scenes of the complete content;
and building the scene output model based on the scene information comprises:
obtaining, based on the scene information, the scene output model from the first database;
wherein the first database covering all scenes of the complete content is obtained by analyzing the complete content in real time when the electronic device loads the complete content.
The first database may contain, for example, all of the scene information corresponding to the complete content, the data corresponding to each item of scene information, and the scene output model corresponding to each item of scene information.
In the embodiments of the present invention, the user can import the complete content into the electronic device before reading, for example by downloading an e-book onto the device. The device can pre-load and analyze the e-book's complete content, obtain the scene information for every scene the content includes, find the data matching each scene, and thereby obtain a database covering all scenes — the first database of the present invention. When the user reads, the data corresponding to the scene information in the content the user is paying attention to, and the corresponding scene output model, can be obtained directly from this existing first database. The device thus need not search for scene data or build a scene output model on the fly, which effectively reduces its response time, shortens the user's wait, and speeds up its information processing.
For example, suppose an e-book is pre-loaded on the electronic device and, after loading completes, the device obtains the e-book's first database, which contains all of the scene information corresponding to the complete content, the data for each item of scene information, and the scene output model for each item. If the content the user is currently reading is, say, "the rain gradually eases", the device can analyze this content, determine the scene information it describes, and look up in the first database the matching data and the scene output model for that scene information: the data may include the audio for "the patter of rain", and the scene output model may control the audio output device to play that audio with gradually decreasing volume. The device can then output the data directly.
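The pre-built first database can be sketched as a simple mapping from scene keyword to data plus model, built once at load time. The keyword library and the "eases → fade out" rule are illustrative assumptions.

```python
# Sketch: build the "first database" when the complete content is loaded.
# For every scene found it stores the matching data and a scene output
# model, so that reads become constant-time lookups. All rules assumed.

SOUND_LIBRARY = {"rain": "rain.wav", "hoofbeats": "hoofbeats.wav"}

def build_first_database(full_content):
    db = {}
    for paragraph in full_content:
        text = paragraph.lower()
        for kw in SOUND_LIBRARY:
            if kw in text:
                db[kw] = {
                    "data": SOUND_LIBRARY[kw],
                    "model": {"fade": "out" if "eases" in text else "none"},
                }
    return db

book = ["Hoofbeats thundered past.", "The rain gradually eases."]
first_db = build_first_database(book)
print(first_db["rain"])  # {'data': 'rain.wav', 'model': {'fade': 'out'}}
```

At reading time the device only analyzes the current content for its scene keywords and indexes into `first_db`, instead of searching and model-building on the fly.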
Optionally, in another embodiment of the present invention, obtaining the first database covering all scenes of the complete content by analyzing the complete content in real time when the electronic device loads the complete content comprises:
loading a cloud service, wherein the cloud service connects to a cloud server that hosts an intelligent scene recognition engine;
inputting, through the cloud service, the complete content loaded by the electronic device into the intelligent scene recognition engine of the cloud server, so that the engine analyzes the complete content in real time to produce the first database covering all scenes of the complete content;
obtaining, through the cloud service, the first database fed back by the cloud server.
In the embodiments of the present invention, the electronic device can connect to a cloud server by loading a cloud service and then input the loaded complete content to the cloud server. The cloud server's intelligent scene recognition engine analyzes the complete content and produces the first database corresponding to all scenes of the complete content, which the device finally obtains from the server. In this way, the device need not analyze the complete content locally, so it consumes no local memory for the task and effectively saves energy, and its information processing capability is stronger.
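The cloud round-trip above can be sketched as follows. The transport is faked with plain function calls and the "recognition engine" is naive keyword spotting; both are stand-ins for whatever protocol and engine a real deployment would use.

```python
# Sketch of the cloud round-trip: the device uploads the complete content,
# the server's engine builds the first database, the device receives it.

class CloudServer:
    """Stand-in for the cloud server hosting the scene recognition engine."""
    KEYWORDS = {"rain", "thunder"}

    def analyze(self, full_content):
        # "Intelligent scene recognition": naive keyword spotting here.
        return {kw: {"data": f"{kw}.wav"}
                for text in full_content
                for kw in self.KEYWORDS if kw in text.lower()}

class CloudService:
    """Device-side cloud service: connects, uploads, receives feedback."""
    def __init__(self, server):
        self.server = server

    def fetch_first_database(self, full_content):
        return self.server.analyze(full_content)

device_side = CloudService(CloudServer())
db = device_side.fetch_first_database(["Thunder rolled.", "Rain fell."])
print(sorted(db))  # ['rain', 'thunder']
```

Offloading the analysis this way keeps the device's memory free, as the text notes, at the cost of a network dependency at load time.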
In the embodiments of the present invention, the electronic device can output the at least one piece of data based on the scene output model, so that the user can perceive the at least one piece of data. For example, if the data includes the audio for "hoofbeats", the device can, based on the scene output model, play that audio through an audio output device such as earphones with gradually increasing volume.
Referring to Fig. 2, based on the same inventive concept and the embodiments above, an embodiment of the present invention provides an electronic device that may comprise a sensor 201, a processor 202, and an output device 203.
In this embodiment:
the sensor 201 is configured to acquire the current content that the user is paying attention to, wherein the current content is part of a complete content;
the processor 202 is configured to analyze the current content to determine scene information of the scene that the current content describes; search, based on the scene information, for at least one piece of data matching the scene; build a scene output model based on the scene information; and output the at least one piece of data through the output device 203 based on the scene output model, so that the user perceives the at least one piece of data and has an enhanced experience of the current content.
Optionally, in another embodiment of the present invention, the processor 202 is configured to:
obtain, based on the scene information, the at least one piece of data from a first database covering all scenes of the complete content;
obtain, based on the scene information, the scene output model from the first database;
wherein the first database covering all scenes of the complete content is obtained by analyzing the complete content in real time when the electronic device loads the complete content.
Optionally, in another embodiment of the present invention, the processor 202 is configured to:
load a cloud service, wherein the cloud service connects to a cloud server that hosts an intelligent scene recognition engine;
input, through the cloud service, the complete content loaded by the electronic device into the intelligent scene recognition engine of the cloud server, so that the engine analyzes the complete content in real time to produce the first database covering all scenes of the complete content;
obtain, through the cloud service, the first database fed back by the cloud server.
Optionally, in another embodiment of the present invention, the processor 202 is configured to:
extract from the current content keywords that satisfy a predetermined condition as first-type information in the scene information, wherein the predetermined condition includes identification information of at least one output device 203 of the electronic device; and
extract from the current content the words that describe those keywords as second-type information in the scene information.
Optionally, in another embodiment of the present invention, the processor 202 is configured to:
search locally and/or on a network for the at least one piece of data according to the first-type information in the scene information.
Optionally, in another embodiment of the present invention, the processor 202 is configured to:
build the scene output model according to the first-type information and the second-type information in the scene information.
Referring to Fig. 3, based on the same inventive concept and the embodiments described above, an embodiment of the present invention provides another electronic equipment, which can comprise an acquisition module 301, a determination module 302, a search module 303, a build module 304, and an output module 305.
The acquisition module 301 is configured to obtain, through a sensor, the current content that a user pays attention to, where the current content belongs to a part of full content;
the determination module 302 is configured to analyze the current content and determine scene information of a scene described by the current content;
the search module 303 is configured to search for at least one piece of data matching the scene based on the scene information;
the build module 304 is configured to build a scene output model based on the scene information; and
the output module 305 is configured to output the at least one piece of data through an output unit based on the scene output model, so that the user perceives the at least one piece of data, thereby enhancing the user's experience of the current content.
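The cooperation of the five modules can be pictured with a minimal sketch. This is not the patented implementation — every internal here (the sensor reading format, the keyword set, the file names, the model parameters) is a placeholder invented for illustration.

```python
# Minimal sketch of how modules 301–305 could cooperate; all internals are
# placeholders, not the patented implementation.
class ScenePipeline:
    def acquire(self, sensor_reading):           # acquisition module 301
        return sensor_reading["focused_text"]

    def determine(self, current_content):        # determination module 302
        return {"keywords": [w for w in current_content.split() if w in {"rain", "thunder"}]}

    def search(self, scene_info):                # search module 303
        return [f"{kw}-sound.wav" for kw in scene_info["keywords"]]

    def build(self, scene_info):                 # build module 304
        return {"volume": 0.5, "loop": bool(scene_info["keywords"])}

    def output(self, data, model):               # output module 305
        # Pair each found piece of data with parameters from the scene output model.
        return [(item, model["volume"]) for item in data]

pipe = ScenePipeline()
content = pipe.acquire({"focused_text": "rain on the window"})
info = pipe.determine(content)
plan = pipe.output(pipe.search(info), pipe.build(info))
# plan → [("rain-sound.wav", 0.5)]
```

The point of the sketch is the data flow — sensed content in, scene information derived, matching data found, an output model built, then data rendered through that model — matching the module responsibilities listed above.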
Optionally, in an alternative embodiment of the invention,
the search module 303 is configured to:
obtain, based on the scene information, the at least one piece of data from a first database of all scenes corresponding to the full content; and
the build module 304 is configured to:
obtain, based on the scene information, the scene output model from the first database;
where the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when the electronic equipment loads the full content.
Optionally, in an alternative embodiment of the invention, the electronic equipment further comprises:
a loading module, configured to load a cloud service, where the cloud service is used to connect to a cloud server, and the cloud server has an intelligent scene identification engine;
an input module, configured to input, based on the cloud service, the full content loaded by the electronic equipment into the intelligent scene identification engine of the cloud server, so that the intelligent scene identification engine analyzes the full content in real time to obtain the first database of all scenes of the full content; and
a receiving module, configured to obtain, based on the cloud service, the first database fed back by the cloud server.
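The three-module interaction with the cloud server — load the service, send the full content for real-time analysis, receive the resulting first database — can be sketched as below. This is a hypothetical stand-in, not the patent's engine: the "database" format (one entry per paragraph that names a scene) and the trigger word are assumptions made for the example.

```python
# Hypothetical sketch of the loading/input/receiving modules talking to a cloud
# server. MockCloudServer stands in for the intelligent scene identification
# engine; its per-paragraph "first database" format is an assumption.
class MockCloudServer:
    """Stand-in for a cloud server with an intelligent scene identification engine."""
    def analyze(self, full_content: str):
        # Build a "first database": one entry per paragraph that names a scene.
        db = {}
        for idx, paragraph in enumerate(full_content.split("\n")):
            if "storm" in paragraph:
                db[idx] = {"scene": "storm", "data": ["thunder.wav"], "model": {"volume": 0.8}}
        return db

def build_first_database(full_content: str):
    server = MockCloudServer()               # loading module: connect via the cloud service
    database = server.analyze(full_content)  # input module: send the full content for analysis
    return database                          # receiving module: obtain the fed-back database

db = build_first_database("A calm morning.\nThen the storm broke.")
# db → {1: {"scene": "storm", "data": ["thunder.wav"], "model": {"volume": 0.8}}}
```

Precomputing the database when the full content is loaded is what lets the later per-scene lookups (search module 303 and build module 304 above) be simple keyed reads rather than fresh analyses.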
Optionally, in an alternative embodiment of the invention, the determination module 302 is configured to:
extract, from the current content, keywords meeting a predetermined condition as the first type of information in the scene information, where the predetermined condition includes identification information of at least one output unit of the electronic equipment; and
extract, from the current content, descriptive information for the keywords as the second type of information in the scene information.
Optionally, in an alternative embodiment of the invention, the search module 303 is configured to:
search locally and/or in a network for the at least one piece of data according to the first type of information in the scene information.
Optionally, in an alternative embodiment of the invention, the build module 304 is configured to:
build the scene output model according to the first type of information and the second type of information in the scene information.
In the embodiments of the present invention, when a user views text or images through an electronic equipment, the electronic equipment can analyze the content the user is currently paying attention to and obtain the data corresponding to the scene described by that content. The electronic equipment can build a scene output model according to the scene information corresponding to the scene, and output the data based on the scene output model. In this way, the user can genuinely perceive data consistent with the scene described by the content being read, and the electronic equipment can output more information. This enriches the reading methods the electronic equipment can provide and improves its degree of intelligence. For the user, an immersive reading experience is obtained, which adds interest to reading.
Those skilled in the art will clearly understand that, for convenience and brevity of description, only the division into the functional modules above is illustrated. In practical applications, the functions above can be allocated to different functional modules as needed; that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, devices, and units described above, reference can be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed equipment and methods can be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the modules or units is only a logical functional division, and there can be other division manners in actual implementation; multiple units or assemblies can be combined or integrated into another system, or some features can be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed can be through some interfaces, and the indirect couplings or communication connections between devices or units can be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they can be located in one place or distributed over multiple network elements. Some or all of the units can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in each embodiment of the present application can be integrated into one processing unit, each unit can exist alone physically, or two or more units can be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes various media that can store program code, such as a USB flash disk, a portable hard drive, a ROM, a RAM, a magnetic disk, or an optical disc.
Specifically, the computer program instructions corresponding to the information processing method in the embodiments of the present application can be stored on a storage medium such as an optical disc, a hard disk, or a USB flash disk. When the computer program instructions in the storage medium corresponding to the information processing method are read or executed by an electronic equipment, the following steps are included:
obtaining, through a sensor, current content that a user pays attention to, where the current content belongs to a part of full content;
analyzing the current content to determine scene information of a scene described by the current content;
searching for at least one piece of data matching the scene based on the scene information;
building a scene output model based on the scene information; and
outputting the at least one piece of data through an output unit based on the scene output model, so that the user perceives the at least one piece of data, thereby enhancing the user's experience of the current content.
Optionally, when the computer instructions stored in the storage medium and corresponding to the step of searching for at least one piece of data matching the scene based on the scene information are executed, the following is included:
obtaining, based on the scene information, the at least one piece of data from a first database of all scenes corresponding to the full content.
When the computer instructions stored in the storage medium and corresponding to the step of building a scene output model based on the scene information are executed, the following is included:
obtaining, based on the scene information, the scene output model from the first database;
where the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when the electronic equipment loads the full content.
Optionally, when the computer instructions stored in the storage medium and corresponding to the step of obtaining the first database of all scenes of the full content by analyzing the full content in real time when the electronic equipment loads the full content are executed, the following is included:
loading a cloud service, where the cloud service is used to connect to a cloud server, and the cloud server has an intelligent scene identification engine;
inputting, based on the cloud service, the full content loaded by the electronic equipment into the intelligent scene identification engine of the cloud server, so that the intelligent scene identification engine analyzes the full content in real time to obtain the first database of all scenes of the full content; and
obtaining, based on the cloud service, the first database fed back by the cloud server.
Optionally, when the computer instructions stored in the storage medium and corresponding to the step of analyzing the current content to determine the scene information of the scene described by the current content are executed, the following is included:
extracting, from the current content, keywords meeting a predetermined condition as the first type of information in the scene information, where the predetermined condition includes identification information of at least one output unit of the electronic equipment; and
extracting, from the current content, descriptive information for the keywords as the second type of information in the scene information.
Optionally, when the computer instructions stored in the storage medium and corresponding to the step of searching for at least one piece of data matching the scene based on the scene information are executed, the following is included:
searching locally and/or in a network for the at least one piece of data according to the first type of information in the scene information.
Optionally, when the computer instructions stored in the storage medium and corresponding to the step of building a scene output model based on the scene information are executed, the following is included:
building the scene output model according to the first type of information and the second type of information in the scene information.
The embodiments above are intended only to describe the technical solutions of the present application in detail. The description of the embodiments above is only intended to help understand the method of the present invention and its core idea, and should not be construed as limiting the present invention. Changes or replacements readily conceivable by those skilled in the art within the technical scope disclosed by the present invention shall all fall within the protection scope of the present invention.

Claims (12)

1. An information processing method, comprising:
obtaining, through a sensor, current content that a user pays attention to, wherein the current content belongs to a part of full content;
analyzing the current content to determine scene information of a scene described by the current content;
searching for at least one piece of data matching the scene based on the scene information;
building a scene output model based on the scene information; and
outputting the at least one piece of data through an output unit based on the scene output model, so that the user perceives the at least one piece of data, thereby enhancing the user's experience of the current content.
2. The method of claim 1, wherein
searching for at least one piece of data matching the scene based on the scene information comprises:
obtaining, based on the scene information, the at least one piece of data from a first database of all scenes corresponding to the full content; and
building a scene output model based on the scene information comprises:
obtaining, based on the scene information, the scene output model from the first database;
wherein the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when the electronic equipment loads the full content.
3. The method of claim 2, wherein obtaining the first database of all scenes of the full content by analyzing the full content in real time when the electronic equipment loads the full content comprises:
loading a cloud service, wherein the cloud service is used to connect to a cloud server, and the cloud server has an intelligent scene identification engine;
inputting, based on the cloud service, the full content loaded by the electronic equipment into the intelligent scene identification engine of the cloud server, so that the intelligent scene identification engine analyzes the full content in real time to obtain the first database of all scenes of the full content; and
obtaining, based on the cloud service, the first database fed back by the cloud server.
4. The method of claim 1, wherein analyzing the current content to determine the scene information of the scene described by the current content comprises:
extracting, from the current content, keywords meeting a predetermined condition as a first type of information in the scene information, wherein the predetermined condition comprises identification information of at least one output unit of the electronic equipment; and
extracting, from the current content, descriptive information for the keywords as a second type of information in the scene information.
5. The method of claim 4, wherein searching for at least one piece of data matching the scene based on the scene information comprises:
searching locally and/or in a network for the at least one piece of data according to the first type of information in the scene information.
6. The method of claim 4, wherein building a scene output model based on the scene information comprises:
building the scene output model according to the first type of information and the second type of information in the scene information.
7. An electronic equipment, comprising:
a sensor, configured to obtain current content that a user pays attention to, wherein the current content belongs to a part of full content; and
a processor, configured to analyze the current content and determine scene information of a scene described by the current content; search for at least one piece of data matching the scene based on the scene information; build a scene output model based on the scene information; and output the at least one piece of data through an output unit based on the scene output model, so that the user perceives the at least one piece of data, thereby enhancing the user's experience of the current content.
8. The electronic equipment of claim 7, wherein the processor is configured to:
obtain, based on the scene information, the at least one piece of data from a first database of all scenes corresponding to the full content; and
obtain, based on the scene information, the scene output model from the first database;
wherein the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when the electronic equipment loads the full content.
9. The electronic equipment of claim 8, wherein the processor is configured to:
load a cloud service, wherein the cloud service is used to connect to a cloud server, and the cloud server has an intelligent scene identification engine;
input, based on the cloud service, the full content loaded by the electronic equipment into the intelligent scene identification engine of the cloud server, so that the intelligent scene identification engine analyzes the full content in real time to obtain the first database of all scenes of the full content; and
obtain, based on the cloud service, the first database fed back by the cloud server.
10. The electronic equipment of claim 7, wherein the processor is configured to:
extract, from the current content, keywords meeting a predetermined condition as a first type of information in the scene information, wherein the predetermined condition comprises identification information of at least one output unit of the electronic equipment; and
extract, from the current content, descriptive information for the keywords as a second type of information in the scene information.
11. The electronic equipment of claim 10, wherein the processor is configured to:
search locally and/or in a network for the at least one piece of data according to the first type of information in the scene information.
12. The electronic equipment of claim 10, wherein the processor is configured to:
build the scene output model according to the first type of information and the second type of information in the scene information.
CN201510556707.5A 2015-09-02 2015-09-02 Information processing method and electronic equipment Active CN105183162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510556707.5A CN105183162B (en) Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105183162A true CN105183162A (en) 2015-12-23
CN105183162B CN105183162B (en) 2019-04-23

Family

ID=54905288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510556707.5A Active CN105183162B (en) Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105183162B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000090285A (en) * 1999-09-13 2000-03-31 Toppan Printing Co Ltd Video display device
CN103440307A (en) * 2013-08-23 2013-12-11 北京智谷睿拓技术服务有限公司 Method and device for providing media information
CN103869946A (en) * 2012-12-14 2014-06-18 联想(北京)有限公司 Display control method and electronic device
WO2015100070A1 (en) * 2013-12-27 2015-07-02 Alibaba Group Holding Limited Presenting information based on a video
CN104866116A (en) * 2015-03-25 2015-08-26 百度在线网络技术(北京)有限公司 Method and device for outputting expression information

Also Published As

Publication number Publication date
CN105183162B (en) 2019-04-23

Similar Documents

Publication Publication Date Title
WO2022078102A1 (en) Entity identification method and apparatus, device and storage medium
CN103456314B (en) A kind of emotion identification method and device
US10176198B1 (en) Techniques for identifying visually similar content
CN110139159A (en) Processing method, device and the storage medium of video material
CN108833973A (en) Extracting method, device and the computer equipment of video features
JP2020528705A (en) Moving video scenes using cognitive insights
CN109271542A (en) Cover determines method, apparatus, equipment and readable storage medium storing program for executing
CN105825191A (en) Face multi-attribute information-based gender recognition method and system and shooting terminal
CN108959323B (en) Video classification method and device
CN104881451A (en) Image searching method and image searching device
CN111368141B (en) Video tag expansion method, device, computer equipment and storage medium
KR20200009117A (en) Systems for data collection and analysis
CN104866308A (en) Scenario image generation method and apparatus
US20150147045A1 (en) Computer ecosystem with automatically curated video montage
CN103970791A (en) Method and device for recommending video from video database
CN105893404A (en) Natural information identification based pushing system and method, and client
CN103942243A (en) Display apparatus and method for providing customer-built information using the same
CN103019730A (en) Method for displaying interface element and electronic equipment
CN113254684B (en) Content aging determination method, related device, equipment and storage medium
CN111444357A (en) Content information determination method and device, computer equipment and storage medium
CN114357278B (en) Topic recommendation method, device and equipment
US20190378506A1 (en) Method and apparatus for synthesizing adaptive data visualizations
CN111414506A (en) Emotion processing method and device based on artificial intelligence, electronic equipment and storage medium
CN103902564A (en) File showing method and device
CN103020117A (en) Service contrast method and service contrast system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant