CN105183162B - Information processing method and electronic device - Google Patents
Information processing method and electronic device
- Publication number
- CN105183162B CN105183162B CN201510556707.5A CN201510556707A CN105183162B CN 105183162 B CN105183162 B CN 105183162B CN 201510556707 A CN201510556707 A CN 201510556707A CN 105183162 B CN105183162 B CN 105183162B
- Authority
- CN
- China
- Prior art keywords
- scene
- information
- electronic equipment
- content
- full content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Information Transfer Between Computers (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses an information processing method for solving the technical problem that the reading modes an electronic device can provide are relatively limited. The method includes: acquiring, through a sensor, the content a user is currently interested in, where the current content is part of the full content; analyzing the current content to determine scene information of the scene described by the current content; searching, based on the scene information, for at least one piece of data matching the scene; constructing a scene output model based on the scene information; and outputting the at least one piece of data through an output device according to the scene output model, so that the user perceives the at least one piece of data, thereby enhancing the user's experience of the current content. The invention also discloses a corresponding electronic device.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to an information processing method and an electronic device.
Background technique
In today's society of rapidly developing science and technology, electronic devices of all kinds have made life more comfortable: people use them to watch movies, play games, listen to music, and so on. Nevertheless, traditional reading remains an indispensable part of people's lives.
In existing reading modes, people usually view books, or the text and images on an electronic device, directly with their eyes. For example, when reading on an electronic device, a user can only look at the characters or images shown on the screen. Such reading is rather monotonous and tiring for the eyes; moreover, because the information an electronic device outputs is limited, the reading modes it can provide are relatively limited.
Summary of the invention
Embodiments of the present invention provide an information processing method to solve the technical problem that the reading modes an electronic device can provide are relatively limited.
In a first aspect, an information processing method is provided, comprising:
acquiring, through a sensor, the content a user is currently interested in, where the current content is part of the full content;
analyzing the current content to determine scene information of the scene described by the current content;
searching, based on the scene information, for at least one piece of data matching the scene;
constructing a scene output model based on the scene information; and
outputting the at least one piece of data through an output device according to the scene output model, so that the user perceives the at least one piece of data, thereby enhancing the user's experience of the current content.
Optionally,
searching, based on the scene information, for at least one piece of data matching the scene comprises:
obtaining the at least one piece of data, based on the scene information, from a first database of all scenes corresponding to the full content; and
constructing a scene output model based on the scene information comprises:
obtaining the scene output model from the first database based on the scene information;
where the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when the electronic device loads it.
Optionally, analyzing the full content in real time when the electronic device loads it, to obtain the first database of all scenes of the full content, comprises:
loading a cloud service, where the cloud service is used to connect to a cloud server that has an intelligent scene recognition engine;
inputting, through the cloud service, the full content loaded by the electronic device into the intelligent scene recognition engine of the cloud server, so that the recognition engine analyzes the full content in real time to obtain the first database of all scenes of the full content; and
obtaining, through the cloud service, the first database fed back by the cloud server.
Optionally, analyzing the current content to determine scene information of the scene described by the current content comprises:
extracting, from the current content, keywords satisfying a predetermined condition as first-category information in the scene information, where the predetermined condition is identification information of at least one output device included in the electronic device; and
extracting, from the current content, description information of the keywords as second-category information in the scene information.
Optionally, searching, based on the scene information, for at least one piece of data matching the scene comprises:
searching locally and/or on a network for the at least one piece of data according to the first-category information in the scene information.
Optionally, constructing a scene output model based on the scene information comprises:
constructing the scene output model according to the first-category information and the second-category information in the scene information.
In a second aspect, an electronic device is provided, comprising:
a sensor for acquiring the content a user is currently interested in, where the current content is part of the full content; and
a processor for analyzing the current content to determine scene information of the scene described by the current content; searching, based on the scene information, for at least one piece of data matching the scene; constructing a scene output model based on the scene information; and outputting the at least one piece of data through an output device according to the scene output model, so that the user perceives the at least one piece of data, thereby enhancing the user's experience of the current content.
Optionally, the processor is configured to:
obtain the at least one piece of data, based on the scene information, from a first database of all scenes corresponding to the full content; and
obtain the scene output model from the first database based on the scene information;
where the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when the electronic device loads it.
Optionally, the processor is configured to:
load a cloud service, where the cloud service is used to connect to a cloud server that has an intelligent scene recognition engine;
input, through the cloud service, the full content loaded by the electronic device into the intelligent scene recognition engine of the cloud server, so that the recognition engine analyzes the full content in real time to obtain the first database of all scenes of the full content; and
obtain, through the cloud service, the first database fed back by the cloud server.
Optionally, the processor is configured to:
extract, from the current content, keywords satisfying a predetermined condition as first-category information in the scene information, where the predetermined condition is identification information of at least one output device included in the electronic device; and
extract, from the current content, description information of the keywords as second-category information in the scene information.
Optionally, the processor is configured to:
search locally and/or on a network for the at least one piece of data according to the first-category information in the scene information.
Optionally, the processor is configured to:
construct the scene output model according to the first-category information and the second-category information in the scene information.
In the embodiments of the present invention, when a user views text or images on an electronic device, the device can analyze the content the user is currently focusing on and obtain data matching the scene that content describes. The device can construct a scene output model from the scene information of that scene and output the data according to the model. In this way, the user can genuinely perceive data consistent with the scene described by the content being read. Because the electronic device can output richer information, the reading modes it can provide are enriched, and its degree of intelligence is improved. The user obtains an immersive reading experience, which makes reading more enjoyable.
Detailed description of the invention
Fig. 1 is a flowchart of the information processing method in an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the electronic device in an embodiment of the present invention;
Fig. 3 is a structural block diagram of the electronic device in an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The electronic device in the embodiments of the present invention may be a wearable device, such as smart glasses or a smart helmet, or may be an electronic device such as a tablet computer (PAD) or a mobile phone; the present invention is not limited in this regard.
In addition, the term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" herein generally indicates an "or" relationship between the objects before and after it, unless otherwise specified.
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, an embodiment of the present invention provides an information processing method that can be applied to an electronic device. The flow of the method is as follows.
Step 101: acquire, through a sensor, the content a user is currently interested in, where the current content is part of the full content;
Step 102: analyze the current content to determine scene information of the scene described by the current content;
Step 103: search, based on the scene information, for at least one piece of data matching the scene;
Step 104: construct a scene output model based on the scene information;
Step 105: output the at least one piece of data through an output device according to the scene output model, so that the user perceives the at least one piece of data, thereby enhancing the user's experience of the current content.
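The five steps above can be sketched end to end as a small pipeline. Every name below (`analyze_scene`, `find_data`, and so on) and the toy keyword and data tables are hypothetical illustrations, not part of the patent:

```python
# Illustrative sketch of steps 101-105, with stub sensor input and output hooks.

def analyze_scene(current_content, device_keywords):
    """Step 102: pick out keywords some output device can render."""
    return [w for w in device_keywords if w in current_content]

def find_data(scene_info, local_store):
    """Step 103: look up at least one piece of data matching the scene."""
    return {k: local_store[k] for k in scene_info if k in local_store}

def build_output_model(scene_info):
    """Step 104: a trivial 'control strategy' -- here, just an output order."""
    return {"order": sorted(scene_info)}

def render(data, model):
    """Step 105: emit the data in the order the model dictates."""
    return [data[k] for k in model["order"] if k in data]

# Step 101 is assumed to have produced this current content via a sensor.
content = "hoofbeats come from afar under a light rain"
store = {"hoofbeats": "hoofbeats.wav", "rain": "rain.wav"}
scene = analyze_scene(content, ["hoofbeats", "rain", "thunder"])
out = render(find_data(scene, store), build_output_model(scene))
```

A real device would replace the dictionaries with sensor readings, a data store, and actual output drivers; only the shape of the flow is meant to carry over.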
The full content is the text or image content the electronic device can obtain, which may be everything the user is going to read. The present invention does not limit the form of the text or images: they may be electronic text or images, or text or images in paper books and periodicals. Any text or image content the electronic device can obtain may serve as the full content described in the embodiments of the present invention.
For example, the full content may be the content of an e-book stored on the electronic device, in which case the device obtains it directly from local storage. It may be the content of an e-book on a network, in which case the device downloads it from the network. It may be the content of a picture stored on another electronic device, in which case the device obtains it by interacting with that other device. Or it may be the content of a paper book, in which case the device obtains it through a camera, and so on.
The current content is the part of the full content the user is currently interested in. The present invention does not limit which part of the full content the current content covers. For example, if the full content is all the text of an e-book, the current content may be the text of the page the user is currently reading, or the text of the paragraph the user is currently reading.
Optionally, in an embodiment of the present invention, acquiring the content a user is currently interested in through a sensor comprises:
obtaining, through the sensor and based on eye-tracking technology, position information of the location the user's eyes are directed at; and
acquiring the content the user is currently interested in according to the position information.
In the embodiments of the present invention, any device capable of obtaining position information of the location the user's eyes are directed at may serve as the sensor; the present invention is not limited in this regard. For example, the sensor may be an infrared device that actively projects light beams such as infrared rays onto the iris of the user's eye and extracts features to determine the position. The sensor may also be an image-capture device that determines the position by capturing features of the user's eye and its surroundings.
For example, suppose the full content is three paragraphs of text numbered 1, 2, and 3, and the position the user is focusing on, obtained through an infrared device, is a sentence in paragraph 2. Then that sentence may be taken as the current content; alternatively, the whole of paragraph 2, or paragraphs 1 and 2 together, may be taken as the current content, and so on. In this way, the content the user is focusing on can be obtained relatively accurately, and the electronic device's degree of intelligence is high.
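One way to map a gaze position to the paragraph being read can be sketched as follows. The screen layout, the vertical-span representation, and the paragraph granularity are illustrative assumptions, not details from the patent:

```python
# Sketch: mapping a gaze point (as reported by an eye tracker) to the
# paragraph whose on-screen region contains it.

# Each paragraph occupies a vertical span on screen: (top_y, bottom_y, text).
paragraphs = [
    (0, 100, "Paragraph 1 text ..."),
    (100, 220, "Paragraph 2 text ..."),
    (220, 300, "Paragraph 3 text ..."),
]

def current_content(gaze_y):
    """Return the paragraph whose vertical span contains the gaze point."""
    for top, bottom, text in paragraphs:
        if top <= gaze_y < bottom:
            return text
    return None  # gaze is off the text area

focus = current_content(150)  # eye tracker reports y = 150
```

A finer granularity (sentence or word) would work the same way with smaller regions; a coarser one (page) with larger regions.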
In general, text can describe a scene, so the electronic device can analyze the current content to obtain scene information of the scene it describes. For example, if the current content includes the text "a light rain is falling", the device can determine by analysis that the scene information includes "light rain".
Optionally, in an embodiment of the present invention, analyzing the current content to determine scene information of the scene described by the current content comprises:
extracting, from the current content, keywords satisfying a predetermined condition as first-category information in the scene information, where the predetermined condition is identification information of at least one output device included in the electronic device; and
extracting, from the current content, description information of the keywords as second-category information in the scene information.
In the embodiments of the present invention, the predetermined condition can be determined by the output devices the electronic device includes. For example, if the device includes a temperature output device, such as a heating sheet, then temperature-describing words such as "hot" or "cold" can be regarded as satisfying the predetermined condition. If the device includes an audio output device, then words that correspond to sound, such as "hoofbeats", "the sound of rain", or "the sound of wind", can be regarded as satisfying the predetermined condition. That is, a keyword satisfying the predetermined condition is one that corresponds to an output device of the electronic device; naturally, different types of output devices correspond to different keywords.
In the embodiments of the present invention, the keywords in the current content that satisfy the predetermined condition are called first-category information, and the words describing the first-category information are called second-category information. For example, if the current content is the text "the weather is getting hotter and hotter", the first-category information may include "hot", and the second-category information may include "hotter and hotter" describing "hot". If the current content is "hoofbeats come from afar", the first-category information may include "hoofbeats", and the second-category information may include "come from afar" describing "hoofbeats", and so on.
Obtaining these two categories of information helps the electronic device construct a more lifelike scene output model and improves its information processing capability.
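The two-category split can be sketched as below. The tiny device-to-keyword table and the naive rule that the words preceding a keyword describe it are illustrative assumptions only:

```python
# Sketch: split scene information into first-category keywords (words some
# output device can render) and second-category descriptions (their modifiers).

DEVICE_KEYWORDS = {
    "audio": {"hoofbeats", "rain", "wind"},
    "temperature": {"hot", "cold"},
}

def extract_scene_info(words):
    renderable = set().union(*DEVICE_KEYWORDS.values())
    first, second = [], {}
    for i, w in enumerate(words):
        if w in renderable:
            first.append(w)
            # naive illustrative rule: words before the keyword describe it
            second[w] = " ".join(words[:i])
    return first, second

first, second = extract_scene_info("come from afar hoofbeats".split())
```

A real implementation would use proper linguistic analysis to attach modifiers to keywords; only the keyword/description split itself comes from the patent.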
The at least one piece of data may include data that matches the scene corresponding to the current content and can be output by an output device of the electronic device; the embodiments of the present invention place no restriction on the type of the data. For example, it may include audio data, which the device outputs through an audio output device (such as an earphone); temperature data, which the device outputs through a temperature output device; or odor data, which the device outputs through an odor output device, and so on.
Optionally, in an embodiment of the present invention, searching, based on the scene information, for at least one piece of data matching the scene comprises:
searching locally and/or on a network for the at least one piece of data according to the first-category information in the scene information.
In practice, the electronic device may store multiple pieces of data locally, in which case it can search locally for the at least one piece of data after determining the scene information. Alternatively, if no data is stored locally, the device can search on the network after determining the scene information. Or, if data is stored locally, the device can search both locally and on the network for data corresponding to the determined scene; in that case it obtains the at least one piece of data through both local and network search.
In this way, the electronic device can search for data corresponding to the scene information through multiple channels, which improves its information processing capability.
In this embodiment, the electronic device can determine the at least one piece of data to search for according to the first-category information. The number of pieces of first-category information may or may not equal the number of pieces of data. For example, if the first-category information includes "thunder", "birds singing among fragrant flowers", and "hot", the data found may include audio data for "thunder", audio data for "birdsong", odor data for "flower fragrance", and temperature data for "heat".
In this way, the electronic device searches for the at least one piece of data directly according to the first-category information. Since the first-category information is a small subset of all the information in the scene information, this reduces the device's workload and improves search efficiency.
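The local-then-network search driven only by first-category keywords can be sketched as follows. The data store contents and the `fetch_remote` stand-in for a network call are illustrative assumptions:

```python
# Sketch of step 103: try the local store first, then fall back to a
# network lookup, keyed solely by the first-category keywords.

LOCAL_STORE = {"thunder": "thunder.wav", "heat": 30.0}

def fetch_remote(keyword):
    # stand-in for a real network search; here it only knows birdsong
    return {"birdsong": "birdsong.ogg"}.get(keyword)

def find_data(first_category):
    found = {}
    for kw in first_category:
        value = LOCAL_STORE.get(kw)      # local search
        if value is None:
            value = fetch_remote(kw)     # and/or network search
        if value is not None:
            found[kw] = value
    return found

data = find_data(["thunder", "birdsong", "heat", "flower scent"])
```

Note that keywords with no match anywhere ("flower scent" above) simply yield no data, matching the "at least one piece of data" framing rather than requiring a hit per keyword.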
The scene output model in the embodiments of the present invention can be regarded as a control strategy: it is used to control how the output devices of the electronic device output the at least one piece of data. For example, the scene output model can control the output mode of the data, where the output mode may include output effect, output order, and so on.
Optionally, in an embodiment of the present invention, constructing a scene output model based on the scene information comprises:
constructing the scene output model according to the first-category information and the second-category information in the scene information.
For example, if the current content includes "hoofbeats come from afar; a lone rider gallops past", the first-category information may include "hoofbeats", and the second-category information may include "come from afar" and "gallops". A scene output model constructed from this first- and second-category information can control the audio output device of the electronic device to play the audio corresponding to "hoofbeats" with volume from low to high and frequency from low to high. In this way, the user can more genuinely experience the scene described by the current content, which enhances the reading experience and improves the device's degree of intelligence.
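As a control strategy, the output model in this example reduces to a playback schedule derived from the description. The mapping rule ("afar" means an approaching sound, hence rising volume) and the discrete three-step ramp are illustrative assumptions:

```python
# Sketch of a scene output model: the second-category description
# "come from afar" is turned into a rising volume ramp for the keyword.

def build_output_model(keyword, description):
    if "afar" in description:        # approaching sound: ramp volume up
        ramp = [0.2, 0.5, 1.0]
    else:                            # otherwise play at constant volume
        ramp = [1.0, 1.0, 1.0]
    return {"keyword": keyword, "volume_ramp": ramp}

model = build_output_model("hoofbeats", "come from afar")

def play(model):
    """Stand-in for the audio output device: report each playback step."""
    return [f"{model['keyword']} @ {v:.0%}" for v in model["volume_ramp"]]

steps = play(model)
```

A receding sound would use the reversed ramp; other output devices (temperature, odor) would get analogous schedules for intensity.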
Optionally, in an embodiment of the present invention,
searching, based on the scene information, for at least one piece of data matching the scene comprises:
obtaining the at least one piece of data, based on the scene information, from a first database of all scenes corresponding to the full content; and
constructing a scene output model based on the scene information comprises:
obtaining the scene output model from the first database based on the scene information;
where the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when the electronic device loads it.
The first database may include, for example, all scene information corresponding to the full content, the data corresponding to each piece of scene information, and the scene output model corresponding to each piece of scene information.
In the embodiments of the present invention, the user can import the full content into the electronic device before reading. For example, after an e-book is downloaded to the device, the device can pre-load and analyze its full content, obtain the scene information of all scenes the full content includes, find the data matching each scene, and thereby obtain a database for all scenes, i.e. the first database of the present invention. When the user reads, the device can directly obtain, from the existing first database, the data and the scene output model corresponding to the scene information of the content the user is currently interested in. In this way, the device does not need to search for scene data and construct scene output models on the fly, which effectively reduces its response time, shortens the user's wait, and increases the speed at which the device processes information.
For example, suppose an e-book is pre-loaded on the electronic device, and after loading the device obtains the first database for that e-book, which includes all scene information of the e-book's full content, the data corresponding to each piece of scene information, and the scene output model corresponding to each piece of scene information. If the content the user is currently reading is, say, "the rain is gradually easing", the device can analyze this content, determine the scene information it describes, and find in the first database at least one piece of matching data together with the corresponding scene output model, for example audio data for "the sound of rain" with a model that controls the audio output device to play it at gradually decreasing volume. The device can then directly output the at least one piece of data.
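The load-time analysis that makes reading-time lookups cheap can be sketched as a single pass over the full content. The scene vocabulary, data payloads, and output-model labels below are illustrative assumptions:

```python
# Sketch of building the "first database" when the full content is loaded:
# each detected scene maps, ahead of reading, to its data and output model,
# so that lookups while reading are plain dictionary hits.

# scene -> (data payload, output model label); both illustrative
KNOWN_SCENES = {
    "rain": ("rain.wav", "fade-out"),
    "hoofbeats": ("hoofbeats.wav", "fade-in"),
}

def build_first_database(full_content):
    """One pass at load time: scene -> {data, model}."""
    return {s: {"data": d, "model": m}
            for s, (d, m) in KNOWN_SCENES.items() if s in full_content}

book = "the rain is gradually easing; far away, hoofbeats"
first_db = build_first_database(book)
entry = first_db["rain"]   # reading-time lookup needs no fresh analysis
```

The point is the time shift: the per-scene search and model construction happen once at load time, and the reading path pays only for a lookup.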
Optionally, in an embodiment of the present invention, analyzing the full content in real time when the electronic device loads it, to obtain the first database of all scenes of the full content, comprises:
loading a cloud service, where the cloud service is used to connect to a cloud server that has an intelligent scene recognition engine;
inputting, through the cloud service, the full content loaded by the electronic device into the intelligent scene recognition engine of the cloud server, so that the recognition engine analyzes the full content in real time to obtain the first database of all scenes of the full content; and
obtaining, through the cloud service, the first database fed back by the cloud server.
In the embodiments of the present invention, the electronic device can connect to a cloud server by loading a cloud service and input the loaded full content to the server. The cloud server, which has an intelligent scene recognition engine, analyzes the full content and obtains the first database corresponding to all its scenes, and the device finally obtains the first database from the server. In this way, the device does not need to analyze the full content locally, so no device memory is occupied and power is effectively saved; the device's information processing capability is stronger.
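The division of labor between device and cloud can be sketched with two stand-in classes. `CloudSceneEngine` and `CloudService` are hypothetical names, and the in-process call stands in for real network round-trips:

```python
# Sketch of offloading the analysis: the device uploads the full content
# through a cloud service and receives the first database back.

class CloudSceneEngine:
    """Server side: scans the content and returns the scene database."""
    VOCAB = {"rain": "rain.wav", "wind": "wind.ogg"}

    def analyze(self, full_content):
        return {s: d for s, d in self.VOCAB.items() if s in full_content}

class CloudService:
    """Device side: connects to the server and relays content and results."""
    def __init__(self, server):
        self.server = server

    def fetch_first_database(self, full_content):
        # in reality: upload the content, then download the result
        return self.server.analyze(full_content)

db = CloudService(CloudSceneEngine()).fetch_first_database("wind and rain all night")
```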
In the embodiments of the present invention, the electronic device can output the at least one piece of data according to the scene output model, so that the user perceives it. For example, if the at least one piece of data includes audio data for "hoofbeats", the device can, according to the scene output model, output that audio through an audio output device (such as an earphone) at gradually increasing volume.
Referring to Fig. 2, based on the same inventive concept and the above embodiments, an embodiment of the present invention provides an electronic device, which may include a sensor 201, a processor 202, and an output device 203.
In the embodiments of the present invention:
the sensor 201 is used to acquire the content a user is currently interested in, where the current content is part of the full content; and
the processor 202 is used to analyze the current content to determine scene information of the scene described by the current content; search, based on the scene information, for at least one piece of data matching the scene; construct a scene output model based on the scene information; and output the at least one piece of data through the output device 203 according to the scene output model, so that the user perceives the at least one piece of data, thereby enhancing the user's experience of the current content.
Optionally, in an embodiment of the present invention, the processor 202 is configured to:
obtain the at least one piece of data, based on the scene information, from a first database of all scenes corresponding to the full content; and
obtain the scene output model from the first database based on the scene information;
where the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when the electronic device loads it.
Optionally, in an embodiment of the present invention, the processor 202 is configured to:
load a cloud service, where the cloud service is used to connect to a cloud server that has an intelligent scene recognition engine;
input, through the cloud service, the full content loaded by the electronic device into the intelligent scene recognition engine of the cloud server, so that the recognition engine analyzes the full content in real time to obtain the first database of all scenes of the full content; and
obtain, through the cloud service, the first database fed back by the cloud server.
Optionally, in an embodiment of the present invention, the processor 202 is configured to:
extract, from the current content, keywords satisfying a predetermined condition as first-category information in the scene information, where the predetermined condition is identification information of at least one output device 203 included in the electronic device; and
extract, from the current content, description information of the keywords as second-category information in the scene information.
Optionally, in an embodiment of the present invention, the processor 202 is configured to:
search locally and/or on a network for the at least one piece of data according to the first-category information in the scene information.
Optionally, in an embodiment of the present invention, the processor 202 is configured to:
construct the scene output model according to the first-category information and the second-category information in the scene information.
Referring to Fig. 3, based on the same inventive concept and the above embodiments, an embodiment of the present invention provides another electronic device, which may include an obtaining module 301, a determining module 302, a searching module 303, a constructing module 304, and an output module 305.
The obtaining module 301 is used to acquire, through a sensor, the content a user is currently interested in, where the current content is part of the full content;
the determining module 302 is used to analyze the current content to determine scene information of the scene described by the current content;
the searching module 303 is used to search, based on the scene information, for at least one piece of data matching the scene;
the constructing module 304 is used to construct a scene output model based on the scene information; and
the output module 305 is used to output the at least one piece of data through an output device according to the scene output model, so that the user perceives the at least one piece of data, thereby enhancing the user's experience of the current content.
Optionally, in an embodiment of the present invention,
the searching module 303 is configured to:
obtain the at least one piece of data, based on the scene information, from a first database of all scenes corresponding to the full content; and
the constructing module 304 is configured to:
obtain the scene output model from the first database based on the scene information;
where the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when the electronic device loads it.
Optionally, in another embodiment of the present invention, the electronic equipment further includes:
a loading module, configured to load a cloud service, the cloud service being used to connect with a cloud server having an intelligent scene recognition engine;
an input module, configured to input, based on the cloud service, the full content loaded by the electronic equipment into the intelligent scene recognition engine of the cloud server, so that the intelligent scene recognition engine analyzes the full content in real time to obtain the first database of all scenes of the full content;
a receiving module, configured to obtain, based on the cloud service, the first database fed back by the cloud server.
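The cloud variant offloads the scene analysis: the equipment uploads the full content and receives the first database back. The sketch below models this with in-process stubs; the class names, the trivial recognition rule, and the absence of any real network protocol are all assumptions made for illustration:

```python
class CloudServer:
    """Stands in for the cloud server's intelligent scene recognition engine."""
    def recognize_scenes(self, paragraphs):
        # A real engine would run scene recognition; a trivial rule suffices here.
        return {i: ("storm" if "storm" in p else "calm")
                for i, p in enumerate(paragraphs)}

class ElectronicEquipment:
    def __init__(self):
        self.cloud = None
        self.first_database = None

    def load_cloud_service(self):
        # Loading module: connect to the cloud server.
        self.cloud = CloudServer()

    def build_database_via_cloud(self, full_content):
        # Input module: feed the loaded full content into the engine.
        # Receiving module: keep the first database the server feeds back.
        self.first_database = self.cloud.recognize_scenes(full_content)

eq = ElectronicEquipment()
eq.load_cloud_service()
eq.build_database_via_cloud(["a storm broke over the coast", "morning came"])
```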
Optionally, in another embodiment of the present invention, the determining module 302 is configured to:
extract, from the Current Content, a keyword that meets a predetermined condition as the type I information in the scene information, wherein the predetermined condition is the identification information of at least one output device included in the electronic equipment; and
extract, from the Current Content, the description information for the keyword as the second category information in the scene information.
Optionally, in another embodiment of the present invention, the searching module 303 is configured to:
search, locally and/or in a network, for the at least one piece of data according to the type I information in the scene information.
Optionally, in another embodiment of the present invention, the building module 304 is configured to:
construct the scene output model according to the type I information and the second category information in the scene information.
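The split into type I and second category information can be sketched as a single extraction pass. The matching rule (keyword equals a device identifier) and the fixed description window are illustrative assumptions:

```python
def extract_scene_info(current_content, device_identifiers):
    """Return type I information (a keyword matching an output device's
    identification information) and second category information (the words
    describing that keyword)."""
    words = current_content.lower().replace(",", "").split()
    for i, word in enumerate(words):
        if word in device_identifiers:          # the predetermined condition
            return {
                "type_I": word,
                "second_category": " ".join(words[max(0, i - 2):i + 3]),
            }
    return None

info = extract_scene_info(
    "Cold wind swept across the plain",
    device_identifiers={"wind": "fan"},   # "wind" identifies a fan output device
)
```

Here the type I information selects which output device to drive (a fan), while the second category information ("cold wind swept across") parameterizes how it should behave, which is what the scene output model is built from.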
In the embodiments of the present invention, when a user reads text or views images on the electronic equipment, the electronic equipment can analyze the content the user is currently paying attention to, obtain the data corresponding to the scene described by that content, construct a scene output model according to the scene information of the scene, and output the data based on the scene output model. In this way, the user can genuinely experience data that is consistent with the scene described by the content being read. Because the electronic equipment can output more information, the reading modes the electronic equipment is capable of providing are enriched, and the degree of intelligence of the electronic equipment is improved. For the user, an immersive reading experience is obtained, which increases the interest of reading.
It is apparent to those skilled in the art that, for convenience and brevity of description, only the division into the above functional modules is given as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative. The division into modules or units is only a division by logical function; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of this application, in essence, or the part contributing to the existing technology, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
Specifically, the computer program instructions corresponding to the information processing method in the embodiments of this application may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash disk. When the computer program instructions in the storage medium corresponding to the information processing method are read or executed by an electronic equipment, the following steps are included:
obtaining, by a sensor, the content a user is currently interested in, wherein the Current Content belongs to a part of the full content;
analyzing the Current Content, and determining the scene information of the scene described by the Current Content;
searching, based on the scene information, for at least one piece of data that matches the scene;
constructing a scene output model based on the scene information;
outputting the at least one piece of data through an output device based on the scene output model, so that the user experiences the at least one piece of data, thereby enhancing the user's experience of the Current Content.
Optionally, the storage medium further stores computer instructions corresponding to the step of searching, based on the scene information, for at least one piece of data that matches the scene. When executed, these instructions include:
obtaining, based on the scene information, the at least one piece of data from a first database of all scenes corresponding to the full content.
The storage medium further stores computer instructions corresponding to the step of constructing a scene output model based on the scene information. When executed, these instructions include:
obtaining, based on the scene information, the scene output model from the first database;
wherein the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when the electronic equipment loads the full content.
Optionally, the storage medium further stores computer instructions corresponding to the step of analyzing, in real time when the electronic equipment loads the full content, the full content to obtain the first database of all scenes of the full content. When executed, these instructions include:
loading a cloud service, the cloud service being used to connect with a cloud server having an intelligent scene recognition engine;
inputting, based on the cloud service, the full content loaded by the electronic equipment into the intelligent scene recognition engine of the cloud server, so that the intelligent scene recognition engine analyzes the full content in real time to obtain the first database of all scenes of the full content;
obtaining, based on the cloud service, the first database fed back by the cloud server.
Optionally, the storage medium further stores computer instructions corresponding to the step of analyzing the Current Content and determining the scene information of the scene described by the Current Content. When executed, these instructions include:
extracting, from the Current Content, a keyword that meets a predetermined condition as the type I information in the scene information, wherein the predetermined condition is the identification information of at least one output device included in the electronic equipment; and
extracting, from the Current Content, the description information for the keyword as the second category information in the scene information.
Optionally, the storage medium further stores computer instructions corresponding to the step of searching, based on the scene information, for at least one piece of data that matches the scene. When executed, these instructions include:
searching, locally and/or in a network, for the at least one piece of data according to the type I information in the scene information.
Optionally, the storage medium further stores computer instructions corresponding to the step of constructing a scene output model based on the scene information. When executed, these instructions include:
constructing the scene output model according to the type I information and the second category information in the scene information.
The above embodiments merely describe the technical solutions of this application in detail. The description of the above embodiments is only intended to help understand the method of the present invention and its core concept, and should not be construed as limiting the present invention. Any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.
Claims (12)
1. An information processing method, comprising:
obtaining, by a sensor, content a user is currently interested in, wherein the Current Content belongs to a part of full content;
analyzing the Current Content, and determining scene information of a scene described by the Current Content;
searching, based on the scene information, for at least one piece of data that matches the scene;
constructing a scene output model based on the scene information;
outputting the at least one piece of data through an output device based on the scene output model, so that the user experiences the at least one piece of data, thereby enhancing the user's experience of the Current Content.
2. The method according to claim 1, characterized in that
searching, based on the scene information, for at least one piece of data that matches the scene comprises:
obtaining, based on the scene information, the at least one piece of data from a first database of all scenes corresponding to the full content;
constructing a scene output model based on the scene information comprises:
obtaining, based on the scene information, the scene output model from the first database;
wherein the first database is a database of all scenes of the full content, obtained by analyzing the full content in real time when an electronic equipment loads the full content.
3. The method according to claim 2, characterized in that analyzing, in real time when the electronic equipment loads the full content, the full content to obtain the first database of all scenes of the full content comprises:
loading a cloud service, the cloud service being used to connect with a cloud server having an intelligent scene recognition engine;
inputting, based on the cloud service, the full content loaded by the electronic equipment into the intelligent scene recognition engine of the cloud server, so that the intelligent scene recognition engine analyzes the full content in real time to obtain the first database of all scenes of the full content;
obtaining, based on the cloud service, the first database fed back by the cloud server.
4. The method according to claim 1, characterized in that analyzing the Current Content and determining the scene information of the scene described by the Current Content comprises:
extracting, from the Current Content, a keyword that meets a predetermined condition as type I information in the scene information, wherein the predetermined condition is identification information of at least one output device included in the electronic equipment; and
extracting, from the Current Content, description information for the keyword as second category information in the scene information.
5. The method according to claim 4, characterized in that searching, based on the scene information, for at least one piece of data that matches the scene comprises:
searching, locally and/or in a network, for the at least one piece of data according to the type I information in the scene information.
6. The method according to claim 4, characterized in that constructing a scene output model based on the scene information comprises:
constructing the scene output model according to the type I information and the second category information in the scene information.
7. An electronic equipment, comprising:
a sensor, configured to obtain content a user is currently interested in, wherein the Current Content belongs to a part of full content;
a processor, configured to: analyze the Current Content and determine scene information of a scene described by the Current Content; search, based on the scene information, for at least one piece of data that matches the scene; construct a scene output model based on the scene information; and output the at least one piece of data through an output device based on the scene output model, so that the user experiences the at least one piece of data, thereby enhancing the user's experience of the Current Content.
8. The electronic equipment according to claim 7, characterized in that the processor is configured to:
obtain, based on the scene information, the at least one piece of data from a first database of all scenes corresponding to the full content;
obtain, based on the scene information, the scene output model from the first database;
wherein the first database is a database of all scenes of the full content, obtained by the electronic equipment analyzing the full content in real time when loading the full content.
9. The electronic equipment according to claim 8, characterized in that the processor is configured to:
load a cloud service, the cloud service being used to connect with a cloud server having an intelligent scene recognition engine;
input, based on the cloud service, the full content loaded by the electronic equipment into the intelligent scene recognition engine of the cloud server, so that the intelligent scene recognition engine analyzes the full content in real time to obtain the first database of all scenes of the full content;
obtain, based on the cloud service, the first database fed back by the cloud server.
10. The electronic equipment according to claim 7, characterized in that the processor is configured to:
extract, from the Current Content, a keyword that meets a predetermined condition as type I information in the scene information, wherein the predetermined condition is identification information of at least one output device included in the electronic equipment; and
extract, from the Current Content, description information for the keyword as second category information in the scene information.
11. The electronic equipment according to claim 10, characterized in that the processor is configured to:
search, locally and/or in a network, for the at least one piece of data according to the type I information in the scene information.
12. The electronic equipment according to claim 10, characterized in that the processor is configured to:
construct the scene output model according to the type I information and the second category information in the scene information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510556707.5A CN105183162B (en) | 2015-09-02 | 2015-09-02 | A kind of information processing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105183162A CN105183162A (en) | 2015-12-23 |
CN105183162B true CN105183162B (en) | 2019-04-23 |
Family
ID=54905288
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510556707.5A Active CN105183162B (en) | 2015-09-02 | 2015-09-02 | A kind of information processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105183162B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440307A (en) * | 2013-08-23 | 2013-12-11 | 北京智谷睿拓技术服务有限公司 | Method and device for providing media information |
CN103869946A (en) * | 2012-12-14 | 2014-06-18 | 联想(北京)有限公司 | Display control method and electronic device |
CN104866116A (en) * | 2015-03-25 | 2015-08-26 | 百度在线网络技术(北京)有限公司 | Method and device for outputting expression information |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3341734B2 (en) * | 1999-09-13 | 2002-11-05 | 凸版印刷株式会社 | Video display device |
WO2015100070A1 (en) * | 2013-12-27 | 2015-07-02 | Alibaba Group Holding Limited | Presenting information based on a video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||