CN109933687A - Information processing method, device and electronic equipment

Information processing method, device and electronic equipment

Info

Publication number
CN109933687A
Authority
CN
China
Prior art keywords
user
default object
interactive voice
default
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910188469.5A
Other languages
Chinese (zh)
Other versions
CN109933687B (en)
Inventor
王东洋
黎广斌
张旭辉
谢兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201910188469.5A
Publication of CN109933687A
Application granted
Publication of CN109933687B
Legal status: Active
Anticipated expiration


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose an information processing method, an information processing device and electronic equipment. An image collection of a preset area is acquired, the preset area containing a user and a preset object; the acquired image collection is analyzed to determine the user's use state of the preset object; when the user's use state of the preset object is a preset state, an interactive voice associated with the preset object is output, so as to prompt the user to process the preset object based on the interactive voice. By interacting with the user in the manner provided by the present application, the user can be kept continuously processing the preset object, improving the user's attention to the preset object.

Description

Information processing method, device and electronic equipment
Technical field
The present application relates to the technical field of information processing, and more specifically to an information processing method, an information processing device and electronic equipment.
Background art
Currently, children often fail to complete their learning tasks in time during study (for example, while doing homework) because their attention wanders. For this reason, parents usually supervise the child's study, but this approach puts pressure on the child, its effect is limited, and the child still may not complete the learning tasks in time.
Summary of the invention
The purpose of the present application is to provide an information processing method, an information processing device and electronic equipment that at least partially overcome the technical problems present in the prior art.
To achieve the above object, the present application provides the following technical solutions:
An information processing method, comprising:
acquiring a first image collection of a preset area, the preset area containing a user and a preset object;
analyzing the first image collection to obtain a first analysis result, the first analysis result characterizing the user's use state of the preset object;
when the first analysis result characterizes the user's use state of the preset object as a preset state, outputting an interactive voice associated with the preset object, so that the user processes the preset object based on the interactive voice.
In the above method, preferably, outputting the interactive voice associated with the preset object comprises:
identifying target content in the preset object, the target content being at least part of the content not yet processed by the user;
searching for first information associated with the target content;
outputting a first interactive voice whose content includes the target content, the first interactive voice instructing the user to process the target content in the preset object and to give a processing result that should include the first information.
In the above method, preferably, after outputting the first interactive voice, the method further comprises:
obtaining a second image collection of the preset area;
analyzing the second image collection to obtain a second analysis result, the second analysis result characterizing the processing result given by the user for the target content;
comparing the processing result with the first information;
if the processing result includes the first information, outputting a fourth interactive voice, the fourth interactive voice instructing the user to process new target content;
if the processing result does not include the first information, outputting a second interactive voice, the second interactive voice instructing the user to process the target content again.
In the above method, preferably, the method further comprises:
monitoring the frequency, within a preset duration, at which the user's use state of the preset object is the preset state;
when the frequency is greater than a preset threshold, outputting a first prompt voice, the first prompt voice prompting the user about the cost paid because the user's use state of the preset object was the preset state.
In the above method, preferably, the method further comprises:
monitoring the frequency, within a preset duration, at which the user's use state of the preset object is the preset state;
calculating the user's ranking based on that frequency and the frequencies of target users, a target user being a user who has a preset association with the user;
outputting the ranking as a second prompt voice.
In the above method, preferably, the target user having a preset association with the user comprises: the personal basic information of the target user and the personal basic information of the user satisfying the same condition.
In the above method, preferably, the target user having a preset association with the user comprises: the target user being a network friend of the user.
In the above method, preferably, the output manner of the interactive voice matches the personal basic information of the user.
An information processing device, comprising:
an acquisition module, configured to acquire a first image collection of a preset area, the preset area containing a user and a preset object;
an analysis module, configured to analyze the first image collection to obtain a first analysis result, the first analysis result characterizing the user's use state of the preset object;
an output module, configured to output an interactive voice associated with the preset object when the first analysis result characterizes the user's use state of the preset object as a preset state, so that the user processes the preset object based on the interactive voice.
Electronic equipment, comprising:
an image acquisition unit, configured to acquire images;
a memory, configured to store at least one instruction set;
a processor, configured to call and execute the instruction set in the memory and, by executing the instruction set, to perform the following operations:
acquiring a first image collection of a preset area through the image acquisition unit, the preset area containing a user and a preset object; analyzing the first image collection to obtain a first analysis result, the first analysis result characterizing the user's use state of the preset object; when the first analysis result characterizes the user's use state of the preset object as a preset state, outputting an interactive voice associated with the preset object, so that the user processes the preset object based on the interactive voice.
It can be seen from the above solutions that, with the information processing method, device and electronic equipment provided by the present application, an image collection of a preset area containing a user and a preset object is acquired; the acquired image collection is analyzed to determine the user's use state of the preset object; and when the user's use state of the preset object is a preset state, an interactive voice associated with the preset object is output to prompt the user to process the preset object based on the interactive voice. By interacting with the user in this manner, the user can be kept continuously processing the preset object, improving the user's attention to the preset object.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is an implementation flow chart of an information processing method provided by an embodiment of the present application;
Fig. 2 is an implementation flow chart of outputting an interactive voice associated with a preset object, provided by an embodiment of the present application;
Fig. 3 is a structural schematic diagram of an information processing device provided by an embodiment of the present application;
Fig. 4 is another structural schematic diagram of an information processing device provided by an embodiment of the present application;
Fig. 5 is a structural schematic diagram of electronic equipment provided by an embodiment of the present application;
Fig. 6 is another structural schematic diagram of electronic equipment provided by an embodiment of the present application.
The terms "first", "second", "third", "fourth" and the like (if present) in the specification, the claims and the above drawings are used to distinguish similar parts, and are not used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated herein.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
The information processing method and device provided by the embodiments of the present application can be applied to electronic equipment. The electronic equipment may be a device commonly used by the user, such as a desk lamp, a mobile phone or a speaker; it may also be an independent device that is not normally used by the user and is dedicated to executing the information processing method provided by the present application.
Referring to Fig. 1, Fig. 1 is an implementation flow chart of an information processing method provided by an embodiment of the present application, which may include:
Step S11: acquiring a first image collection of a preset area, the preset area containing a user and a preset object.
The preset area is the area in which the user uses the preset object. The user can adjust the position, angle, etc. of the electronic equipment or of its image acquisition unit in advance so that images of the preset area can be collected. Images of the preset area may be acquired continuously, i.e. a video of the preset area may be shot.
The preset object may be a paper document used by the user (the paper document may be a sheet of paper or, for example, a book). The preset object may also be another object that the user can use, such as electronic equipment, for example a tablet computer together with the electronic document displayed by that tablet computer; as another example, the electronic equipment may be an electronic whiteboard.
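The following is a minimal sketch, not taken from the patent, of what the continuous acquisition in step S11 could look like, assuming an OpenCV-accessible camera; the device index, frame count and helper name are illustrative assumptions.

import cv2

def capture_image_collection(device_index: int = 0, num_frames: int = 30):
    """Grab a fixed number of frames from the camera covering the preset area."""
    cap = cv2.VideoCapture(device_index)
    frames = []
    try:
        while len(frames) < num_frames:
            ok, frame = cap.read()
            if not ok:
                break                  # camera unavailable or stream ended
            frames.append(frame)
    finally:
        cap.release()
    return frames

if __name__ == "__main__":
    collection = capture_image_collection()
    print(f"captured {len(collection)} frames")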
Step S12: analyzing the first image collection to obtain a first analysis result, the first analysis result characterizing the user's use state of the preset object.
In the embodiments of the present application, the user's use state of the preset object includes at least two states: the user is continuously using the preset object, and the user is not continuously using the preset object.
Specifically, when analyzing the acquired image collection, target detection can be performed on each image to identify the user and the preset object in the image, and the user's use state of the preset object can be determined by analyzing the relative positional relationship between the user and the preset object. For example, if it is detected that the distance between some part of the user (such as the head, face, ear or hand) and the preset object is not within the distance range corresponding to that part, the user is not using the preset object at that moment. For an accurate judgment of the user's use state of the preset object, the frequency at which the user is not using the preset object can be monitored: if this frequency is greater than a first preset value, the user is not continuously using the preset object, i.e. the user's attention is not focused on the preset object; if this frequency is less than the first preset value, the user is continuously using the preset object, i.e. the user's attention is focused on the preset object.
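As an illustration of the distance-based analysis just described, the sketch below assumes a generic detector (supplied as a callable) that returns bounding boxes for a body part and for the preset object; the distance threshold and the "first preset value" used here are invented numbers, not values from the patent.

from dataclasses import dataclass
from typing import Callable, Dict, Sequence

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

    @property
    def center(self):
        return (self.x + self.w / 2, self.y + self.h / 2)

def distance(a: Box, b: Box) -> float:
    (ax, ay), (bx, by) = a.center, b.center
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def is_focused(frames: Sequence, detect: Callable[[object], Dict[str, Box]],
               max_distance: float = 150.0, first_preset_value: float = 0.3) -> bool:
    """Return True if the user appears to be continuously using the preset object."""
    not_using = 0
    for frame in frames:
        parts = detect(frame)                       # e.g. {"hand": Box, "object": Box}
        hand, obj = parts.get("hand"), parts.get("object")
        if hand is None or obj is None or distance(hand, obj) > max_distance:
            not_using += 1
    frequency = not_using / max(len(frames), 1)     # share of frames where the object is not in use
    return frequency < first_preset_value           # below the first preset value => focused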
In another optional embodiment of analyzing the first image collection, the user's use state of the preset object can be determined from the state of the user alone. For example, the frequency and amplitude of the user's movements can be monitored; if the amplitude of the user's movements exceeds a preset range, and the frequency with which that preset range is exceeded is greater than a second preset value, it can be determined that the user is not continuously using the preset object, i.e. the user's attention is not focused on the preset object. Otherwise, the user is continuously using the preset object, i.e. the user's attention is focused on the preset object.
In yet another optional embodiment of analyzing the first image collection, the first image collection can be input into a neural network model trained in advance, and the result output by the neural network model indicates the user's use state of the preset object. The neural network model can be trained on a labelled set of video samples, where each video sample carries a label describing the use state, by the user in that sample, of the object in that sample (i.e. continuously using the object, or not continuously using the object).
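A minimal sketch of the neural-network variant follows; the architecture, input shape and class labels are assumptions chosen only to illustrate a clip classifier of the kind described, not the model actually used by the patent.

import torch
import torch.nn as nn

class UseStateClassifier(nn.Module):
    """Tiny 3D-conv classifier over a clip: 0 = continuously using the object, 1 = not using it."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),     # collapse time and space
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, clip):             # clip: (batch, 3, frames, height, width)
        x = self.features(clip).flatten(1)
        return self.head(x)

model = UseStateClassifier().eval()
with torch.no_grad():
    clip = torch.randn(1, 3, 16, 112, 112)          # stand-in for an image collection
    state = model(clip).argmax(dim=1).item()        # predicted use state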
Step S13: when the first analysis result characterizes the user's use state of the preset object as a preset state, outputting an interactive voice associated with the preset object, so that the user processes the preset object based on the interactive voice.
In the embodiments of the present application, when the first analysis result characterizes the use state of the preset object as the preset state, the user is not continuously using the preset object; at this point, the interactive voice is output to stimulate the user's interest in processing the preset object.
The output interactive voice may instruct the user to process content specified in the preset object. Of course, the interactive voice may also not specify any content, as long as it can prompt the user to process the preset object.
Optionally, the interactive voice can be output as a question, for example "Do you know ...". Of course, the interactive voice can also be output in a reminding or challenging tone, for example "If you don't get back to it, I'll deduct your points".
In addition, when the first analysis result characterizes the user's use state of the preset object as not being the preset state, voice output is suppressed so as not to disturb the user.
With the information processing method provided by the present application, an image collection of a preset area containing a user and a preset object is acquired; the acquired image collection is analyzed to determine the user's use state of the preset object; and when the user's use state of the preset object is a preset state, an interactive voice associated with the preset object is output to prompt the user to process the preset object based on the interactive voice. This keeps the user continuously processing the preset object and improves the user's attention to the preset object.
In an optional embodiment, the information processing method provided by the present application can be completed through interaction between the electronic equipment and a cloud device. For example, the processes of acquiring the image collection of the preset area and outputting the interactive voice can be completed by the electronic equipment, while other processes, such as analyzing the images and determining the interactive voice, can be completed by the cloud device. That is, the electronic equipment uploads the acquired first image collection to the cloud; the cloud analyzes the first image collection to obtain the first analysis result; when the first analysis result characterizes the user's use state of the preset object as the preset state, the cloud generates the interactive voice and sends it to the electronic equipment, and the electronic equipment outputs the interactive voice.
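A hypothetical sketch of this device/cloud split is given below; the endpoint URL, payload format and response schema are invented for illustration and do not come from the patent.

import requests
from typing import List, Optional

CLOUD_URL = "https://example.com/api/analyze"   # placeholder endpoint, not a real service

def analyze_in_cloud(frames_jpeg: List[bytes]) -> Optional[str]:
    """Upload JPEG-encoded frames; return the text of an interactive voice, if any."""
    files = [("frames", (f"frame_{i}.jpg", data, "image/jpeg"))
             for i, data in enumerate(frames_jpeg)]
    resp = requests.post(CLOUD_URL, files=files, timeout=10)
    resp.raise_for_status()
    # The cloud is assumed to return a voice only when the preset (distracted) state was detected.
    return resp.json().get("interactive_voice")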
In an optional embodiment, an implementation flow of outputting an interactive voice associated with the preset object is shown in Fig. 2 and may include:
Step S21: identifying target content in the preset object; the target content is at least part of the content not yet processed by the user.
The preset object may be a document carrying characters, such as a student's test paper, exercise book or textbook. Based on this, the target content may be content in the document that the user has not yet processed.
Step S22: searching for first information associated with the target content.
The first information associated with the target content can be searched for from the network. For example, the target content may be a particular question, and the first information may be the answer to that question.
Step S23: outputting a first interactive voice whose content includes the target content; the first interactive voice instructs the user to process the target content in the preset object and to give a processing result that should include the first information.
That is, the first interactive voice instructs the user to process the target content and provide a processing result. Again taking the target content as a particular question, the first interactive voice may be a voice that asks the user for the answer to that question, and the user needs to give the answer to the question asked.
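The following sketch strings steps S21 to S23 together; the content-recognition and answer-lookup functions are placeholders (for example an OCR engine and a web search), and the sample question and phrasing are illustrative only.

from typing import Tuple

def find_unanswered_item(page_image) -> str:
    """Placeholder for content recognition, e.g. OCR of the worksheet."""
    return "question 3: 5 x 5"          # hypothetical target content

def look_up_answer(target_content: str) -> str:
    """Placeholder for the network search that returns the first information."""
    return "25"

def build_first_interactive_voice(page_image) -> Tuple[str, str]:
    target = find_unanswered_item(page_image)
    first_information = look_up_answer(target)
    prompt = f"Do you know what {target} equals?"   # question-style first interactive voice
    return prompt, first_information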
Further, after outputting the first interactive voice, the method may also include:
obtaining a second image collection of the preset area;
analyzing the second image collection to obtain a second analysis result, the second analysis result characterizing the processing result given by the user for the target content;
comparing the processing result with the first information to detect whether the processing result is correct;
if the comparison shows that the processing result includes the first information, the processing result is correct, and a fourth interactive voice is output; the fourth interactive voice instructs the user to process new target content.
For example, in an optional embodiment, the step of identifying target content in the preset object can be executed again, i.e. step S21 and its subsequent steps are repeated; in this case the fourth interactive voice is simply a new first interactive voice. In this embodiment, the newly determined target content may be content adjacent to, or not adjacent to, the previously determined target content.
In another optional embodiment, the fourth interactive voice may be "go and handle the next item"; in this embodiment, the next item is the content adjacent to the previously determined target content.
If the processing result does not include the first information, the processing result is incorrect, and a second interactive voice is output; the second interactive voice instructs the user to process the target content again.
That is, in the embodiments of the present application, after the first interactive voice is output, the processing result given by the user for the preset object is detected and checked for correctness; if the processing result is correct, the user is instructed to process new target content; if the processing result is incorrect, the second interactive voice is output to instruct the user to process the target content again.
Further, after outputting the second interactive voice, the result of the user processing the target content again is detected once more. If this processing result is correct, the user can be instructed to process new target content. If it is detected to be incorrect again, a third interactive voice containing the first information can be output to tell the user the correct processing result, so as not to dampen the user's enthusiasm for processing the preset object.
After the third interactive voice is output, the user can continue to be instructed to process new target content.
Of course, in order to prevent the user from becoming dependent on the electronic equipment, after a voice containing the correct processing result has been output a preset number of times, voices containing the correct processing result are no longer output; instead, other motivational voices are output, for example "So careless! Do it one more time, and you get a reward if you answer correctly".
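The retry logic described above can be summarised in a small decision function; the attempt counts, reveal limit and wordings below are illustrative assumptions.

from typing import Tuple

def feedback_voice(user_answer: str, first_information: str,
                   attempt: int, reveals_so_far: int,
                   max_reveals: int = 3) -> Tuple[str, bool]:
    """Return (voice_to_output, move_on_to_new_target_content)."""
    if first_information in user_answer:        # processing result contains the first information
        return "Correct, well done! Let's do the next one.", True
    if attempt == 1:                            # second interactive voice: try again
        return "Not quite right, have another try!", False
    if reveals_so_far < max_reveals:            # third interactive voice: reveal the answer
        return f"The answer is {first_information}. On to the next one.", True
    # After the preset number of reveals, only encouragement is output.
    return "So careless! Do it once more - answer correctly and you get a reward.", False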
In an optional embodiment, the information processing method provided by the present application may also include:
monitoring the frequency, within a preset duration, at which the user's use state of the preset object is the preset state. The frequency is the number of occurrences per unit time; that is, the present application monitors how many times per unit time, within the preset duration, the user's use state of the preset object is the preset state. This can be obtained by dividing the total number of such occurrences within the preset duration by the preset duration.
When the frequency is greater than a preset threshold, a first prompt voice is output; the first prompt voice prompts the user about the cost paid because the user's use state of the preset object was the preset state.
In the embodiments of the present application, when the frequency at which the user's use state of the preset object is the preset state within the preset duration is greater than the preset threshold, the user can be punished so that the user pays a price, for example by deducting part of the user's points; those points may be a reward earned because, within a continuous period, the user's use state of the preset object was not the preset state.
Through this punitive measure, the user is motivated to process the preset object.
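A sketch of this frequency-and-penalty rule, with invented numbers for the threshold and the points deducted:

from typing import Tuple

def check_distraction_penalty(distraction_events: int,
                              preset_duration_min: float,
                              threshold_per_min: float = 2.0,
                              points: int = 100) -> Tuple[int, str]:
    """Compute the frequency (events per minute) and apply a points penalty above the threshold."""
    frequency = distraction_events / preset_duration_min
    if frequency > threshold_per_min:
        points -= 10                                   # the "cost" paid by the user
        voice = "You lost 10 points for drifting off - let's refocus!"
    else:
        voice = ""                                     # no first prompt voice needed
    return points, voice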
In an optional embodiment, the information processing method provided by the present application may also include:
monitoring the frequency, within a preset duration, at which the user's use state of the preset object is the preset state; the frequency is the number of occurrences per unit time;
calculating the user's ranking based on the above frequency and the frequencies of target users, a target user being a user who has a preset association with the user.
In the embodiments of the present application, different users are ranked according to the frequency at which, within the preset duration, their use state of the preset object is the preset state; the preset objects processed by different users may be different or the same.
The ranking is output as a second prompt voice.
By outputting the user's ranking, the user is further motivated to process the preset object, as shown in the sketch after this passage.
A target user may be a user whose personal basic information and the user's personal basic information satisfy the same condition. Personal basic information may include age, gender, school, etc. Satisfying the same condition may mean that at least one item of the personal basic information is the same or equivalent, for example, ages in the same age bracket, and/or the same gender, and/or the same school, and/or the same type of school (for example, all junior high school, or all primary school, or all senior high school). That is, in this embodiment the target user and the user need not be network friends.
Alternatively, the target user may be a network friend of the user.
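A minimal sketch of the ranking step, using invented frequency data; lower distraction frequency is assumed to rank higher.

from typing import Dict, Tuple

def rank_user(user_id: str, frequencies: Dict[str, float]) -> Tuple[int, str]:
    """Rank users by distraction frequency (lower is better) and build the second prompt voice."""
    ordered = sorted(frequencies, key=frequencies.get)
    rank = ordered.index(user_id) + 1
    voice = f"You are number {rank} of {len(ordered)} in staying focused today."
    return rank, voice

frequencies = {"user_a": 0.8, "user_b": 0.3, "user_c": 1.2}   # invented per-minute rates
print(rank_user("user_b", frequencies))                        # -> (1, "You are number 1 of 3 ...")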
In an optional embodiment, the output manner of the above interactive voice can match the personal basic information of the user.
For example, if the user is younger than 10 years old, the voice can be output in a tone acceptable to that age bracket, for example "Sweetie, do you know what 5 x 5 in question 3 equals?"; if the user is between 10 and 15 years old, it can be output in another tone, such as "Hey classmate, can you solve question 5?". As another example, the output manner of the interactive voice can differ by gender: if the user is female, the interactive voice can be output as "Little sister, do you know what 5 x 5 in question 3 equals?"; if the user is male, it can be output as "Little brother, do you know what 5 x 5 in question 3 equals?".
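A sketch of selecting the output manner from the user's basic information; the age bands and wordings mirror the examples above but are otherwise assumptions.

def phrase_question(target: str, age: int, gender: str = "") -> str:
    """Choose a phrasing template based on the user's age band and gender."""
    if age < 10:
        who = "Little sister" if gender == "female" else "Sweetie"
        return f"{who}, do you know what {target} equals?"
    if age < 15:
        return f"Hey classmate, can you solve {target}?"
    return f"What is the answer to {target}?"

print(phrase_question("question 3: 5 x 5", age=8, gender="female"))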
The present application is explained below using the scenario of a child doing homework.
Assume that the desk lamp the child uses integrates the information processing method disclosed in the present application. While the child is doing homework, the camera on the desk lamp shoots a video of the child and the child's work area. When it is detected from the shot video that the child is not continuously doing homework, i.e. the child's attention has wandered, the camera scans and identifies a question the child has not yet answered, the answer to the question is retrieved, and a spoken question is generated and output, for example "Sweetie, do you know what 5 x 5 in question 3 equals?". The video then continues to be shot. When it is detected from the video that the child has written the correct answer, 25, on the homework, an encouraging voice can be output, for example "Correct, sweetie, you are excellent!"; after the encouraging voice, an interactive voice instructing the user to do the next question, for example "Do the next question", can also be output. If it is detected from the video that the child has written a wrong answer, for example 15, an indicative interactive voice can be output, for example "Sweetie, that's not right, try again!", and whether the child gives the correct answer continues to be detected; if the correct answer is given, an encouraging voice can be output, and if the correct answer is still not given, a voice containing the correct answer can be output.
Through the above interaction, the child's attention can be drawn back to the homework.
Corresponding to the method embodiments, the present application also provides an information processing device. A structural schematic diagram of the information processing device provided by the present application is shown in Fig. 3 and may include:
an acquisition module 31, configured to acquire a first image collection of a preset area, the preset area containing a user and a preset object;
an analysis module 32, configured to analyze the first image collection to obtain a first analysis result, the first analysis result characterizing the user's use state of the preset object;
an output module 33, configured to output an interactive voice associated with the preset object when the first analysis result characterizes the user's use state of the preset object as a preset state, so that the user processes the preset object based on the interactive voice.
With the information processing device provided by the present application, an image collection of a preset area containing a user and a preset object is acquired; the acquired image collection is analyzed to determine the user's use state of the preset object; and when the user's use state of the preset object is a preset state, an interactive voice associated with the preset object is output to prompt the user to process the preset object based on the interactive voice. This keeps the user continuously processing the preset object and improves the user's attention to the preset object.
In an optional embodiment, the output module 33 can specifically be configured to:
identify target content in the preset object, the target content being at least part of the content not yet processed by the user;
search for first information associated with the target content;
output a first interactive voice whose content includes the target content, the first interactive voice instructing the user to process the target content in the preset object and to give a processing result that should include the first information.
In an optional embodiment, the acquisition module 31 can also be configured to obtain a second image collection of the preset area;
the analysis module 32 can also be configured to analyze the second image collection to obtain a second analysis result, the second analysis result characterizing the processing result given by the user for the target content;
the output module 33 can also be configured to: compare the processing result with the first information; if the processing result includes the first information, output a fourth interactive voice, the fourth interactive voice instructing the user to process new target content; if the processing result does not include the first information, output a second interactive voice, the second interactive voice instructing the user to process the target content again.
In an optional embodiment, the information processing device provided by the present application may also include:
a monitoring module, configured to monitor the frequency, within a preset duration, at which the user's use state of the preset object is the preset state;
the output module 33 can also be configured to output a first prompt voice when the frequency is greater than a preset threshold, the first prompt voice prompting the user about the cost paid because the user's use state of the preset object was the preset state.
In an optional embodiment, the information processing device provided by the present application may also include:
a monitoring module, configured to monitor the frequency, within a preset duration, at which the user's use state of the preset object is the preset state;
a sorting module, configured to calculate the user's ranking based on the frequency and the frequencies of target users, a target user being a user who has a preset association with the user;
the output module 33 can also be configured to output the ranking as a second prompt voice.
In an optional embodiment, the target user having a preset association with the user may include: the personal basic information of the target user and the personal basic information of the user satisfying the same condition; alternatively, the target user being a network friend of the user.
In an optional embodiment, the output manner of the interactive voice can match the personal basic information of the user.
Corresponding to the method embodiments, another structural schematic diagram of the information processing device provided by the present application is shown in Fig. 4 and may include:
an acquisition module 41, configured to acquire a first image collection of a preset area, the preset area containing a user and a preset object;
a communication module 42, configured to send the first image collection to a cloud device, so that the cloud device analyzes the first image collection to obtain a first analysis result and, when the first analysis result characterizes the user's use state of the preset object as a preset state, generates an interactive voice and sends the interactive voice to the electronic equipment;
an output module 43, configured to output the interactive voice, so that the user processes the preset object based on the interactive voice.
With the information processing device provided by the present application, an image collection of a preset area containing a user and a preset object is acquired; the acquired image collection is analyzed by the cloud to determine the user's use state of the preset object; and when the user's use state of the preset object is a preset state, an interactive voice associated with the preset object is output to prompt the user to process the preset object based on the interactive voice. This keeps the user continuously processing the preset object and improves the user's attention to the preset object.
Corresponding to the method embodiments, the present application also provides electronic equipment. A structural schematic diagram of the electronic equipment is shown in Fig. 5 and may include: an image acquisition unit 51, a memory 52, a processor 53 and a voice output unit 54; wherein
the image acquisition unit 51 is configured to acquire images;
the memory 52 is configured to store at least one instruction set;
the processor 53 is configured to call and execute the instruction set in the memory and, by executing the instruction set, to perform the following operations:
acquiring a first image collection of a preset area through the image acquisition unit 51, the preset area containing a user and a preset object; analyzing the first image collection to obtain a first analysis result, the first analysis result characterizing the user's use state of the preset object; when the first analysis result characterizes the user's use state of the preset object as a preset state, outputting, through the voice output unit 54, an interactive voice associated with the preset object, so that the user processes the preset object based on the interactive voice.
With the electronic equipment provided by the present application, an image collection of a preset area containing a user and a preset object is acquired; the acquired image collection is analyzed to determine the user's use state of the preset object; and when the user's use state of the preset object is a preset state, an interactive voice associated with the preset object is output to prompt the user to process the preset object based on the interactive voice. This keeps the user continuously processing the preset object and improves the user's attention to the preset object.
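Tying the pieces together, a hypothetical control loop for the processor of the electronic equipment in Fig. 5 might look like the following; the polling interval and the placeholder helpers are assumptions, not details from the patent.

import time

def analyse_use_state(frames) -> bool:
    """Placeholder: return True when the use state is the preset (distracted) state."""
    return False

def speak(text: str) -> None:
    print(f"[voice] {text}")          # placeholder for the voice output unit

def main_loop(capture, poll_seconds: float = 5.0) -> None:
    """Capture an image collection, analyse it, and speak only when distraction is detected."""
    while True:
        frames = capture()
        if analyse_use_state(frames):
            speak("Do you know what question 3 equals? Give it a try!")
        time.sleep(poll_seconds)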
In an optional embodiment, the processor 53 outputting the interactive voice associated with the preset object includes:
identifying target content in the preset object, the target content being at least part of the content not yet processed by the user;
searching for first information associated with the target content;
outputting a first interactive voice whose content includes the target content, the first interactive voice instructing the user to process the target content in the preset object and to give a processing result that should include the first information.
In an optional embodiment, the processor 53 can also be configured to:
after outputting the first interactive voice, obtain a second image collection of the preset area;
analyze the second image collection to obtain a second analysis result, the second analysis result characterizing the processing result given by the user for the target content;
compare the processing result with the first information;
if the processing result includes the first information, output a fourth interactive voice, the fourth interactive voice instructing the user to process new target content;
if the processing result does not include the first information, output a second interactive voice, the second interactive voice instructing the user to process the target content again.
In an optional embodiment, the processor 53 can also be configured to:
monitor the frequency, within a preset duration, at which the user's use state of the preset object is the preset state;
when the frequency is greater than a preset threshold, output a first prompt voice, the first prompt voice prompting the user about the cost paid because the user's use state of the preset object was the preset state.
In an optional embodiment, the processor 53 can also be configured to:
monitor the frequency, within a preset duration, at which the user's use state of the preset object is the preset state;
calculate the user's ranking based on the frequency and the frequencies of target users, a target user being a user who has a preset association with the user;
output the ranking as a second prompt voice.
In an optional embodiment, the target user having a preset association with the user may include: the personal basic information of the target user and the personal basic information of the user satisfying the same condition; alternatively, the target user being a network friend of the user.
In an optional embodiment, the output manner of the interactive voice can match the personal basic information of the user.
Corresponding to the method embodiments, another structural schematic diagram of the electronic equipment provided by the present application is shown in Fig. 6 and may include: an image acquisition unit 61, a memory 62, a processor 63, a communication unit 64 and a voice output unit 65; wherein
the image acquisition unit 61 is configured to acquire images;
the memory 62 is configured to store at least one instruction set;
the processor 63 is configured to call and execute the instruction set in the memory 62 and, by executing the instruction set, to perform the following operations:
acquiring a first image collection of a preset area through the image acquisition unit 61, the preset area containing a user and a preset object; sending the first image collection to a cloud device, so that the cloud device analyzes the first image collection to obtain a first analysis result, the first analysis result characterizing the user's use state of the preset object, and, when the first analysis result characterizes the user's use state of the preset object as a preset state, generates an interactive voice and sends the interactive voice to the electronic equipment;
receiving the interactive voice through the communication unit 64, and outputting the interactive voice through the voice output unit 65, so that the user processes the preset object based on the interactive voice.
With the electronic equipment provided by the present application, an image collection of a preset area containing a user and a preset object is acquired; the acquired image collection is analyzed by the cloud to determine the user's use state of the preset object; and when the user's use state of the preset object is a preset state, an interactive voice associated with the preset object is output to prompt the user to process the preset object based on the interactive voice. This keeps the user continuously processing the preset object and improves the user's attention to the preset object.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented with electronic hardware, or with a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present invention.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices and methods can be implemented in other ways. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections between devices or units through some interfaces, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
It should be understood that the claims, the embodiments and the features of the embodiments of the present application can be combined with each other to solve the aforementioned technical problems.
If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An information processing method, characterized by comprising:
acquiring a first image collection of a preset area, the preset area containing a user and a preset object;
analyzing the first image collection to obtain a first analysis result, the first analysis result characterizing the user's use state of the preset object;
when the first analysis result characterizes the user's use state of the preset object as a preset state, outputting an interactive voice associated with the preset object, so that the user processes the preset object based on the interactive voice.
2. The method according to claim 1, characterized in that outputting the interactive voice associated with the preset object comprises:
identifying target content in the preset object, the target content being at least part of the content not yet processed by the user;
searching for first information associated with the target content;
outputting a first interactive voice whose content includes the target content, the first interactive voice instructing the user to process the target content in the preset object and to give a processing result that should include the first information.
3. The method according to claim 2, characterized in that after outputting the first interactive voice, the method further comprises:
obtaining a second image collection of the preset area;
analyzing the second image collection to obtain a second analysis result, the second analysis result characterizing the processing result given by the user for the target content;
comparing the processing result with the first information;
if the processing result includes the first information, outputting a fourth interactive voice, the fourth interactive voice instructing the user to process new target content;
if the processing result does not include the first information, outputting a second interactive voice, the second interactive voice instructing the user to process the target content again.
4. The method according to claim 1, characterized by further comprising:
monitoring the frequency, within a preset duration, at which the user's use state of the preset object is the preset state;
when the frequency is greater than a preset threshold, outputting a first prompt voice, the first prompt voice prompting the user about the cost paid because the user's use state of the preset object was the preset state.
5. The method according to claim 1, characterized by further comprising:
monitoring the frequency, within a preset duration, at which the user's use state of the preset object is the preset state;
calculating the user's ranking based on the frequency and the frequencies of target users, a target user being a user who has a preset association with the user;
outputting the ranking as a second prompt voice.
6. The method according to claim 5, characterized in that the target user having a preset association with the user comprises:
the personal basic information of the target user and the personal basic information of the user satisfying the same condition.
7. The method according to claim 5, characterized in that the target user having a preset association with the user comprises:
the target user being a network friend of the user.
8. The method according to claim 1, characterized in that the output manner of the interactive voice matches the personal basic information of the user.
9. An information processing device, characterized by comprising:
an acquisition module, configured to acquire a first image collection of a preset area, the preset area containing a user and a preset object;
an analysis module, configured to analyze the first image collection to obtain a first analysis result, the first analysis result characterizing the user's use state of the preset object;
an output module, configured to output an interactive voice associated with the preset object when the first analysis result characterizes the user's use state of the preset object as a preset state, so that the user processes the preset object based on the interactive voice.
10. Electronic equipment, characterized by comprising:
an image acquisition unit, configured to acquire images;
a memory, configured to store at least one instruction set;
a processor, configured to call and execute the instruction set in the memory and, by executing the instruction set, to perform the following operations:
acquiring a first image collection of a preset area through the image acquisition unit, the preset area containing a user and a preset object; analyzing the first image collection to obtain a first analysis result, the first analysis result characterizing the user's use state of the preset object; when the first analysis result characterizes the user's use state of the preset object as a preset state, outputting an interactive voice associated with the preset object, so that the user processes the preset object based on the interactive voice.
CN201910188469.5A 2019-03-13 2019-03-13 Information processing method and device and electronic equipment Active CN109933687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910188469.5A CN109933687B (en) 2019-03-13 2019-03-13 Information processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910188469.5A CN109933687B (en) 2019-03-13 2019-03-13 Information processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109933687A true CN109933687A (en) 2019-06-25
CN109933687B CN109933687B (en) 2022-05-31

Family

ID=66987076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910188469.5A Active CN109933687B (en) 2019-03-13 2019-03-13 Information processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109933687B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1785803A2 (en) * 2005-11-09 2007-05-16 ASUSTeK Computer Inc. Monitor with reminder sound
US20090205042A1 (en) * 2005-12-15 2009-08-13 Koninklijke Philips Electronics, N.V. External user interface based measurement association
US20140207453A1 (en) * 2013-01-22 2014-07-24 Electronics And Telecommunications Research Institute Method and apparatus for editing voice recognition results in portable device
CN103500331A (en) * 2013-08-30 2014-01-08 北京智谷睿拓技术服务有限公司 Reminding method and device
CN104199557A (en) * 2014-09-24 2014-12-10 联想(北京)有限公司 Information processing method, information processing device and electronic equipment
CN104332032A (en) * 2014-11-11 2015-02-04 广东小天才科技有限公司 Study reminding method and wearable device
CN105117699A (en) * 2015-08-19 2015-12-02 小米科技有限责任公司 User behavior monitoring method and device
CN105787442A (en) * 2016-02-19 2016-07-20 电子科技大学 Visual interaction based wearable auxiliary system for people with visual impairment, and application method thereof
CN106297213A (en) * 2016-08-15 2017-01-04 欧普照明股份有限公司 Detection method, detection device and lighting
CN108460124A (en) * 2018-02-26 2018-08-28 北京物灵智能科技有限公司 Exchange method and electronic equipment based on figure identification
CN108460707A (en) * 2018-03-12 2018-08-28 林为庆 A kind of the operation intelligent supervision method and its system of student

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
D. JAYASHREE et al.: "Voice based application as medicine spotter for visually impaired", 2016 Second International Conference on Science Technology Engineering and Management (ICONSTEM) *
刘阳春 (Liu Yangchun): "Design and Implementation of a DSP-Based Speech and Image Acquisition and Processing System", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110767000A (en) * 2019-10-28 2020-02-07 安徽信捷智能科技有限公司 Children's course synchronizer based on image recognition
CN114999243A (en) * 2022-07-18 2022-09-02 深圳市沃特沃德信息有限公司 Audio-visual conversion method, device and equipment for keeping learning efficiency
CN114999243B (en) * 2022-07-18 2022-11-04 深圳市沃特沃德信息有限公司 Audio-visual conversion method, device and equipment for keeping learning efficiency

Also Published As

Publication number Publication date
CN109933687B (en) 2022-05-31

Similar Documents

Publication Publication Date Title
CN110782962A (en) Hearing language rehabilitation device, method, electronic equipment and storage medium
CN109410675B (en) Exercise recommendation method based on student portrait and family education equipment
CN109241519B (en) Quality evaluation model acquisition method and device, computer equipment and storage medium
JP2018010501A (en) Interview system
CN111179935B (en) Voice quality inspection method and device
CN109817312A (en) A kind of medical bootstrap technique and computer equipment
JP6531323B1 (en) PROGRAM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD
CN106682137A (en) Intelligent stock investment adviser questioning-answering interaction method and intelligent stock investment adviser questioning-answering interaction system
CN108920450A (en) A kind of knowledge point methods of review and electronic equipment based on electronic equipment
CN109933687A (en) Information processing method, device and electronic equipment
CN108932760A (en) Work attendance method and terminal based on recognition of face
CN109739354A (en) A kind of multimedia interaction method and device based on sound
CN109271503A (en) Intelligent answer method, apparatus, equipment and storage medium
CN109886775A (en) House advantage and disadvantage appraisal procedure, device, equipment and computer readable storage medium
CN110874405A (en) Service quality inspection method, device, equipment and computer readable storage medium
CN111026949A (en) Question searching method and system based on electronic equipment
CN112015574A (en) Remote medical education training method, device, equipment and storage medium
CN112053205A (en) Product recommendation method and device through robot emotion recognition
CN111400539B (en) Voice questionnaire processing method, device and system
CN109065015B (en) Data acquisition method, device and equipment and readable storage medium
CN111046293B (en) Method and system for recommending content according to evaluation result
CN106202539B (en) Syndication search method and device
CN110796017A (en) Method and device for determining article drop-out and method and device for training model
CN109635214A (en) A kind of method for pushing and electronic equipment of education resource
CN110443122A (en) Information processing method and Related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant