CN104777989B - Information processing method and electronic device - Google Patents

Information processing method and electronic device

Publication number: CN104777989B
Authority: CN (China)
Legal status: Active (granted)
Application number: CN201410014374.9A
Original language: Chinese (zh)
Other versions: CN104777989A (application publication)
Inventors: 李斌, 王晟
Assignee (original and current): Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd
Priority application: CN201410014374.9A
Related US application: US14/230,704 (granted as US10613687B2)

Classification (Landscapes): User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information processing method, comprising: acquiring the current characteristic parameters of a first user collected by an acquisition unit; generating first response information when the characteristic parameters meet a preset first condition; determining, according to the first response information, a first position of the first user and identification information of the first user by using the characteristic parameters; acquiring a corresponding first user file according to the identification information of the first user, and determining a first display position according to the first position of the first user; generating a first instruction by using the first user file and the first display position; and causing, according to the first instruction, the touch display unit to display the first user file at the first display position. The invention also discloses an electronic device. With the invention, a file matching the user's identity and position can be displayed for the user automatically, providing a more convenient operating environment and improving the user experience.

Description

Information processing method and electronic device
Technical Field
The present invention relates to the field of wireless communications, and in particular, to an information processing method and an electronic device.
Background
With the increasing popularity of large-screen devices, more and more users are using them. However, because the operation range of a large screen is so large, a user usually does not use the entire touch display area of the device, and the file selected by the user needs to be displayed at a display position suitable for the user's operation. At present, the user has to select a file and a display position through manual operation, and when the display position of the file is out of the user's reach, the user cannot operate the file.
Disclosure of Invention
In view of this, an object of the present invention is to provide an information processing method and an electronic device, which can provide a more convenient environment for a user and improve user experience.
To achieve the above object, the technical solution of the invention is realized as follows:
the invention provides an information processing method, which is applied to electronic equipment, wherein the electronic equipment is provided with an acquisition unit and a touch display unit; when the acquisition unit is in a working state and acquires the characteristic parameters of at least one user in real time, the method comprises the following steps:
acquiring the current characteristic parameters of the first user acquired by the acquisition unit;
judging whether the characteristic parameters meet a preset first condition or not, and generating first response information when the characteristic parameters meet the first condition;
according to the first response information, determining a first position of the first user and identification information of the first user by using the characteristic parameters;
acquiring a corresponding first user file according to the identification information of the first user, and determining a first display position according to the first position of the first user;
generating a first instruction by using the first user file and the first display position;
according to the first instruction, the touch display unit displays the first user file at the first display position.
The invention also provides an information processing method, which is applied to electronic equipment, wherein the electronic equipment is provided with an acquisition unit and a touch display unit; when the acquisition unit is in a working state and the touch display unit displays a first user file at a first display position, the method comprises the following steps:
acquiring the current characteristic parameters of the first user acquired by the acquisition unit;
judging whether the characteristic parameters meet a preset first condition or not, and if not, generating a third instruction;
and the touch display unit analyzes the third instruction, acquires a third analysis result, and closes the first user file according to the third analysis result.
The present invention also provides an electronic device, comprising: an acquisition unit, a touch display unit and an information processing unit; wherein,
the acquisition unit is used for acquiring the current characteristic parameters of the first user when the acquisition unit is in a working state and acquires the characteristic parameters of at least one user in real time, and sending the characteristic parameters to the information processing unit;
the information processing unit is used for acquiring the current characteristic parameters of the first user acquired by the acquisition unit, judging whether the characteristic parameters meet a preset first condition or not, and if the characteristic parameters meet the first condition, generating first response information; according to the first response information, determining a first position of the first user and identification information of the first user by using the characteristic parameters; acquiring a corresponding first user file according to the identification information of the first user, and determining a first display position according to the first position of the first user; generating a first instruction by using the first user file and the first display position, and sending the first instruction to a touch display unit;
the touch display unit is used for enabling the touch display unit to display the first user file at the first display position according to the first instruction.
The present invention further provides an electronic device, comprising: an acquisition unit, a touch display unit and an information processing unit; wherein,
the acquisition unit is used for acquiring the current characteristic parameters of the first user;
the information processing unit is used for acquiring the current characteristic parameters of the first user acquired by the acquisition unit when the acquisition unit is in a working state and the touch display unit displays a first user file at a first display position, judging whether the characteristic parameters meet a preset first condition or not, if not, generating a third instruction, and sending the third instruction to the touch display unit;
and the touch display unit is used for analyzing the third instruction, acquiring a third analysis result and closing the first user file according to the third analysis result.
According to the information processing method and the electronic device provided by the invention, when the characteristic parameters of the user meet the first condition, the corresponding file can be automatically selected for the user and displayed at the corresponding position. In this way, files matching the user's identity and position can be automatically selected and displayed for the user, a more convenient environment is provided, and the user experience is improved.
Drawings
FIG. 1 is a first flowchart illustrating an information processing method according to an embodiment of the present invention;
FIG. 2 illustrates a first scenario in which embodiments of the present invention are used;
FIG. 3 illustrates a second scenario in which embodiments of the present invention are used;
FIG. 4 illustrates a third scenario in which embodiments of the present invention are used;
FIG. 5 is a flowchart illustrating a second information processing method according to an embodiment of the present invention;
FIG. 6 is a third schematic flow chart illustrating an information processing method according to an embodiment of the present invention;
FIG. 7 is a fourth flowchart illustrating an information processing method according to an embodiment of the present invention;
FIG. 8 illustrates a fourth use scenario of an embodiment of the present invention;
FIG. 9 illustrates a fifth use scenario of an embodiment of the present invention;
FIG. 10 is a schematic diagram of the composition structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
First Embodiment
The embodiment of the invention provides an information processing method, which is applied to an electronic device; the electronic device may be a large-screen device and is provided with an acquisition unit and a touch display unit. When the acquisition unit is in a working state and collects the characteristic parameters of at least one user in real time, as shown in fig. 1, the method includes:
step 101: and acquiring the current characteristic parameters of the first user acquired by the acquisition unit.
Step 102: and judging whether the characteristic parameters meet a preset first condition, and generating first response information when the characteristic parameters meet the first condition.
Step 103: and determining the first position of the first user and the identification information of the first user by using the characteristic parameters according to the first response information.
Step 104: and acquiring a corresponding first user file according to the identification information of the first user, and determining a first display position according to the first position of the first user.
Step 105: and generating a first instruction by using the first user file and the first display position.
Step 106: according to the first instruction, the touch display unit displays the first user file at the first display position.
Here, the acquisition unit may include: a microphone array, and/or a fisheye camera, and/or a WFOV lens.
Microphone arrays are prior art; a microphone array can be realized by installing a plurality of microphones on the electronic device.
Fisheye cameras and WFOV (wide field of view) lenses are also prior art. The shooting range of a fisheye camera can reach 220 or 230 degrees, so at most two fisheye cameras are needed to capture all images around the electronic device.
For example, as shown in fig. 2, a fisheye camera or WFOV lens 211 placed at one end of the electronic device 21 collects facial features of nearby users; the collection range may be as shown in the figure, so that the facial feature parameters of a user can be collected and the distance and orientation between the lens and the user can be obtained.
As shown in fig. 3, the electronic device 31 is equipped with N microphones 311 forming a microphone array. When a user makes a sound, each of the N microphones 311 collects the sound wave, and the volume, sound direction and voiceprint characteristic parameters of the user are determined from the sound waves collected by the microphones and their directions. The implementation of the microphone array used by the embodiments of the present invention is prior art and is not described in detail here. Preferably, as shown in fig. 4, when a microphone array is formed by N microphones 411 on the electronic device 41 and there are multiple users beside the electronic device 41, for example a first user and a second user as shown in the figure, the microphone array may collect the sound waves emitted by each user and classify the collected sound waves by their voiceprint feature parameters to obtain the feature parameters of the first user and of the second user respectively; the specific implementation is prior art and is not described here.
When the collecting unit is a microphone array, the characteristic parameter of the first user may be a sound parameter. Wherein the sound parameters may include: volume level, phase difference, sound azimuth, and voiceprint characteristic parameters.
When the acquisition unit is a fisheye camera or a WFOV lens, the characteristic parameters of the first user may include: facial feature parameters. The facial feature parameters may include a distance parameter and a facial image, or include a distance parameter and an eye image, and the like.
Wherein the determining whether the characteristic parameter satisfies a preset first condition may include: determining the distance between the first user and the acquisition unit according to the sound parameters or the face characteristic parameters, judging whether the distance between the first user and the acquisition unit is smaller than a specified distance, and if so, judging that the characteristic parameters meet a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
Wherein the determining the distance from the first user to the acquisition unit according to the sound parameters or the facial feature parameters may be: when the characteristic parameters are sound parameters, calculating the distance between the first user and the acquisition unit from the volume level or the phase difference in the sound parameters; or, when the characteristic parameters are facial feature parameters, using the distance parameter as the distance between the first user and the acquisition unit.
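The following sketch illustrates this distance-based first-condition check. The inverse-distance volume model and the calibration constants are assumptions for illustration only; the text above merely states that the distance can be obtained from the volume level or phase difference (prior-art techniques) or taken directly from the facial distance parameter.

```python
SPECIFIED_DISTANCE = 1.5      # metres, assumed
REF_VOLUME_DB = 60.0          # assumed volume measured at REF_DISTANCE
REF_DISTANCE = 1.0            # metres

def distance_from_sound(volume_db):
    # Toy model: every 6 dB drop roughly doubles the distance (free-field attenuation).
    return REF_DISTANCE * 10 ** ((REF_VOLUME_DB - volume_db) / 20.0)

def user_distance(params):
    if params["type"] == "sound":
        return distance_from_sound(params["volume_db"])
    # Facial feature parameters already carry a distance parameter.
    return params["distance"]

def first_condition_met(params):
    return user_distance(params) < SPECIFIED_DISTANCE

print(first_condition_met({"type": "sound", "volume_db": 66.0}))   # about 0.5 m away -> True
print(first_condition_met({"type": "face", "distance": 2.3}))      # too far -> False
```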
Preferably, the determining the first position of the first user and the identification information of the first user by using the characteristic parameters includes: when the characteristic parameters are sound parameters, taking the sound orientation parameter in the sound parameters as the first position of the first user, extracting the voiceprint feature parameters from the sound parameters, comparing them with the voiceprint features in a pre-stored user voiceprint feature table, and extracting the identification information corresponding to the first user; wherein the user voiceprint feature table may include: identification information of users and the corresponding voiceprint feature parameters.
Or, when the characteristic parameters are facial feature parameters, taking the direction parameter in the facial feature parameters as the first position of the first user, extracting the facial feature or eye feature value from the facial feature parameters, comparing it with the corresponding value in a pre-stored user feature table, and extracting the identification information corresponding to the first user.
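A hedged sketch of the table lookup described above follows: an extracted feature vector is compared against a pre-stored user feature table and the identification information of the best match is returned. The cosine-similarity matching and the 0.8 threshold are illustrative choices, not part of the patent.

```python
import math

USER_FEATURE_TABLE = {
    # identification information -> stored voiceprint (or facial/eye) feature vector
    "user_a": [0.9, 0.1, 0.3],
    "user_b": [0.2, 0.8, 0.5],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def identify_user(feature_vector, threshold=0.8):
    best_id, best_score = None, threshold
    for user_id, stored in USER_FEATURE_TABLE.items():
        score = cosine(feature_vector, stored)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id   # None if no stored feature is similar enough

print(identify_user([0.88, 0.15, 0.28]))   # -> "user_a"
```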
Therefore, by adopting the scheme provided by this embodiment, when the characteristic parameters of the user meet the first condition, the corresponding file can be automatically selected for the user and displayed at a suitable position. Files matching the user's identity and position are thus selected and displayed automatically, a more convenient environment is provided for the user, and the user experience is improved.
Second Embodiment
The embodiment of the invention provides an information processing method, which is applied to an electronic device; the electronic device may be a large-screen device and is provided with an acquisition unit and a touch display unit. When the acquisition unit is in a working state and collects the characteristic parameters of at least one user in real time, as shown in fig. 5, the method includes:
step 501: and acquiring the current characteristic parameters of the first user acquired by the acquisition unit.
Step 502: and judging whether the characteristic parameters meet a preset first condition, and generating first response information when the characteristic parameters meet the first condition.
Step 503: and determining the first position of the first user and the identification information of the first user by using the characteristic parameters according to the first response information.
Step 504: and acquiring a corresponding first user file according to the identification information of the first user, and determining a first display position according to the first position of the first user.
Step 505: and generating a first instruction by using the first user file and the first display position.
Step 506: according to the first instruction, the touch display unit displays the first user file at the first display position.
Step 507: and continuously acquiring the current characteristic parameters of the first user acquired by the acquisition unit.
Step 508: when the characteristic parameter meets a preset first condition, determining a second position of the first user according to the characteristic parameter, judging whether a distance difference value between the second position and the first position meets the second condition, and when the second condition is met, generating second response information.
Step 509: and taking the second position as the updated first position according to the second response information.
Step 510: determining an updated first display position using the updated first position; and generating a second instruction by using the updated first display position and the first user file.
Step 511: and according to the second instruction, the touch display unit displays the first user file at the updated first display position.
Here, the acquisition unit may include: a microphone array, and/or a fisheye camera, and/or a WFOV lens.
Microphone arrays are prior art; a microphone array can be realized by installing a plurality of microphones on the electronic device.
Fisheye cameras and WFOV lenses are also prior art. The shooting range of a fisheye camera can reach 220 or 230 degrees, so at most two fisheye cameras are needed to capture all images around the electronic device.
For example, as shown in fig. 2, a fisheye camera or WFOV lens 211 placed at one end of the electronic device 21 collects facial features of nearby users; the collection range may be as shown in the figure, so that the facial feature parameters of a user can be collected and the distance and orientation between the lens and the user can be obtained.
As shown in fig. 3, the electronic device 31 is equipped with N microphones 311 forming a microphone array. When a user makes a sound, each of the N microphones 311 collects the sound wave, and the volume, sound direction and voiceprint characteristic parameters of the user are determined from the sound waves collected by the microphones and their directions; it should be noted that the position of the sound source can also be identified from the sound phase differences. The implementation of the microphone array used by the embodiments of the present invention is prior art and is not described in detail here. Preferably, as shown in fig. 4, when a microphone array is formed by N microphones 411 on the electronic device 41 and there are multiple users beside the electronic device 41, for example a first user and a second user as shown in the figure, the microphone array may collect the sound waves emitted by each user and classify the collected sound waves by their voiceprint feature parameters to obtain the feature parameters of the first user and of the second user respectively; the specific implementation is prior art and is not described here.
When the acquisition unit is a microphone array, the characteristic parameters of the first user may be sound parameters. The sound parameters may include: volume level, phase difference, sound orientation and voiceprint feature parameters.
When the acquisition unit is a fisheye camera or a WFOV lens, the characteristic parameters of the first user may include: facial feature parameters. The facial feature parameters may include a distance parameter and a facial image, or include a distance parameter and an eye image, and the like.
Wherein the determining whether the characteristic parameter satisfies a preset first condition may include: determining the distance between the first user and the acquisition unit according to the sound parameters or the face characteristic parameters, judging whether the distance between the first user and the acquisition unit is smaller than a specified distance, and if so, judging that the characteristic parameters meet a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
Wherein the determining the distance from the first user to the acquisition unit according to the sound parameters or the facial feature parameters may be: when the characteristic parameters are sound parameters, calculating the distance between the first user and the acquisition unit from the volume level or the phase difference in the sound parameters; or, when the characteristic parameters are facial feature parameters, using the distance parameter as the distance between the first user and the acquisition unit. The calculation of the distance from the volume level or phase difference in the sound parameters is prior art and is not described here again.
Preferably, the determining the first position of the first user and the identification information of the first user by using the characteristic parameters includes: when the characteristic parameters are sound parameters, taking the sound orientation parameter in the sound parameters as the first position of the first user, extracting the voiceprint feature parameters from the sound parameters, comparing them with the voiceprint features in a pre-stored user voiceprint feature table, and extracting the identification information corresponding to the first user; wherein the user voiceprint feature table may include: identification information of users and the corresponding voiceprint feature parameters.
Or, when the characteristic parameters are facial feature parameters, taking the direction parameter in the facial feature parameters as the first position of the first user, extracting the facial feature or eye feature value from the facial feature parameters, comparing it with the corresponding value in a pre-stored user feature table, and extracting the identification information corresponding to the first user.
The determining a second position of the first user according to the characteristic parameters, judging whether the distance difference between the second position and the first position satisfies the second condition, and generating the second response information when it does, may include: when the characteristic parameters are sound parameters, determining the second position of the first user from the sound orientation in the currently obtained sound parameters, comparing the second position with the first position, and judging whether the distance difference between them is greater than a preset threshold; if so, judging that the distance difference between the second position and the first position meets the second condition and generating the second response information; otherwise, performing no operation;
or, when the characteristic parameters are facial feature parameters, taking the direction parameter in the currently acquired facial feature parameters as the second position of the first user, comparing the second position with the first position, and judging whether the distance difference between them is greater than the preset threshold; if so, judging that the distance difference between the second position and the first position meets the second condition and generating the second response information; otherwise, performing no operation.
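A small sketch of this second-condition check, assuming a one-dimensional position along the screen edge and an illustrative threshold value:

```python
POSITION_THRESHOLD = 0.5   # preset threshold in metres (assumed)

def maybe_update_position(first_position, second_position):
    """Returns the updated first position, or None when the second condition is not met."""
    if abs(second_position - first_position) > POSITION_THRESHOLD:
        # Second response information generated: the second position becomes the updated
        # first position, and the display position is then recomputed from it.
        return second_position
    return None   # otherwise no operation is performed

print(maybe_update_position(1.2, 1.3))   # None  (small shift, the display stays put)
print(maybe_update_position(1.2, 2.4))   # 2.4   (user moved, the display follows)
```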
Causing the touch display unit to display the first user file at the updated first display position according to the second instruction may be: according to the second instruction, determining the updated first display position of the first user file, and displaying the first user file at the updated first display position.
Preferably, before the second position of the first user is determined according to the characteristic parameters, the distance difference between the second position and the first position is judged against the second condition and, when the second condition is met, the second response information is generated, it is first determined whether the first user file has undergone any processing or operation. If it has, a timer is started when that processing or operation is completed; only when the timer exceeds a preset timeout threshold without a new processing instruction for the first user file being received are the second position of the first user determined according to the characteristic parameters, the distance difference between the second position and the first position judged against the second condition and, when the second condition is met, the second response information generated. The timeout threshold of the timer may be set according to actual needs, for example to 5 minutes or 10 minutes.
In this way, the display position of the first user's file is not updated merely because the user moves a short distance, for example to communicate with a nearby user, which keeps the file stable while the user is viewing or operating it.
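The timer-based guard described above can be sketched as follows; the 5-minute timeout mirrors the example in the text, while the use of wall-clock timestamps in place of an explicit timer object is an implementation assumption.

```python
import time

TIMEOUT_SECONDS = 5 * 60   # e.g. 5 minutes, settable according to actual needs

class FileFollower:
    def __init__(self):
        self.last_operation_time = None    # set when a processing/operation completes

    def on_operation_completed(self):
        self.last_operation_time = time.monotonic()   # "start the timer"

    def may_follow_user(self):
        # Position updates are allowed if the file was never operated, or if the timeout
        # has elapsed without a new processing instruction for the first user file.
        if self.last_operation_time is None:
            return True
        return time.monotonic() - self.last_operation_time > TIMEOUT_SECONDS

guard = FileFollower()
print(guard.may_follow_user())   # True: no operation yet, the display may follow the user
guard.on_operation_completed()
print(guard.may_follow_user())   # False until TIMEOUT_SECONDS pass without new instructions
```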
Therefore, by adopting the scheme provided by this embodiment, when the characteristic parameters of the user meet the first condition, the corresponding file can be automatically selected for the user and displayed at a suitable position. A file matching the user's identity and position is thus selected and displayed automatically, a more convenient environment is provided for using the electronic device, and the user experience is improved.
In addition, with the technical solution provided by this embodiment, the display position of the file can be updated as the user's position changes, and the corresponding file is displayed at the updated display position. When the user is only viewing the file and not operating it, the file therefore follows the user's movement without the user having to move it manually, which further improves the user experience.
Third Embodiment
The embodiment of the invention provides an information processing method, which is applied to an electronic device; the electronic device may be a large-screen device and is provided with an acquisition unit and a touch display unit. When the acquisition unit is in a working state and collects the characteristic parameters of at least one user in real time, as shown in fig. 1, the method includes:
step 101: and acquiring the current characteristic parameters of the first user acquired by the acquisition unit.
Step 102: and judging whether the characteristic parameters meet a preset first condition, and generating first response information when the characteristic parameters meet the first condition.
Step 103: and determining the first position of the first user and the identification information of the first user by using the characteristic parameters according to the first response information.
Step 104: and acquiring a corresponding first user file according to the identification information of the first user, and determining a first display position according to the first position of the first user.
Step 105: and generating a first instruction by using the first user file and the first display position.
Step 106: according to the first instruction, the touch display unit displays the first user file at the first display position.
Here, the acquisition unit may include: a microphone array, and/or a fisheye camera, and/or a WFOV lens.
Microphone arrays are prior art; a microphone array can be realized by installing a plurality of microphones on the electronic device.
Fisheye cameras and WFOV lenses are also prior art. The shooting range of a fisheye camera can reach 220 or 230 degrees, so at most two fisheye cameras are needed to capture all images around the electronic device.
For example, as shown in fig. 2, a fisheye camera or WFOV lens 211 placed at one end of the electronic device 21 collects facial features of nearby users; the collection range may be as shown in the figure, so that the facial feature parameters of a user can be collected and the distance and orientation between the lens and the user can be obtained.
As shown in fig. 3, the electronic device 31 is equipped with N microphones 311 forming a microphone array. When a user makes a sound, each of the N microphones 311 collects the sound wave, and the volume, sound direction and voiceprint characteristic parameters of the user are determined from the sound waves collected by the microphones and their directions; it should be noted that the position of the sound source can also be identified from the sound phase differences. The implementation of the microphone array used by the embodiments of the present invention is prior art and is not described in detail here. Preferably, as shown in fig. 4, when a microphone array is formed by N microphones 411 on the electronic device 41 and there are multiple users beside the electronic device 41, for example a first user and a second user as shown in the figure, the microphone array may collect the sound waves emitted by each user and classify the collected sound waves by their voiceprint feature parameters to obtain the feature parameters of the first user and of the second user respectively; the specific implementation is prior art and is not described here.
When the collecting unit is a microphone array, the characteristic parameter of the first user may be a sound parameter. Wherein the sound parameters may include: volume level, phase difference, sound bearing, and voiceprint characteristic parameters.
When the acquisition unit is a fisheye camera or a WFOV lens, the characteristic parameters of the first user may include: facial feature parameters. The facial feature parameters may include a distance parameter and a facial image, or include a distance parameter and an eye image, and the like.
Wherein the determining whether the characteristic parameter satisfies a preset first condition may include: determining the distance between the first user and the acquisition unit according to the sound parameters or the face characteristic parameters, judging whether the distance between the first user and the acquisition unit is smaller than a specified distance, and if so, judging that the characteristic parameters meet a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
Wherein the determining the distance from the first user to the acquisition unit according to the sound parameters or the facial feature parameters may be: when the characteristic parameters are sound parameters, calculating the distance between the first user and the acquisition unit from the volume level or the phase difference in the sound parameters; or, when the characteristic parameters are facial feature parameters, using the distance parameter as the distance between the first user and the acquisition unit. The calculation of the distance from the volume level or phase difference in the sound parameters is prior art and is not described here again.
Preferably, the determining the first position of the first user and the identification information of the first user by using the characteristic parameters includes: when the characteristic parameters are sound parameters, taking the sound orientation parameter in the sound parameters as the first position of the first user, extracting the voiceprint feature parameters from the sound parameters, comparing them with the voiceprint features in a pre-stored user voiceprint feature table, and extracting the identification information corresponding to the first user; wherein the user voiceprint feature table may include: identification information of users and the corresponding voiceprint feature parameters.
Or, when the characteristic parameters are facial feature parameters, taking the direction parameter in the facial feature parameters as the first position of the first user, extracting the facial feature or eye feature value from the facial feature parameters, comparing it with the corresponding value in a pre-stored user feature table, and extracting the identification information corresponding to the first user.
After the causing the touch display unit to display the first user file at the first display position according to the first instruction, the method may further include:
the acquisition unit acquires characteristic parameters of a first user; determining a second position of the first user according to the characteristic parameters; judging whether the distance difference between the second position and the first position meets a second condition, and if the distance difference meets the second condition, generating second response information; according to the second response information, the second position is used as the updated first position; determining an updated first display position using the updated first position; generating a second instruction by using the updated first display position and the first user file; and according to the second instruction, enabling the touch display unit to display the first user file at the updated first display position.
Determining a second position of the first user according to the characteristic parameters, judging whether the distance difference between the second position and the first position satisfies the second condition, and generating the second response information when it does, may include: when the characteristic parameters are sound parameters, determining the second position of the first user from the sound orientation in the currently obtained sound parameters, comparing the second position with the first position, and judging whether the distance difference between them is greater than a preset threshold; if so, judging that the distance difference between the second position and the first position meets the second condition and generating the second response information; otherwise, performing no operation;
or, when the characteristic parameters are facial feature parameters, taking the direction parameter in the currently acquired facial feature parameters as the second position of the first user, comparing the second position with the first position, and judging whether the distance difference between them is greater than the preset threshold; if so, judging that the distance difference between the second position and the first position meets the second condition and generating the second response information; otherwise, performing no operation.
The displaying, by the touch display unit, of the first user file at the updated first display position may be: according to the second instruction, determining the updated first display position of the first user file, and displaying the first user file at the updated first display position.
Preferably, the determining a first display position according to the first position of the first user may include: determining the display area and display size of the first user file according to the first position of the first user and a preset file display size, and taking them as the first display position.
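An illustrative sketch of deriving the first display position from the user's first position and a preset file display size; the coordinate system, pixel sizes and edge placement are assumptions, not values taken from the patent.

```python
FILE_WIDTH, FILE_HEIGHT = 400, 300      # preset file display size in pixels (assumed)
SCREEN_WIDTH, SCREEN_HEIGHT = 3840, 1080

def first_display_position(user_x):
    """user_x: horizontal position of the user projected onto the screen edge (pixels)."""
    # Centre the file window on the user and clamp it so it stays fully on screen.
    left = min(max(int(user_x - FILE_WIDTH / 2), 0), SCREEN_WIDTH - FILE_WIDTH)
    top = SCREEN_HEIGHT - FILE_HEIGHT   # place the window along the edge nearest the user
    return {"left": left, "top": top, "width": FILE_WIDTH, "height": FILE_HEIGHT}

print(first_display_position(120))    # window hugs the left edge of the screen
print(first_display_position(2000))   # window centred under a user standing mid-screen
```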
Preferably, before the second position of the first user is determined according to the characteristic parameters, the distance difference between the second position and the first position is judged against the second condition and, when the second condition is met, the second response information is generated, it is first determined whether the first user file has undergone any processing or operation. If it has, a timer is started when that processing or operation is completed; only when the timer exceeds a preset timeout threshold without a new processing instruction for the first user file being received are the second position of the first user determined according to the characteristic parameters, the distance difference between the second position and the first position judged against the second condition and, when the second condition is met, the second response information generated. The timeout threshold of the timer may be set according to actual needs, for example to 5 minutes or 10 minutes.
In this way, the display position of the first user's file is not updated merely because the user moves a short distance, for example to communicate with a nearby user, which keeps the file stable while the user is viewing or operating it.
The obtaining of the corresponding first user file according to the identification information of the first user may include: searching for all files corresponding to the identification information of the first user; acquiring the attribute parameters of all these files; and selecting the first user file from them according to the attribute parameters. The attribute parameters may include the name of the file, the last operation time of the file, the number of times the file has been operated, and the like.
The selecting of the first user file from all the files according to the attribute parameters may include: selecting the file most recently operated by the first user as the first user file, according to the last operation time in the attribute parameters; or selecting the file the first user has operated most frequently as the first user file, according to the number of operations in the attribute parameters.
In this way, a file closer to the user's needs can be provided as the first user file.
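A brief sketch of both selection rules, assuming the attribute parameters are stored as simple per-file metadata fields (the field names are illustrative):

```python
files = [
    {"name": "minutes.doc", "last_operation_time": 1700000000, "operation_count": 3},
    {"name": "plan.xls",    "last_operation_time": 1700050000, "operation_count": 12},
    {"name": "sketch.png",  "last_operation_time": 1699990000, "operation_count": 25},
]

def most_recent(user_files):
    # Rule 1: the file the first user operated last.
    return max(user_files, key=lambda f: f["last_operation_time"])

def most_frequent(user_files):
    # Rule 2: the file the first user has operated the most times.
    return max(user_files, key=lambda f: f["operation_count"])

print(most_recent(files)["name"])    # plan.xls   (last operated)
print(most_frequent(files)["name"])  # sketch.png (operated most often)
```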
Or, the obtaining the corresponding first user file according to the identification information of the first user may include: searching all corresponding files according to the identification information of the first user; acquiring attribute parameters of all the files; generating list information of all files corresponding to the first user by using the attribute parameters; and taking the list information of all the files as the first user file.
Preferably, the generating of the list information of all files corresponding to the first user by using the attribute parameters may include: and generating list information of all files corresponding to the first user by using the file names in the attribute parameters of all files.
Preferably, after the list information of all the files is used as the first user file, the user may select a file to be operated from the list information, and the file to be operated selected by the user is displayed at the first display position.
In this way, all the files stored for the user can be presented, and the user can further select the required file from them.
Therefore, by adopting the scheme provided by this embodiment, when the characteristic parameters of the user meet the first condition, the corresponding file can be automatically selected for the user and displayed at a suitable position. A file matching the user's identity and position is thus selected and displayed automatically, a more convenient environment is provided for using the electronic device, and the user experience is improved.
In addition, with the technical solution provided by this embodiment, the display position of the file can be updated as the user's position changes, and the corresponding file is displayed at the updated display position. When the user is only viewing the file and not operating it, the file therefore follows the user's movement without the user having to move it manually, which further improves the user experience.
Fourth Embodiment
The embodiment of the invention also provides an information processing method, which is applied to an electronic device; the electronic device may be a large-screen device and is provided with an acquisition unit and a touch display unit. When the acquisition unit is in a working state and the touch display unit displays a first user file at a first display position, as shown in fig. 6, the method includes:
step 601: and acquiring the current characteristic parameters of the first user acquired by the acquisition unit.
Step 602: and judging whether the characteristic parameters meet a preset first condition, and if not, generating a third instruction.
Step 603: and the touch display unit analyzes the third instruction, acquires a third analysis result, and closes the first user file according to the third analysis result.
The acquisition unit may include: a microphone array, and/or a fisheye camera, and/or a WFOV lens.
Microphone arrays are prior art; a microphone array can be realized by installing a plurality of microphones on the electronic device. Fisheye cameras and WFOV lenses are also prior art; the shooting range of a fisheye camera can reach 220 or 230 degrees, so at most two fisheye cameras are needed to capture all images around the electronic device.
When the collecting unit is a microphone array, the characteristic parameter of the first user may be a sound parameter. Wherein the sound parameters may include: volume level, phase difference, sound bearing, and voiceprint characteristic parameters.
When the acquisition unit is a fisheye camera or a WFOV lens, the characteristic parameters of the first user may include: facial feature parameters. The facial feature parameters may include a distance parameter and a facial image, or include a distance parameter and an eye image, and the like.
The determining whether the characteristic parameter satisfies a preset first condition may include: determining the distance between the first user and the acquisition unit according to the sound parameters or the face characteristic parameters, judging whether the distance between the first user and the acquisition unit is smaller than a specified distance, and if so, judging that the characteristic parameters meet a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
Wherein the determining the distance from the first user to the acquisition unit according to the sound parameters or the facial feature parameters may be: when the characteristic parameters are sound parameters, calculating the distance between the first user and the acquisition unit from the volume level or the phase difference in the sound parameters; or, when the characteristic parameters are facial feature parameters, using the distance parameter as the distance between the first user and the acquisition unit. The calculation of the distance from the volume level or phase difference in the sound parameters is prior art and is not described here again.
Preferably, in the judging of whether the characteristic parameters meet the preset first condition, if the characteristic parameters do meet the first condition, subsequent operations are performed according to the method provided in any one of the first to third embodiments, which is not described here again; the third instruction is generated only when the first condition is not met.
Preferably, the generating of the third instruction may include: determining the identification information of the first user according to the characteristic parameters; determining the corresponding first user file according to the identification information of the first user; and selecting the first user file from the list of all currently opened files and taking an instruction for closing the first user file as the third instruction.
One example of a scenario for operating the above embodiment may be as follows: when a first user file is displayed at a first display position of the touch display unit, continuously acquiring characteristic parameters of a first user in real time;
determining, according to the characteristic parameters, whether the current distance between the first user and the acquisition unit is larger than a preset distance threshold; if so, determining that the preset first condition is not met, and determining the identification information of the first user according to the characteristic parameters; determining the corresponding first user file according to the identification information of the first user; selecting the first user file from the list of all currently opened files, and taking an instruction for closing the first user file as the third instruction;
and closing the first user file displayed in the touch display unit according to the third instruction.
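A minimal sketch of this close-on-departure flow, with assumed data shapes and threshold; it is an illustration of the described behaviour, not the patented implementation itself.

```python
SPECIFIED_DISTANCE = 1.5   # assumed threshold for the first condition, in metres

def check_and_close(params, open_files, display):
    """open_files: mapping of user identification info -> currently opened first user file."""
    if params["distance"] < SPECIFIED_DISTANCE:
        return None                                   # first condition still met, nothing to do
    user_id = params["user_id"]                       # identification from the characteristic parameters
    first_user_file = open_files.get(user_id)
    if first_user_file is None:
        return None
    third_instruction = ("close", first_user_file)    # instruction to close the first user file
    display.remove(first_user_file)                   # touch display unit acts on the instruction
    del open_files[user_id]
    return third_instruction

shown = {"report.docx"}
print(check_and_close({"distance": 3.0, "user_id": "user_a"}, {"user_a": "report.docx"}, shown))
print(shown)   # set() -> the file has been closed on the display
```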
Therefore, by adopting the scheme provided by this embodiment, when the file is displayed at the position corresponding to the user and it is detected that the characteristic parameters of the user no longer meet the first condition, the corresponding file is automatically closed for the user. This avoids the situation in which the user forgets to close the file when leaving the electronic device, and thus protects the user's privacy.
Fifth Embodiment
The embodiment of the invention also provides an information processing method, which is applied to an electronic device; the electronic device may be a large-screen device and is provided with an acquisition unit and a touch display unit. When the acquisition unit is in a working state and the touch display unit displays a first user file at a first display position, as shown in fig. 7, the method includes:
step 701: and acquiring the current characteristic parameters of the first user acquired by the acquisition unit.
Step 702: judging whether the characteristic parameters meet a preset first condition, if not, executing a step 703; otherwise, the process flow is ended.
Step 703: a timer is started.
Step 704: judging whether the timer has reached a preset threshold; if not, executing step 705; if so, executing step 706.
Step 705: if the characteristic parameters now meet the preset first condition, not generating the third instruction, closing the timer and ending the processing flow.
Step 706: generating the third instruction; the touch display unit analyzes the third instruction, obtains a third analysis result, and closes the first user file according to the third analysis result.
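A sketch of the timer-gated flow of steps 701-706, under the assumption that the first condition is re-checked while the timer runs, as steps 704-705 suggest; the polling interval and threshold value are illustrative, and a real device would likely be event-driven rather than polling.

```python
import time

TIMER_THRESHOLD = 30.0     # seconds before the file is closed (assumed value)
POLL_INTERVAL = 1.0

def watch_and_close(read_params, first_condition_met, close_file):
    params = read_params()                               # step 701: current characteristic parameters
    if first_condition_met(params):                      # step 702: user still close enough
        return "kept-open"
    start = time.monotonic()                             # step 703: start the timer
    while time.monotonic() - start < TIMER_THRESHOLD:    # step 704: threshold not yet reached
        time.sleep(POLL_INTERVAL)
        if first_condition_met(read_params()):           # step 705: the user came back in time
            return "kept-open"                           # timer closed, no third instruction
    close_file()                                         # step 706: third instruction -> close the file
    return "closed"
```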
The acquisition unit may be: microphone arrays, fisheye cameras, WFOV lenses, and other hardware devices.
Microphone arrays are prior art; a microphone array can be realized by installing a plurality of microphones on the electronic device. Fisheye cameras and WFOV lenses are also prior art; the shooting range of a fisheye camera can reach 220 or 230 degrees, so at most two fisheye cameras are needed to capture all images around the electronic device.
When the collecting unit is a microphone array, the characteristic parameter of the first user may be a sound parameter. Wherein the sound parameters may include: volume level, phase difference, sound bearing, and voiceprint characteristic parameters.
When the acquisition unit is a fisheye camera or a WFOV lens, the characteristic parameters of the first user may include: facial feature parameters. The facial feature parameters may include a distance parameter and a facial image, or include a distance parameter and an eye image, and the like.
The determining whether the characteristic parameter satisfies a preset first condition may include: determining the distance between the first user and the acquisition unit according to the sound parameters or the face characteristic parameters, judging whether the distance between the first user and the acquisition unit is smaller than a specified distance, and if so, judging that the characteristic parameters meet a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
For example, as shown in fig. 8, a WFOV lens or fisheye camera placed at one end of the electronic device 81 collects facial features of nearby users; the collection range may be as shown in the figure. When the distance 82 from the user to the acquisition unit is not less than the specified distance, it is determined that the characteristic parameters do not meet the preset first condition.
Alternatively, as shown in fig. 9, N microphones 911 are installed on the electronic device 91 to form a microphone array that collects the sound of the first user 92 in real time. When the first user 92 turns and walks away from the electronic device 91, the microphone array analyzes information such as the distance, orientation and voiceprint feature parameters of the first user 92 from the sound waves collected by the N microphones 911; when it determines that the distance between the first user 92 and the electronic device is not smaller than the specified distance, it is determined that the characteristic parameters do not meet the preset first condition.
Wherein the determining the distance from the first user to the acquisition unit according to the sound parameters or the facial feature parameters may be: when the characteristic parameters are sound parameters, calculating the distance between the first user and the acquisition unit from the volume level or the phase difference in the sound parameters; or, when the characteristic parameters are facial feature parameters, using the distance parameter as the distance between the first user and the acquisition unit. The calculation of the distance from the volume level or phase difference in the sound parameters is prior art and is not described here again.
Preferably, in the judging of whether the characteristic parameters meet the preset first condition, if the characteristic parameters do meet the first condition, subsequent operations are performed according to the method provided in any one of the first to third embodiments, which is not described here again; the third instruction is generated only when the first condition is not met.
Preferably, the generating of the third instruction may include: determining the identification information of the first user according to the characteristic parameters; determining the corresponding first user file according to the identification information of the first user; and selecting the first user file from the list of all currently opened files and taking an instruction for closing the first user file as the third instruction.
One example of a scenario for operating the above embodiment may be as follows: when a first user file is displayed at a first display position of the touch display unit, continuously acquiring characteristic parameters of a first user in real time;
determining, according to the characteristic parameters, whether the current distance between the first user and the acquisition unit is larger than a preset distance threshold; if so, determining that the preset first condition is not met, and determining the identification information of the first user according to the characteristic parameters; determining the corresponding first user file according to the identification information of the first user; selecting the first user file from the list of all currently opened files, and taking an instruction for closing the first user file as the third instruction;
and closing the first user file displayed in the touch display unit according to the third instruction.
Preferably, after step 705 is completed, the flow may continue from step 701.
Therefore, by adopting the scheme provided by this embodiment, when the file is displayed at the position corresponding to the user and it is detected that the characteristic parameters of the user no longer meet the first condition, the corresponding file is automatically closed for the user. This avoids the situation in which the user forgets to close the file when leaving the electronic device, and thus protects the user's privacy.
In addition, a timer can be added so that whether the characteristic parameters meet the first condition is judged again before the third instruction is generated. In this way, when the user briefly moves away from the large screen without intending to stop viewing the file, frequent closing and reopening of the first user file is avoided, which preserves the user's operating experience.
Sixth Embodiment
An embodiment of the present invention provides an electronic device, which may be a large-screen device; as shown in fig. 10, the electronic device comprises: an acquisition unit, a touch display unit and an information processing unit; wherein,
the acquisition unit is used for acquiring the current characteristic parameters of the first user when the acquisition unit is in a working state and collects the characteristic parameters of at least one user in real time, and for sending the characteristic parameters to the information processing unit;
the information processing unit is used for acquiring the current characteristic parameters of the first user acquired by the acquisition unit, judging whether the characteristic parameters meet a preset first condition or not, and if the characteristic parameters meet the first condition, generating first response information; according to the first response information, determining a first position of the first user and identification information of the first user by using the characteristic parameters; acquiring a corresponding first user file according to the identification information of the first user, and determining a first display position according to the first position of the first user; generating a first instruction by using the first user file and the first display position, and sending the first instruction to a touch display unit;
the touch display unit is used for enabling the touch display unit to display the first user file at the first display position according to the first instruction.
Here, the acquisition unit may be: a microphone array, a fisheye camera, a WFOV lens, or other hardware devices. The microphone array is prior art and can be realized by installing a plurality of microphones on the electronic device. The fisheye camera or the WFOV lens is also prior art; the shooting range of a fisheye camera can reach 220 or 230 degrees, so at most two fisheye cameras are needed to acquire all of the images around the electronic device.
When the collecting unit is a microphone array, the characteristic parameter of the first user may be a sound parameter. Wherein the sound parameters may include: volume level, phase difference, sound bearing, and voiceprint characteristic parameters.
When the acquisition unit is a fisheye camera or a WFOV lens, the characteristic parameters of the first user may include: facial feature parameters. The facial feature parameters may include a distance parameter and a facial image, or include a distance parameter and an eye image, and the like.
The information processing unit is specifically configured to determine a distance from the first user to the acquisition unit according to the sound parameter or the facial feature parameter, determine whether the distance from the first user to the acquisition unit is smaller than a specified distance, and if so, determine that the feature parameter meets a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
The information processing unit is specifically configured to, when the characteristic parameter is a sound parameter, calculate the distance between the first user and the acquisition unit by using the volume level or the phase difference in the sound parameter; or, when the characteristic parameter is a facial feature parameter, use the distance parameter as the distance between the first user and the acquisition unit. Calculating the distance between the first user and the acquisition unit from the volume level or the phase difference in the sound parameters is prior art and is not described here again.
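A minimal sketch of this distance check is shown below, assuming a simple inverse-square volume model calibrated at a reference distance (real phase-difference ranging would be more involved); the constants and field names are illustrative only, not values taken from the patent.

```python
import math

REFERENCE_VOLUME = 1.0       # assumed volume measured at the reference distance
REFERENCE_DISTANCE_M = 1.0   # assumed reference distance in metres
SPECIFIED_DISTANCE_M = 1.5   # assumed "specified distance" of the first condition

def distance_from_volume(volume):
    """Rough inverse-square estimate: perceived volume falls off with distance squared."""
    return REFERENCE_DISTANCE_M * math.sqrt(REFERENCE_VOLUME / max(volume, 1e-9))

def first_condition_met(feature_params):
    """True when the user is closer to the acquisition unit than the specified distance."""
    if "volume" in feature_params:                   # sound parameters
        distance_m = distance_from_volume(feature_params["volume"])
    else:                                            # facial feature parameters
        distance_m = feature_params["distance_m"]    # distance already measured by the lens
    return distance_m < SPECIFIED_DISTANCE_M
```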
Preferably, the information processing unit is specifically configured to, when the feature parameter is a sound parameter, take a sound orientation parameter in the sound parameter as a first location of the first user, extract a voiceprint feature parameter in the sound parameter, compare the voiceprint feature parameter with a voiceprint feature in a pre-stored user voiceprint feature table, and extract identification information corresponding to the first user; wherein the user voiceprint feature table may include: identification information of the user and corresponding voiceprint characteristic parameters.
Or, when the feature parameter is a face feature parameter, taking a direction parameter in the face feature parameter as a first position of the first user, extracting a face feature or an eye feature value in the face feature parameter, comparing the face feature or the eye feature value with a face feature or an eye feature value in a pre-stored user feature table, and extracting identification information corresponding to the first user.
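The table lookup could be pictured as a nearest-template match over feature vectors; the cosine-similarity measure and the 0.8 acceptance threshold below are assumptions for illustration, not the patent's actual matching method.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors in [0, 1] for non-negative features."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify_user(extracted_features, user_feature_table, min_score=0.8):
    """Return the identification information whose stored voiceprint (or facial/eye)
    template best matches the extracted feature vector, or None if nothing matches."""
    best_id, best_score = None, min_score
    for user_id, template in user_feature_table.items():
        score = cosine_similarity(extracted_features, template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id
```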
Therefore, by adopting the scheme provided by the embodiment, when the characteristic parameters of the user meet the first condition, the corresponding file can be automatically selected for the user and displayed at a proper position. Therefore, the file can be selected and displayed according with the identity and the position of the user, a more convenient environment is provided for the user to use the electronic equipment, and the user experience is improved.
Example Seven
An electronic device, the electronic device comprising: an acquisition unit, a touch display unit and an information processing unit; wherein,
the acquisition unit is used for acquiring the characteristic parameters of a first user when the acquisition unit is in a working state and sending the characteristic parameters to the information processing unit;
the information processing unit is used for acquiring the current characteristic parameters of the first user acquired by the acquisition unit, judging whether the characteristic parameters meet a preset first condition or not, and if the characteristic parameters meet the first condition, generating first response information; according to the first response information, determining a first position of the first user and identification information of the first user by using the characteristic parameters; acquiring a corresponding first user file according to the identification information of the first user, and determining a first display position according to the first position of the first user; generating a first instruction by using the first user file and the first display position, and sending the first instruction to a touch display unit;
the touch display unit is used for displaying the first user file at the first display position according to the first instruction.
Here, the acquisition unit may include: a microphone array, and/or a fisheye camera, and/or a WFOV lens.
The microphone array is prior art and can be realized by installing a plurality of microphones on the electronic device.
The fisheye camera or the WFOV lens is also prior art; the shooting range of a fisheye camera can reach 220 or 230 degrees, so at most two fisheye cameras are needed to acquire all of the images around the electronic device.
For example, as shown in fig. 2, a fisheye camera or WFOV lens 211 is used to collect facial features of the users around the device; the fisheye camera or WFOV lens 211 is placed at one end of the electronic device 21, and its collection range may be as shown in the figure, so that the facial feature parameters of the user can be collected and the distance and orientation between the WFOV lens and the user can be obtained.
As shown in fig. 3, N microphones 311 are installed on the electronic device 31 to form a microphone array. When a user makes a sound, each of the N microphones 311 collects the sound wave of the sound made by the user, and the volume, sound direction and voiceprint characteristic parameters of the user are determined from the sound waves collected by the microphones and their directions; the implementation of the microphone array used by the embodiments of the present invention is prior art and is not described in detail here.

Preferably, as shown in fig. 4, when a microphone array is formed by N microphones 411 on the electronic device 41 and there are multiple users beside the electronic device 41, for example a first user 41 and a second user 42 as shown in the figure, the microphone array may collect the sound waves emitted by each user and classify the collected sound waves by their voiceprint characteristic parameters, so as to obtain the characteristic parameters of the first user 41 and the second user 42 respectively; the specific implementation is prior art and is not described here.
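One way to picture the voiceprint-based classification of sound waves into per-user characteristic parameters is the simple grouping below; the frame structure, the similarity callable and the 0.7 threshold are all assumptions, and the real voiceprint separation is prior art that the patent leaves undescribed.

```python
def group_frames_by_user(frames, user_voiceprints, similarity, min_score=0.7):
    """Assign each captured sound frame to the enrolled user whose voiceprint
    template it most resembles; frames scoring below min_score stay unassigned.

    frames:           list of dicts, each with a 'voiceprint' feature vector
    user_voiceprints: mapping from user identification to a template vector
    similarity:       callable scoring two vectors (e.g. a cosine similarity)
    """
    per_user = {user_id: [] for user_id in user_voiceprints}
    for frame in frames:
        best_id, best = None, min_score
        for user_id, template in user_voiceprints.items():
            score = similarity(frame["voiceprint"], template)
            if score > best:
                best_id, best = user_id, score
        if best_id is not None:
            per_user[best_id].append(frame)
    return per_user
```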
When the collecting unit is a microphone array, the characteristic parameter of the first user may be a sound parameter. Wherein the sound parameters may include: volume level, phase difference, sound bearing, and voiceprint characteristic parameters.
When the acquisition unit is a fisheye camera or a WFOV lens, the characteristic parameters of the first user may include: facial feature parameters.
The information processing unit is specifically configured to determine, according to the sound parameter or the face feature parameter, whether a distance between the first user and the acquisition unit is smaller than a specified distance, and if so, determine that the feature parameter meets a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
Preferably, the information processing unit is specifically configured to, when the feature parameter is a sound parameter, take a sound orientation parameter in the sound parameter as a first location of the first user, extract a voiceprint feature parameter in the sound parameter, compare the voiceprint feature parameter with a voiceprint feature in a pre-stored user voiceprint feature table, and extract identification information corresponding to the first user; wherein the user voiceprint feature table may include: identification information of the user and corresponding voiceprint characteristic parameters.
Or, when the feature parameter is a face feature parameter, taking a direction parameter in the face feature parameter as a first position of the first user, extracting a face feature or an eye feature value in the face feature parameter, comparing the face feature or the eye feature value with a face feature or an eye feature value in a pre-stored user feature table, and extracting identification information corresponding to the first user.
The information processing unit is also used for receiving the characteristic parameters of the first user acquired by the acquisition unit after the first instruction is generated; when the characteristic parameter meets a preset first condition, determining a second position of the first user according to the characteristic parameter, judging whether a distance difference value between the second position and the first position meets the second condition, and if the distance difference value meets the second condition, generating second response information; according to the second response information, the second position is used as the updated first position; determining an updated first display position using the updated first position; generating a second instruction by using the updated first display position and the first user file, and sending the second instruction to the touch display unit; correspondingly, the touch display unit is further configured to display the first user file at the updated first display position according to the second instruction.
The information processing unit is specifically configured to, when the feature parameter is a sound parameter, determine a second position of the first user according to a sound direction in the sound parameter that is currently obtained, compare a distance difference between the second position and the first position, determine whether the distance difference is greater than a preset threshold value, if so, determine that the distance difference between the second position and the first position meets a second condition, and generate second response information; otherwise, no operation is performed;
or, when the feature parameter is a face feature parameter, taking a direction parameter in the currently acquired face feature parameter as a second position of the first user, comparing a distance difference between the second position and the first position, judging whether the distance difference is greater than a preset threshold value, if so, judging that the distance difference between the second position and the first position meets a second condition, and generating second response information; otherwise, no operation is performed.
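Putting the two branches together, the position-update decision amounts to a single distance comparison. The sketch below treats positions as coordinates in an assumed frame parallel to the screen and uses an arbitrary 0.5 m threshold; none of these values come from the patent.

```python
def maybe_update_position(first_pos, second_pos, threshold_m=0.5):
    """Return (updated_first_pos, second_response_generated).

    first_pos, second_pos: (x, y) coordinates of the user, in metres,
    in an assumed frame parallel to the large screen.
    """
    dx = second_pos[0] - first_pos[0]
    dy = second_pos[1] - first_pos[1]
    moved = (dx * dx + dy * dy) ** 0.5
    if moved > threshold_m:            # second condition met
        return second_pos, True        # second response information is generated
    return first_pos, False            # otherwise, no operation is performed
```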
The touch display unit is specifically configured to determine, according to the second instruction, the updated first display position of the first user file, and display the first user file at the updated first display position.
Preferably, the information processing unit is specifically configured to determine, according to the first position of the first user and a preset file display size, the display area and display size of the first user file, and to take them as the first display position.
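For instance, on a horizontal large screen one could centre a window of the preset file display size on the user's position along one edge. The geometry below is only one plausible reading of "display area and display size as the first display position", with made-up pixel dimensions.

```python
def first_display_position(user_x_px, screen_w_px=3840, screen_h_px=1080,
                           file_w_px=600, file_h_px=400):
    """Clamp a window of the preset file display size so that it is centred on
    the user's horizontal position and stays fully on the screen."""
    left = min(max(user_x_px - file_w_px // 2, 0), screen_w_px - file_w_px)
    top = screen_h_px - file_h_px   # place it along the edge nearest the user
    return {"left": left, "top": top, "width": file_w_px, "height": file_h_px}
```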
Preferably, before determining the second position of the first user according to the characteristic parameter, judging whether the distance difference between the second position and the first position satisfies the second condition, and generating the second response information if the second condition is satisfied, the information processing unit may first judge whether the first user file has undergone any processing or operation. If it has, a timer is started when the processing or operation is completed; only when the timer exceeds a preset timeout threshold and no new processing instruction for the first user file has been received does the information processing unit begin to determine the second position of the first user according to the characteristic parameter, judge whether the distance difference between the second position and the first position satisfies the second condition, and generate the second response information if the second condition is satisfied. The timeout threshold of the timer may be set according to actual needs, for example to 5 minutes or 10 minutes.
In this way, the display position of the first user's file is not updated merely because the user moves a short distance, for example to communicate with other users nearby, which ensures a stable view while the user is checking or operating the file.
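The timer gate just described might look like the small helper below, where the 5-minute default mirrors the example timeout; the class and method names are invented for illustration.

```python
import time

class PositionUpdateGate:
    """Allow display-position updates only after the user's last operation on
    the first user file has been idle for longer than the timeout."""

    def __init__(self, timeout_s=5 * 60):
        self.timeout_s = timeout_s
        self.last_operation_ts = None

    def note_operation_finished(self):
        """Call when a processing or operation on the file completes (starts the timer)."""
        self.last_operation_ts = time.monotonic()

    def updates_allowed(self):
        """True if no new processing instruction arrived within the timeout."""
        if self.last_operation_ts is None:
            return True  # the file has not been operated on yet
        return time.monotonic() - self.last_operation_ts > self.timeout_s
```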
The information processing unit is specifically configured to search for all files corresponding to the identification information of the first user, acquire the attribute parameters of these files, and select the first user file from them according to the attribute parameters; the attribute parameters may include the name of a file, the time a file was last operated on, the number of times a file has been operated on, and the like.
The information processing unit is specifically configured to select, according to the last operation time in the attribute parameters, the file most recently operated on by the first user as the first user file; or to select, according to the number of operations in the attribute parameters, the file operated on most frequently by the first user as the first user file.
In this way, the file most likely to match the user's needs can be provided as the first user file.
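Selecting the first user file from the attribute parameters then reduces to taking a maximum over one of two keys; the dictionary field names below are placeholders for whatever attribute parameters the device actually stores.

```python
def select_first_user_file(files, by="last_operated"):
    """Pick the first user file either by most recent operation time or by
    highest operation count.

    files: list of dicts with 'name', 'last_operated' (timestamp) and
           'operation_count' attribute parameters.
    """
    if not files:
        return None
    key = "last_operated" if by == "last_operated" else "operation_count"
    return max(files, key=lambda f: f[key])
```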
Or, the information processing unit is specifically configured to search for all files corresponding to the identification information of the first user, acquire the attribute parameters of these files, generate list information of all the files corresponding to the first user by using the attribute parameters, and take the list information of all the files as the first user file.
Preferably, the information processing unit is specifically configured to generate the list information of all the files corresponding to the first user by using the file names in the attribute parameters of all the files.
Preferably, after the list information of all the files is taken as the first user file, the user may select the file to be operated on from the list information, and the file selected by the user is then displayed at the first display position.
In this way, the user can be shown all of the files they have stored and can then select the required file from among them.
Therefore, by adopting the scheme provided by the embodiment, when the characteristic parameters of the user meet the first condition, the corresponding file can be automatically selected for the user and displayed at a proper position. Therefore, the file can be selected and displayed according with the identity and the position of the user, a more convenient environment is provided for the user to use the electronic equipment, and the user experience is improved.
In addition, due to the technical scheme provided by the embodiment, the display position of the file can be updated along with the change of the position of the user, and the corresponding file is displayed at the updated display position. Therefore, when the user only checks the file and does not operate the file, the file can be moved according to the movement of the user without manually moving the file by the user, and the use experience of the user can be further improved.
Example Eight
An embodiment of the present invention further provides an electronic device, where the electronic device includes: an acquisition unit, a touch display unit and an information processing unit; wherein,
the acquisition unit is used for acquiring the current characteristic parameters of the first user;
the information processing unit is used for acquiring the current characteristic parameter of the first user from the acquisition unit when the acquisition unit is in a working state and the touch display unit displays a first user file at a first display position, judging whether the characteristic parameter meets a preset first condition or not, if not, generating a third instruction, and sending the third instruction to the touch display unit;
and the touch display unit is used for analyzing the third instruction, acquiring a third analysis result and closing the first user file according to the third analysis result.
The information processing unit is specifically configured to start a timer and judge whether the timer has reached a preset threshold; if it has not, and the characteristic parameter meets the preset first condition again, the third instruction is not generated and the timer is stopped; if the timer reaches the preset threshold, the third instruction is generated.
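A plain-Python sketch of that timer logic follows: the device re-evaluates the first condition during a grace period and only issues the close instruction once the timer reaches its threshold. The callable argument, the 10-second default and the polling interval are assumptions for illustration only.

```python
import time

def decide_third_instruction(first_condition_met, grace_period_s=10.0, poll_s=0.5):
    """Re-check the first condition until the timer reaches its threshold.

    first_condition_met: zero-argument callable returning True when the user's
    characteristic parameter meets the preset first condition again.
    """
    deadline = time.monotonic() + grace_period_s   # start the timer
    while time.monotonic() < deadline:
        if first_condition_met():
            return None          # condition met again: no third instruction, timer closed
        time.sleep(poll_s)
    return "THIRD_INSTRUCTION"   # timer reached the threshold: close the first user file
```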
The acquisition unit may be: microphone arrays, and/or fisheye cameras, or WFOV lenses. The microphone array is a prior art, and can be realized by installing a plurality of microphones on an electronic device. The fisheye camera or the WFOV lens is the prior art, wherein the shooting range of the fisheye camera can reach 220 degrees or 230 degrees, so that at most two fisheye cameras can be used for acquiring all images around the electronic equipment.
For example, as shown in fig. 8, a WFOV lens or a fisheye camera is used to collect facial features of the users around the device; the WFOV lens or fisheye camera is placed at one end of the electronic device 81, and its collection range may be as shown in the figure. When the distance 82 from the user to the acquisition unit is not less than the specified distance, it is determined that the characteristic parameter does not satisfy the preset first condition.
Alternatively, as shown in fig. 9, N microphones 911 are installed on the electronic device 91 to form a microphone array, so as to collect the sound of the first user 92 in real time; when the first user 92 turns away from the electronic device 91, the microphone array analyzes information such as a distance, an orientation, a voiceprint characteristic parameter and the like of the first user 92 through sound waves collected by the N microphones 911, and when it is determined that the distance between the first user 92 and the electronic device is not smaller than a specified distance, it is determined that the characteristic parameter does not meet a preset first condition.
When the collecting unit is a microphone array, the characteristic parameter of the first user may be a sound parameter. Wherein the sound parameters may include: volume level, phase difference, sound bearing, and voiceprint characteristic parameters.
When the acquisition unit is a fisheye camera or a WFOV lens, the characteristic parameters of the first user may include: facial feature parameters. The facial feature parameters may include a distance parameter and a facial image, or include a distance parameter and an eye image, and the like.
The information processing unit is specifically configured to determine a distance from the first user to the acquisition unit according to the sound parameter or the facial feature parameter, determine whether the distance from the first user to the acquisition unit is smaller than a specified distance, and if so, determine that the feature parameter meets a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
The information processing unit is specifically configured to, when the characteristic parameter is a sound parameter, calculate a distance between the first user and the acquisition unit by using a sound magnitude or a phase difference in the sound parameter; or when the feature parameter is a facial feature parameter, using the distance parameter as the distance between the first user and the acquisition unit.
Preferably, the information processing unit is specifically configured to, when the characteristic parameter meets a preset first condition, perform subsequent operations according to the method provided in any one of the first to third embodiments, which is not described herein again.
Preferably, the information processing unit is specifically configured to determine the identification information of the first user according to the characteristic parameter, determine the corresponding first user file according to the identification information of the first user, select the first user file from the list of all currently opened files, and take an instruction to close the first user file as the third instruction.
One example of a scenario for operating the above embodiment may be as follows: when a first user file is displayed at a first display position of the touch display unit, continuously acquiring characteristic parameters of a first user in real time;
determining, according to the characteristic parameters, whether the current distance between the first user and the acquisition unit is larger than a preset distance threshold; if so, determining that the preset first condition is not met, and determining the identification information of the first user according to the characteristic parameters; determining the corresponding first user file according to the identification information of the first user; selecting the first user file from the list of all currently opened files, and taking an instruction to close the first user file as the third instruction;
and closing the first user file displayed in the touch display unit according to the third instruction.
Therefore, by adopting the scheme provided by this embodiment, when the file is displayed at the position corresponding to the user, if it is detected that the characteristic parameter of the user no longer meets the first condition, the corresponding file is automatically closed for the user. This avoids the situation in which the user forgets to close the file when leaving the electronic device, and protects the user's privacy.
In addition, a timer can be added so that, before the third instruction is generated, it is judged again whether the characteristic parameter meets the first condition. In this way, when the user moves slightly away from the large screen but does not intend to stop viewing the file, frequent closing and reopening of the first user file is avoided, which preserves the user's operating experience.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (12)

1. An information processing method is applied to electronic equipment, and the electronic equipment is provided with an acquisition unit and a touch display unit; when the acquisition unit is in a working state and acquires the characteristic parameters of at least one user in real time, the method comprises the following steps:
acquiring the current characteristic parameters of the first user acquired by the acquisition unit; wherein the at least one user comprises: the first user and the second user;
judging whether the characteristic parameters meet a preset first condition or not, and generating first response information when the characteristic parameters meet the first condition;
according to the first response information, determining a first position of the first user and identification information of the first user by using the characteristic parameters;
acquiring a corresponding first user file according to the identification information of the first user, and determining a first display position according to the first position of the first user;
generating a first instruction by using the first user file and the first display position;
according to the first instruction, the touch display unit displays the first user file at the first display position;
the judging whether the characteristic parameters meet a preset first condition or not comprises the following steps: determining the distance between the first user and the acquisition unit according to the characteristic parameters, judging whether the distance between the first user and the acquisition unit is smaller than a specified distance, and if so, judging that the characteristic parameters meet a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
2. The method of claim 1, wherein after the touch display unit displays the first user file at the first display location, the method further comprises:
continuously acquiring the current characteristic parameters of the first user acquired by the acquisition unit;
when the characteristic parameter meets a preset first condition, determining a second position of the first user according to the characteristic parameter, judging whether a distance difference value between the second position and the first position meets the second condition, and if the distance difference value meets the second condition, generating second response information;
according to the second response information, the second position is used as the updated first position;
determining an updated first display position using the updated first position;
generating a second instruction by using the updated first display position and the first user file;
and according to the second instruction, the touch display unit displays the first user file at the updated first display position.
3. The method according to claim 1, wherein the obtaining the corresponding first user file according to the identification information of the first user comprises:
searching all corresponding files according to the identification information of the first user; acquiring attribute parameters of all the files; and selecting a first user file from all the files according to the attribute parameters.
4. The method according to claim 1, wherein the obtaining the corresponding first user file according to the identification information of the first user comprises:
searching all corresponding files according to the identification information of the first user; acquiring attribute parameters of all the files; generating list information of all files corresponding to the first user by using the attribute parameters; and taking the list information as a first user file.
5. An information processing method is applied to electronic equipment, and the electronic equipment is provided with an acquisition unit and a touch display unit; when the acquisition unit is in a working state and the touch display unit displays a first user file at a first display position, the method comprises the following steps:
acquiring current characteristic parameters of a first user in at least one user acquired by the acquisition unit; wherein the at least one user comprises: the first user and the second user;
judging whether the characteristic parameters meet a preset first condition or not, and if not, generating a third instruction;
the touch display unit analyzes the third instruction, obtains a third analysis result, and closes the first user file according to the third analysis result;
the judging whether the characteristic parameters meet a preset first condition or not comprises the following steps: determining the distance between the first user and the acquisition unit according to the characteristic parameters, judging whether the distance between the first user and the acquisition unit is smaller than a specified distance, and if so, judging that the characteristic parameters meet a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
6. The method of claim 5, wherein generating the third instruction comprises:
starting a timer;
judging whether the timer reaches a preset threshold value or not,
if not, when the characteristic parameter meets a preset first condition, not generating a third instruction, and closing the timer;
if so, a third instruction is generated.
7. An electronic device, the electronic device comprising: the system comprises an acquisition unit, a touch display unit and an information processing unit; wherein,
the acquisition unit is used for acquiring the current characteristic parameters of the first user when the acquisition unit is in a working state and acquires the characteristic parameters of at least one user in real time, and sending the characteristic parameters to the information processing unit; wherein the at least one user comprises: the first user and the second user;
the information processing unit is used for acquiring the current characteristic parameters of the first user acquired by the acquisition unit, judging whether the characteristic parameters meet a preset first condition or not, and if the characteristic parameters meet the first condition, generating first response information; according to the first response information, determining a first position of the first user and identification information of the first user by using the characteristic parameters; acquiring a corresponding first user file according to the identification information of the first user, and determining a first display position according to the first position of the first user; generating a first instruction by using the first user file and the first display position, and sending the first instruction to a touch display unit;
the touch display unit is used for enabling the touch display unit to display the first user file at the first display position according to the first instruction;
the judging whether the characteristic parameters meet a preset first condition or not comprises the following steps: determining the distance between the first user and the acquisition unit according to the characteristic parameters, judging whether the distance between the first user and the acquisition unit is smaller than a specified distance, and if so, judging that the characteristic parameters meet a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
8. The electronic device of claim 7,
the information processing unit is also used for acquiring the current characteristic parameters of the first user acquired by the acquisition unit after the first instruction is generated; when the characteristic parameter meets a preset first condition, determining a second position of the first user according to the characteristic parameter, judging whether a distance difference value between the second position and the first position meets the second condition, and if the distance difference value meets the second condition, generating second response information; according to the second response information, the second position is used as the updated first position; determining an updated first display position using the updated first position; generating a second instruction by using the updated first display position and the first user file, and sending the second instruction to the touch display unit;
correspondingly, the touch display unit is further configured to, according to the second instruction, cause the touch display unit to display the first user file at the updated first display position.
9. The electronic device of claim 7,
the information processing unit is specifically configured to search all corresponding files according to the identification information of the first user; acquiring attribute parameters of all the files; and selecting a first user file from all the files according to the attribute parameters.
10. The electronic device of claim 7,
the information processing unit is specifically configured to search all corresponding files according to the identification information of the first user; acquiring attribute parameters of all the files; generating list information of all files corresponding to the first user by using the attribute parameters; and taking the list information as a first user file.
11. An electronic device, the electronic device comprising: the system comprises an acquisition unit, a touch display unit and an information processing unit; wherein,
the acquisition unit is used for acquiring the current characteristic parameters of a first user in at least one user; wherein the at least one user comprises: the first user and the second user;
the information processing unit is used for acquiring the current characteristic parameters of the first user acquired by the acquisition unit when the acquisition unit is in a working state and the touch display unit displays a first user file at a first display position, judging whether the characteristic parameters meet a preset first condition or not, if not, generating a third instruction, and sending the third instruction to the touch display unit;
the touch display unit is used for analyzing the third instruction, acquiring a third analysis result and closing the first user file according to the third analysis result;
the judging whether the characteristic parameters meet a preset first condition or not comprises the following steps: determining the distance between the first user and the acquisition unit according to the characteristic parameters, judging whether the distance between the first user and the acquisition unit is smaller than a specified distance, and if so, judging that the characteristic parameters meet a preset first condition; otherwise, judging that the characteristic parameters do not meet the preset first condition.
12. The electronic device of claim 11,
the information processing unit is specifically used for starting a timer; judging whether the timer reaches a preset threshold value, if not, when the characteristic parameter meets a preset first condition, not generating a third instruction, and closing the timer; if so, a third instruction is generated.
CN201410014374.9A 2014-01-13 2014-01-13 A kind of information processing method and electronic equipment Active CN104777989B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410014374.9A CN104777989B (en) 2014-01-13 2014-01-13 A kind of information processing method and electronic equipment
US14/230,704 US10613687B2 (en) 2014-01-13 2014-03-31 Information processing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410014374.9A CN104777989B (en) 2014-01-13 2014-01-13 A kind of information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN104777989A CN104777989A (en) 2015-07-15
CN104777989B true CN104777989B (en) 2019-03-29

Family

ID=53619489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410014374.9A Active CN104777989B (en) 2014-01-13 2014-01-13 A kind of information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN104777989B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2019044104A1 (en) 2017-08-31 2020-08-13 ソニー株式会社 Information processing apparatus, information processing method, and program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101286201A (en) * 2008-05-30 2008-10-15 北京中星微电子有限公司 Information automatic asking control method and device
CN102446096A (en) * 2011-09-20 2012-05-09 宇龙计算机通信科技(深圳)有限公司 Terminal and position-based display method
CN103049084A (en) * 2012-12-18 2013-04-17 深圳国微技术有限公司 Electronic device and method for adjusting display direction according to face direction
CN103186326A (en) * 2011-12-27 2013-07-03 联想(北京)有限公司 Application object operation method and electronic equipment
CN103279260A (en) * 2013-04-10 2013-09-04 苏州三星电子电脑有限公司 Direction self-adaptation display system and adjusting method thereof

Also Published As

Publication number Publication date
CN104777989A (en) 2015-07-15

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant