CN104424073A - Information processing method and electronic equipment


Info

Publication number
CN104424073A
CN104424073A
Authority
CN
China
Prior art keywords
state
user
sound
unit
data acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310366996.3A
Other languages
Chinese (zh)
Inventor
付荣耀
杨锦平
许银
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201310366996.3A priority Critical patent/CN104424073A/en
Publication of CN104424073A publication Critical patent/CN104424073A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information processing method and an electronic device. The electronic device comprises a data acquisition unit. The method comprises the following steps: when a user is using the electronic device, detecting a first position state that characterizes the position relationship between the user and the data acquisition unit; judging whether the first position state meets a preset position state, to obtain a first judgment result; and, when the first judgment result shows that the first position state does not meet the preset position state, generating and outputting first prompt information for prompting the user to adjust the position relationship.

Description

Information processing method and electronic device
Technical field
The present application relates to the field of electronic technology, and in particular to an information processing method and an electronic device.
Background technology
With the rapid development of network and electronic technology, more and more electronic devices have entered people's lives. Their functions have become richer and more user-friendly, giving users a better experience. Taking mobile phones as an example, today's smartphone is effectively a small computer: it not only has ample storage space and can install all kinds of software, but its various functions are also increasingly refined and humanized.
At present, data collectors are commonly installed in electronic devices to collect the user's physiological information, for example sound collection devices or image collection devices. Many interaction modes between the user and the device can only be realized on the basis of this input: in voice control of the device or in a voice call, the basic precondition is to obtain the user's sound information; likewise, when recording video of the user or taking pictures, the user's image must fall within the coverage of the camera.
However, in the process of realizing the technical solution of the present application, the inventors found that the above technology has at least the following technical problems:
Whether it is a sound collection device or an image collection device, there is an optimal acquisition range, and only the user's physiological information collected within this optimal range yields the clearest data. Existing data acquisition devices, however, collect data directly as soon as they are switched on, without judging whether the user's data are being collected within the optimal acquisition range. Existing data acquisition devices therefore cannot judge, before collecting data, whether the user is within the optimal acquisition range; nor can they generate prompt information to prompt the user to adjust into the optimal acquisition range.
Summary of the invention
The embodiment of the present application provides an information processing method and an electronic device, to solve the technical problem that a data acquisition device in the prior art cannot judge, before collecting data, whether the user is within the optimal acquisition range, and cannot generate prompt information to prompt the user to adjust into the optimal acquisition range.
In one aspect, the embodiment of the present application provides an information processing method applied to an electronic device comprising a data acquisition unit. The method comprises:
When a user is using the electronic device, detecting a first position state that characterizes the current position relationship between the user and the data acquisition unit; judging whether the first position state meets a preset position state, to obtain a first judgment result; and, when the first judgment result shows that the first position state does not meet the preset position state, generating and outputting first prompt information for prompting the user to adjust the position relationship.
Optionally, when the data acquisition unit is at least one sound collection unit, the first position state is a first distance state and/or a first angle state.
Optionally, detecting the first position state that characterizes the current position relationship between the user and the data acquisition unit comprises: receiving a first sound of the user whose sound intensity is a first sound intensity; and, based on the first sound intensity, detecting the first distance state and/or the first angle state.
Optionally, when the data acquisition unit is a single sound collection unit, detecting the first distance state based on the first sound intensity is: based on the first sound intensity, obtaining the first distance state characterizing the distance between the user and the sound collection unit.
Optionally, when the data acquisition unit is a sound collection unit array comprising M sound collection units, M being an integer greater than or equal to 2, detecting the first distance state and/or the first angle state based on the first sound intensity comprises: based on the first sound intensity, obtaining the reception time at which each sound collection unit in the array receives the first sound, yielding M reception times; calculating the time differences between the M reception times, yielding M-1 time differences; and, based on the M-1 time differences, obtaining the first distance state characterizing the distance between the user and the M sound collection units and/or the first angle state characterizing the angle between the user and the M sound collection units.
Optionally, judging whether the first position state meets a preset position state to obtain the first judgment result is: judging whether the first distance state meets a preset distance state, and/or whether the first angle state meets a preset angle state, to obtain the first judgment result; wherein, when the first distance state does not meet the preset distance state and/or the first angle state does not meet the preset angle state, the first judgment result shows that the first position state does not meet the preset position state.
Optionally, when the data acquisition unit is an image collection unit, detecting the first position state that characterizes the current position relationship between the user and the data acquisition unit comprises: detecting whether the user's face is within the acquisition range of the image collection unit; and, when the face is present, detecting the first position state of the user's face within the acquisition range.
Optionally, detecting the first position state of the user's face within the acquisition range when the face is present comprises: when the face is present, detecting a first light intensity on the face; and, based on the first light intensity, detecting the first position state of the face within the acquisition range.
Optionally, the first prompt information may be vibration prompt information, sound prompt information, or light prompt information.
Optionally, generating and outputting the first prompt information for prompting the user to adjust the position relationship is: generating, by a multi-directional force feedback device, vibration prompt information for prompting the user to adjust the position relationship, and outputting the vibration prompt information; or generating, by a sound prompt device, sound prompt information for prompting the user to adjust the position relationship, and outputting the sound prompt information; or generating, by a light prompt device, light prompt information for prompting the user to adjust the position relationship, and outputting the light prompt information.
Optionally, before the step of detecting, when the user is using the electronic device, the first position state that characterizes the current position relationship between the user and the data acquisition unit, the method further comprises: when the user is using the electronic device, detecting whether the data acquisition unit is in use; wherein, when the data acquisition unit is in use, the step of detecting the first position state that characterizes the current position relationship between the user and the data acquisition unit is performed.
In another aspect, the embodiment of the present application also provides an electronic device comprising a data acquisition unit. The electronic device further comprises:
a detection unit, configured to detect, when a user is using the electronic device, a first position state that characterizes the current position relationship between the user and the data acquisition unit; a judgment unit, configured to judge whether the first position state meets a preset position state, to obtain a first judgment result; and a generation unit, configured to generate and output, when the first judgment result shows that the first position state does not meet the preset position state, first prompt information for prompting the user to adjust the position relationship.
Optionally, when the data acquisition unit is at least one sound collection unit, the first position state is a first distance state and/or a first angle state.
Optionally, the detection unit comprises: a sound reception unit, configured to receive a first sound of the user whose sound intensity is a first sound intensity; and a first detection subunit, configured to detect the first distance state and/or the first angle state based on the first sound intensity.
Optionally, when the data acquisition unit is a single sound collection unit, the first detection subunit is configured to obtain, based on the first sound intensity, the first distance state characterizing the distance between the user and the sound collection unit.
Optionally, when the data acquisition unit is a sound collection unit array comprising M sound collection units, M being an integer greater than or equal to 2, the first detection subunit comprises: a time reception unit, configured to obtain, based on the first sound intensity, the reception time at which each sound collection unit in the array receives the first sound, yielding M reception times; a calculation unit, configured to calculate the time differences between the M reception times, yielding M-1 time differences; and a first obtaining subunit, configured to obtain, based on the M-1 time differences, the first distance state characterizing the distance between the user and the M sound collection units and/or the first angle state characterizing the angle between the user and the M sound collection units.
Optionally, the judgment unit is configured to judge whether the first distance state meets a preset distance state, and/or whether the first angle state meets a preset angle state, to obtain the first judgment result; wherein, when the first distance state does not meet the preset distance state and/or the first angle state does not meet the preset angle state, the first judgment result shows that the first position state does not meet the preset position state.
Optionally, when the data acquisition unit is an image collection unit, the detection unit comprises: a face detection unit, configured to detect whether the user's face is within the acquisition range of the image collection unit; and a second detection subunit, configured to detect, when the face is present, the first position state of the user's face within the acquisition range.
Optionally, the second detection subunit comprises: a light reception unit, configured to detect a first light intensity on the face when the face is present; and a second obtaining subunit, configured to detect, based on the first light intensity, the first position state of the face within the acquisition range.
Optionally, the first prompt information may be vibration prompt information, sound prompt information, or light prompt information.
Optionally, the generation unit is: a multi-directional force feedback device, configured to generate vibration prompt information for prompting the user to adjust the position relationship and to output the vibration prompt information; or a sound prompt device, configured to generate sound prompt information for prompting the user to adjust the position relationship and to output the sound prompt information; or a light prompt device, configured to generate light prompt information for prompting the user to adjust the position relationship and to output the light prompt information.
The one or more technical solutions provided in the embodiment of the present application have at least the following technical effects or advantages:
(1) In the embodiment of the present application, when the user is using the data acquisition unit in the electronic device, the position state between the user and the data acquisition unit is detected and judged against a preset position state, and prompt information is generated and output only when the preset state is not met, prompting the user to adjust the position state relative to the data acquisition unit. This solves the technical problem that a prior-art data acquisition device cannot judge, before collecting data, whether the user is within the optimal acquisition range, nor generate prompt information to prompt the user to adjust into that range. The effect achieved is that, when the user uses the data acquisition device, the device automatically detects its position state relative to the user and, when that state does not meet the preset state, prompts the user to adjust it.
(2) In the embodiment of the present application, after the microphone obtains the user's sound, the distance and deviation angle between the user and the microphone are detected from the sound intensity, and it is then judged whether the user's current position state is the preset position state; if not, prompt information is generated and output to prompt the user to adjust the position relationship with the microphone. This solves the technical problem that an existing microphone cannot detect its position relationship with the user from the user's sound, and achieves the effect of generating prompt information according to the user's position relationship with the microphone so that the user can make the corresponding adjustment.
Accompanying drawing explanation
Fig. 1 is a flowchart of the information processing method provided by the embodiment of the present application;
Fig. 2 is a structural schematic diagram of the electronic device provided by the embodiment of the present application.
Embodiment
The embodiment of the present application provides an information processing method and an electronic device, to solve the technical problem that a prior-art data acquisition device cannot judge, before collecting data, whether the user is within the optimal acquisition range, and cannot generate prompt information to prompt the user to adjust into the optimal acquisition range.
The general idea of the technical solution in the embodiment of the present application is as follows:
An information processing method is provided, applied to an electronic device comprising a data acquisition unit. The method comprises:
When a user is using the electronic device, detecting a first position state that characterizes the current position relationship between the user and the data acquisition unit; judging whether the first position state meets a preset position state, to obtain a first judgment result; and, when the first judgment result shows that the first position state does not meet the preset position state, generating and outputting first prompt information for prompting the user to adjust the position relationship.
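The detect/judge/prompt flow above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the names (`PositionState`, `meets_preset`, `process`) and the preset thresholds are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PositionState:
    distance_cm: float  # first distance state
    angle_deg: float    # first angle state

# Assumed preset position state: an acceptable acquisition range.
PRESET_MAX_DISTANCE_CM = 50.0
PRESET_MAX_ANGLE_DEG = 30.0

def meets_preset(state: PositionState) -> bool:
    """Judge whether the first position state meets the preset position state."""
    return (state.distance_cm <= PRESET_MAX_DISTANCE_CM
            and abs(state.angle_deg) <= PRESET_MAX_ANGLE_DEG)

def process(state: PositionState) -> Optional[str]:
    """Return first prompt information when the preset state is not met, else None."""
    if meets_preset(state):
        return None
    return "Please adjust your position relative to the data acquisition unit."
```

Within the preset range no prompt is produced; outside it, the first prompt information is generated for output.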
It can be seen that the embodiment of the present application detects the position state between the user and the data acquisition unit while the user is using the electronic device, judges whether this position state meets a preset position state, and generates and outputs prompt information only when it does not, prompting the user to adjust the position state relative to the data acquisition unit. This solves the technical problems of the prior art identified above: the data acquisition device automatically detects its position state relative to the user and, when that state does not meet the preset state, prompts the user to adjust it.
For a better understanding of the above technical solution, it is described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments of the present application are detailed explanations of the technical solution, not limitations of it; where there is no conflict, the technical features in the embodiments may be combined with one another.
The electronic device to which the information processing method of the embodiment applies mainly refers to an electronic device with a data acquisition unit, or simply a data acquisition device itself, where the data acquisition unit or device is one that can collect data related to the user: for example, the fingerprint recognizer in a fingerprint collection device, the camera in an image-recording device, or the microphone in a sound-recording device. The method provided in the embodiment may be applied in electronic devices such as fingerprint collection devices, camera devices, and sound-recording devices, or directly in data collectors such as fingerprint recognizers, cameras, and microphones. The following embodiments take application in an electronic device as an example.
As shown in Fig. 1, the information processing method provided by the embodiment of the present application comprises the following steps:
S1: when a user is using the electronic device, detecting a first position state that characterizes the current position relationship between the user and the data acquisition unit;
Further, before step S1, the method also comprises:
When the user is using the electronic device, detecting whether the data acquisition unit is in use; wherein, when the data acquisition unit is in use, the step of detecting the first position state that characterizes the current position relationship between the user and the data acquisition unit is performed.
In the embodiment of the present application, if the electronic device is itself a data acquisition unit, then as soon as the user uses it, the system detects the position relationship between the user and the electronic device (that is, the data acquisition unit) and characterizes this relationship with the first position state. If instead the data acquisition unit is installed in or connected to the electronic device, then before detecting the position relationship between the user and the data acquisition unit, a detection step is added to check whether the data acquisition unit has been switched on for use; only if it has does the system perform step S1 and detect the first position state. For example, when a smartphone is provided with a fingerprint recognizer, and the fingerprint recognizer adopts the method of prompting the user to adjust the position relationship provided in the embodiment of the present application, the system does not detect the position relationship between the user and the fingerprint recognizer merely because the user is using the phone. Only after the system detects that the fingerprint recognition module has been opened does it detect the position relationship between the user's finger and the fingerprint recognizer, obtaining the first position state; when the first position state does not meet the preset position state, it generates prompt information prompting the user to adjust the position of the finger.
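The optional pre-step (checking whether the data acquisition unit is in use before detecting) amounts to a simple gate before step S1. The sketch below is an assumed illustration; `maybe_detect` and `detect_fn` are hypothetical names, not identifiers from the patent.

```python
def maybe_detect(unit_in_use: bool, detect_fn):
    """Perform the position detection of step S1 only when the data
    acquisition unit has been detected to be switched on for use."""
    if not unit_in_use:
        return None  # unit not opened: do not detect the position state
    return detect_fn()
```

For the fingerprint example above, `detect_fn` would stand in for detecting the finger's position state relative to the recognizer.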
S2: judging whether the first position state meets a preset position state, to obtain a first judgment result;
In a specific implementation, as in the fingerprint recognizer example of step S1, after the first position state characterizing the position relationship between the user and the data acquisition unit is obtained in step S1, step S2 judges whether it meets the preset position state and obtains the first judgment result. It should be noted that the preset position state refers to the ideal position relationship between the user and the data acquisition unit when the unit collects the user's data. It can be set by the designer, or defined by the user during use: for example, the designer may set, for a fingerprint collection unit, the position state in which the most complete fingerprint information can be collected; or the user, when using a camera, may define that the preset position state is the position state between the user and the camera in which at least two-thirds of the user's face is captured.
S3: when the first judgment result shows that the first position state does not meet the preset position state, generating and outputting first prompt information for prompting the user to adjust the position relationship.
In a specific implementation, after step S1 obtains the first position state characterizing the position relationship between the user and the data acquisition unit, and step S2 obtains the first judgment result, if the first judgment result shows that the first position state does not meet the preset position state, step S3 is performed: generating and outputting the first prompt information, which prompts the user to adjust the position relationship with the data acquisition unit.
Further, the first prompt information may be vibration prompt information, sound prompt information, or light prompt information.
Further, generating and outputting the first prompt information for prompting the user to adjust the position relationship is: generating, by a multi-directional force feedback device, vibration prompt information for prompting the user to adjust the position relationship, and outputting the vibration prompt information; or generating, by a sound prompt device, sound prompt information for prompting the user to adjust the position relationship, and outputting the sound prompt information; or generating, by a light prompt device, light prompt information for prompting the user to adjust the position relationship, and outputting the light prompt information.
In a specific implementation, the embodiment of the present application does not restrict the type of the first prompt information: it may be vibration prompt information, sound prompt information, light prompt information, or text prompt information. Nor is the device that generates and outputs the prompt information limited. Vibration prompt information can be realized by a multi-directional force feedback device: for example, in a camera, when the detected first position state shows the user's face deviating leftward from the preset position state, the device can vibrate to the right, prompting the user to adjust rightward. Sound prompt information can be sent to the user through a sound prompt device (loudspeaker); light prompt information can be generated and output by changing a light on the electronic device; or text can be shown on the display to prompt the corresponding adjustment. Those skilled in the art can choose the prompt mode according to the requirements of the design; the embodiment of the present application does not limit this.
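The choice among the prompt modalities above can be sketched as a simple dispatch. The returned strings are stand-ins (assumed names, not the patent's interfaces) for driving the multi-directional force feedback device, sound prompt device, light prompt device, or display.

```python
def emit_prompt(kind: str, direction: str = "right") -> str:
    """Generate the first prompt information in the requested modality."""
    if kind == "vibration":
        # e.g. the face deviates leftward, so vibrate rightward to prompt
        # the user to adjust to the right
        return f"force-feedback: vibrate {direction}"
    if kind == "sound":
        return "loudspeaker: please adjust your position"
    if kind == "light":
        return "light: blink to prompt adjustment"
    if kind == "text":
        return "display: please adjust your position"
    raise ValueError(f"unknown prompt type: {kind}")
```

A real device would replace each branch with a call into the corresponding output hardware; the dispatch structure is the point of the sketch.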
It can be seen that the embodiment of the present application detects the position state between the user and the data acquisition unit while the user is using it, judges whether that position state meets a preset position state, and generates and outputs prompt information only when it does not, prompting the user to adjust the position state relative to the data acquisition unit. This solves the technical problems that a prior-art data acquisition device cannot judge, before collecting data, whether the user is within the optimal acquisition range, and cannot generate prompt information prompting the user to adjust into that range; the data acquisition device automatically detects its position state relative to the user and, when that state does not meet the preset state, prompts the user to adjust it.
The embodiment of the present application provides at least two implementations of the information processing method: the first with the data acquisition unit as a sound collection unit, and the second with the data acquisition unit as an image collection unit.
Further, when the data acquisition unit is at least one sound collection unit, the first position state is a first distance state and/or a first angle state.
In a specific implementation, when the data acquisition unit is at least one sound collection unit (illustrated here directly with a microphone), the position relationship between the user and the microphone is the distance relationship and/or angle relationship between them, called in the embodiment the first distance state and/or the first angle state.
Further, step S1 comprises: receiving a first sound of the user whose sound intensity is a first sound intensity; and, based on the first sound intensity, detecting the first distance state and/or the first angle state.
In a specific implementation, when the data acquisition unit is a microphone, the first distance state and/or the first angle state are detected mainly from the user's sound intensity. Step S1 is therefore performed as: receiving the first sound of the user whose sound intensity is the first sound intensity; then detecting, based on the first sound intensity, the first distance state and/or the first angle state between the user and the microphone. For example, when the user is using the microphone and it receives the user's first sound, the relative distance and/or relative angle between the user and the microphone are detected according to the first sound intensity.
Have to only have a microphone and at least two microphones to set forth the implementation being come detecting distance and/angle by sound for example below:
Further, when the data acquisition unit is specifically one sound collection unit, detecting and obtaining the first distance state based on the first sound intensity is specifically:
based on the first sound intensity, obtaining the first distance state for characterizing the distance between the user and the one sound collection unit.
In a specific implementation, when there is only one microphone, the position relationship between the user and the microphone is judged mainly from the distance between them, specifically by comparing the received sound intensity value against a standard. Although different users speak at different volumes, the system is configured with a standard volume value: whenever the obtained sound intensity value is less than this standard value, the system concludes that the distance between the user and the microphone is greater than a preset distance, and prompts the user to move closer to the microphone. Of course, after receiving this prompt the user may either move closer or simply raise the decibel level of his or her voice; any adjustment made in response to the prompt information output by the system will do. Besides sound intensity, the distance between the user and the microphone can also be detected in other ways, for example by locating the user's face with a camera and then calculating the distance from the microphone to the face with a range sensor.
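The single-microphone comparison described above can be sketched as a simple threshold check. The standard volume value and the prompt wording are illustrative assumptions, not values taken from this application.

```python
STANDARD_DB = 55.0  # assumed system-standard volume value (illustrative)

def distance_prompt(measured_db: float, standard_db: float = STANDARD_DB):
    """Return prompt text when the received sound level falls below the
    standard value (i.e. the user is judged to be beyond the preset
    distance), or None when no adjustment is needed."""
    if measured_db < standard_db:
        return "Please move closer to the microphone, or speak louder."
    return None
```

Either user response, moving closer or speaking louder, raises the measured level above the standard value on the next check.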
Further, when the data acquisition unit is specifically a sound collection unit array comprising M sound collection units, M being an integer greater than or equal to 2, detecting and obtaining the first distance state and/or the first angle state based on the first sound intensity specifically comprises:
based on the first sound intensity, obtaining the reception time at which each sound collection unit in the sound collection unit array receives the first sound, obtaining M reception times;
calculating the time differences among the M reception times, obtaining M-1 time differences;
based on the M-1 time differences, obtaining the first distance state for characterizing the distance between the user and the M sound collection units and/or the first angle state for characterizing the angle between the user and the M sound collection units.
In a specific implementation, a group of microphones (at least two) has a best sound pickup region, which in general lies in front of the middle of the group. In this case the position relationship between the user and the microphones can be judged not only from the distance between the user and the microphones but also from the angle between the user and the center of the group. A microphone array can be used for the detection: when the user speaks, the system obtains the reception time at which each microphone in the group receives the first sound; M microphones yield M reception times. It then calculates the M-1 time differences among these reception times and finally localizes the sound source from them, obtaining the distance from the user to the center of the group and the angle by which the user deviates from it. The user is then prompted to make the corresponding adjustment so as to move into the best sound pickup region of the group.
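A minimal sketch of the array step above: form the M-1 time differences relative to the first microphone, then estimate a direction of arrival for a microphone pair. The far-field model (`sin θ = c·Δt / d`) and the speed-of-sound constant are standard acoustics assumptions, not details given in this application.

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at ~20 °C (assumed)

def time_differences(arrival_times_s):
    """M reception times -> M-1 time differences, relative to the first mic."""
    first = arrival_times_s[0]
    return [t - first for t in arrival_times_s[1:]]

def arrival_angle_deg(dt_s, mic_spacing_m):
    """Far-field direction of arrival for one microphone pair:
    sin(theta) = c * dt / d, with theta measured from the broadside of the
    pair (theta = 0 means the user is centred between the microphones)."""
    s = max(-1.0, min(1.0, SPEED_OF_SOUND_M_S * dt_s / mic_spacing_m))
    return math.degrees(math.asin(s))
```

With more than two microphones, the full set of M-1 differences over-determines the source position, which is what also allows the distance (not just the angle) to be recovered.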
Further, step S2 is specifically:
judging whether the first distance state meets a preset distance state, and/or
judging whether the first angle state meets a preset angle state, obtaining the first judgment result;
wherein, when the first distance state does not meet the preset distance state, and/or the first angle state does not meet the preset angle state, the first judgment result indicates that the first position state does not meet the preset position state.
In a specific implementation, when the data acquisition unit is specifically a microphone, it follows from the above that step S2 obtains the first judgment result as follows: if the obtained first distance state does not meet the preset distance state, or the obtained first angle state does not meet the preset angle state — that is, if either of these two conditions holds — then the first judgment result indicates that the first position state between the user and the microphone does not meet the preset position state, and step S3 is then performed.
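The "either condition fails" judgment of step S2 can be sketched as below. The preset distance and angle limits are illustrative assumptions; either state may be checked on its own, matching the "and/or" in the claim language.

```python
PRESET_MAX_DISTANCE_M = 0.5   # assumed preset distance state (illustrative)
PRESET_MAX_ANGLE_DEG = 30.0   # assumed preset angle state (illustrative)

def position_state_ok(distance_m=None, angle_deg=None,
                      max_distance_m=PRESET_MAX_DISTANCE_M,
                      max_angle_deg=PRESET_MAX_ANGLE_DEG) -> bool:
    """First judgment result: False as soon as any supplied state fails
    its preset; states passed as None are simply not checked."""
    if distance_m is not None and distance_m > max_distance_m:
        return False
    if angle_deg is not None and abs(angle_deg) > max_angle_deg:
        return False
    return True
```

A `False` result corresponds to proceeding to step S3, i.e. generating and outputting the first prompt information.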
It can be seen that in the embodiments of the present application, after the microphone obtains the user's sound, the distance and deviation angle between the user and the microphone are detected from the sound intensity, and it is then judged whether the user's current position state is the preset position state; if not, prompt information is generated and output to prompt the user to adjust the position relationship with the microphone. This solves the technical problem that an existing microphone cannot detect its position relationship with the user from the user's sound, and achieves the technical effect of prompting the user, according to that position relationship, to make the corresponding adjustment.
The following describes the second embodiment provided in the embodiments of the present application, in which the data acquisition unit is specifically an image acquisition unit:
Further, when the data acquisition unit is specifically an image acquisition unit, step S1 specifically comprises: detecting whether the user's face is within the acquisition range of the image acquisition unit; and, when the face is present, detecting and obtaining the first position state of the user's face within the acquisition range.
Further, detecting and obtaining the first position state of the user's face within the acquisition range when the face is present specifically comprises: when the face is present, detecting and obtaining a first light intensity on the face; and, based on the first light intensity, detecting and obtaining the first position state of the face within the acquisition range.
In a specific implementation, a camera is used as the example of an image acquisition unit. The earlier example mentioned that the preset position state can be user-defined, for instance requiring that at least 2/3 of the user's face appear in the image; another implementation is provided here, in which the user's position state is detected from the light intensity on the user's face. When the data acquisition unit is a camera, step S1 is performed as follows: first detect whether the user's face is within the acquisition range of the camera; when the face is present, detect and obtain the first light intensity on the face, and judge the position state of the user's face from the first light intensity. Here, similarly to the microphone detection process, a light intensity is preset; when the detected first light intensity on the user's face does not reach the preset light intensity value, the user is prompted to adjust the position state. Of course, the embodiments provided in the present application may be enabled or disabled by the user; for example, when taking a photo, the user may choose to turn the detection function off and not adjust position according to the system's prompt.
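The face-brightness comparison above can be sketched as follows. Face detection itself is assumed to be provided by an external detector (e.g. a typical detector returning an `(x, y, w, h)` box); the preset light level and the use of mean grayscale brightness as the "light intensity" are assumptions of this sketch.

```python
PRESET_LIGHT_LEVEL = 80.0  # assumed minimum mean brightness (0-255 grayscale)

def face_light_ok(gray_frame, face_box, preset=PRESET_LIGHT_LEVEL) -> bool:
    """Compare the mean brightness of the detected face region with the
    preset light intensity value.

    gray_frame: rows of grayscale pixel values (list of lists);
    face_box: (x, y, w, h) as a typical face detector would return it.
    """
    x, y, w, h = face_box
    pixels = [p for row in gray_frame[y:y + h] for p in row[x:x + w]]
    return sum(pixels) / len(pixels) >= preset
```

When this returns `False`, the device would output the first prompt information so the user can move toward better lighting.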
Based on the same inventive concept, the embodiments of the present application also provide an electronic device. The electronic device comprises a data acquisition unit and, as shown in Figure 2, further comprises:
a detection unit 10 for, when a user is using the electronic device, detecting and obtaining a first position state for characterizing the current position relationship between the user and the data acquisition unit;
a judgment unit 20 for judging whether the first position state meets a preset position state, obtaining a first judgment result; and
a generation unit 30 for, when the first judgment result indicates that the first position state does not meet the preset position state, generating and outputting first prompt information for prompting the user to adjust the position relationship.
Further, when the data acquisition unit is specifically at least one sound collection unit, the first position state is specifically the first distance state and/or the first angle state.
Further, the detection unit 10 specifically comprises: a sound reception unit for receiving a first sound of the user whose sound intensity is a first sound intensity; and a first detection subunit for detecting and obtaining the first distance state and/or the first angle state based on the first sound intensity.
Further, when the data acquisition unit is specifically one sound collection unit, the first detection subunit is specifically for: obtaining, based on the first sound intensity, the first distance state for characterizing the distance between the user and the one sound collection unit.
Further, when the data acquisition unit is specifically a sound collection unit array comprising M sound collection units, M being an integer greater than or equal to 2, the first detection subunit specifically comprises: a time reception unit for obtaining, based on the first sound intensity, the reception time at which each sound collection unit in the array receives the first sound, obtaining M reception times; a calculation unit for calculating the time differences among the M reception times, obtaining M-1 time differences; and a first obtaining subunit for obtaining, based on the M-1 time differences, the first distance state for characterizing the distance between the user and the M sound collection units and/or the first angle state for characterizing the angle between the user and the M sound collection units.
Further, the judgment unit 20 is specifically for: judging whether the first distance state meets a preset distance state, and/or judging whether the first angle state meets a preset angle state, obtaining the first judgment result; wherein, when the first distance state does not meet the preset distance state, and/or the first angle state does not meet the preset angle state, the first judgment result indicates that the first position state does not meet the preset position state.
Further, when the data acquisition unit is specifically an image acquisition unit, the detection unit 10 specifically comprises: a face detection unit for detecting whether the user's face is within the acquisition range of the image acquisition unit; and a second detection subunit for, when the face is present, detecting and obtaining the first position state of the user's face within the acquisition range.
Further, the second detection subunit specifically comprises: a light reception unit for detecting and obtaining, when the face is present, a first light intensity on the face; and a second obtaining subunit for detecting and obtaining, based on the first light intensity, the first position state of the face within the acquisition range.
Further, the first prompt information may specifically be vibration prompt information, sound prompt information, or light prompt information.
Further, the generation unit 30 is specifically: a multi-directional force feedback device for generating vibration prompt information that prompts the user to adjust the position relationship, and outputting the vibration prompt information; or a sound prompt device for generating sound prompt information that prompts the user to adjust the position relationship, and outputting the sound prompt information; or a light prompt device for generating light prompt information that prompts the user to adjust the position relationship, and outputting the light prompt information.
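The three alternative prompt forms can be dispatched as below. The device drivers are stubbed as formatted strings purely for illustration; the names and message wording are assumptions, not part of this application.

```python
def emit_prompt(kind: str, direction: str) -> str:
    """Route one prompt message to one of the three (stubbed) prompt
    devices: vibration (force feedback), sound, or light."""
    message = f"Please adjust your position: move {direction}."
    if kind == "vibration":
        return f"[force-feedback] {message}"
    if kind == "sound":
        return f"[speaker] {message}"
    if kind == "light":
        return f"[indicator-light] {message}"
    raise ValueError(f"unknown prompt type: {kind}")
```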
The one or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
(1) In the embodiments of the present application, when a user is using the data acquisition unit of an electronic device, the position state between the user and the data acquisition unit is detected and it is judged whether that position state meets a preset position state; when it does not, prompt information is generated and output to prompt the user to adjust the position state relative to the data acquisition unit. This solves the technical problems that data acquisition devices in the prior art cannot judge, before acquiring data, whether the user is within the best acquisition range, and cannot generate prompt information to guide the user into the best acquisition range. It achieves the technical effect that, when the user uses the data acquisition device, the device automatically detects the position state of the user and, when that position state does not meet the preset state, prompts the user to adjust it.
(2) In the embodiments of the present application, after the microphone obtains the user's sound, the distance and deviation angle between the user and the microphone are detected from the sound intensity, and it is then judged whether the user's current position state is the preset position state; if not, prompt information is generated and output to prompt the user to adjust the position relationship with the microphone. This solves the technical problem that an existing microphone cannot detect its position relationship with the user from the user's sound, and achieves the technical effect of generating prompt information according to the user's position relationship with the microphone and prompting the user to make the corresponding adjustment.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (21)

1. An information processing method, applied to an electronic device, the electronic device comprising a data acquisition unit, the method comprising:
when a user is using the electronic device, detecting and obtaining a first position state for characterizing a current position relationship between the user and the data acquisition unit;
judging whether the first position state meets a preset position state, obtaining a first judgment result; and
when the first judgment result indicates that the first position state does not meet the preset position state, generating and outputting first prompt information for prompting the user to adjust the position relationship.
2. The method of claim 1, characterized in that, when the data acquisition unit is specifically at least one sound collection unit, the first position state is specifically a first distance state and/or a first angle state.
3. The method of claim 2, characterized in that detecting and obtaining the first position state for characterizing the current position relationship between the user and the data acquisition unit specifically comprises:
receiving a first sound of the user whose sound intensity is a first sound intensity; and
based on the first sound intensity, detecting and obtaining the first distance state and/or the first angle state.
4. The method of claim 3, characterized in that, when the data acquisition unit is specifically one sound collection unit, detecting and obtaining the first distance state based on the first sound intensity is specifically:
based on the first sound intensity, obtaining the first distance state for characterizing the distance between the user and the one sound collection unit.
5. The method of claim 3, characterized in that, when the data acquisition unit is specifically a sound collection unit array comprising M sound collection units, M being an integer greater than or equal to 2, detecting and obtaining the first distance state and/or the first angle state based on the first sound intensity specifically comprises:
based on the first sound intensity, obtaining the reception time at which each sound collection unit in the array receives the first sound, obtaining M reception times;
calculating the time differences among the M reception times, obtaining M-1 time differences; and
based on the M-1 time differences, obtaining the first distance state for characterizing the distance between the user and the M sound collection units and/or the first angle state for characterizing the angle between the user and the M sound collection units.
6. The method of claim 2, characterized in that judging whether the first position state meets a preset position state, obtaining a first judgment result, is specifically:
judging whether the first distance state meets a preset distance state, and/or
judging whether the first angle state meets a preset angle state, obtaining the first judgment result;
wherein, when the first distance state does not meet the preset distance state, and/or the first angle state does not meet the preset angle state, the first judgment result indicates that the first position state does not meet the preset position state.
7. The method of claim 1, characterized in that, when the data acquisition unit is specifically an image acquisition unit, detecting and obtaining the first position state for characterizing the current position relationship between the user and the data acquisition unit specifically comprises:
detecting whether the user's face is within the acquisition range of the image acquisition unit; and
when the face is present, detecting and obtaining the first position state of the user's face within the acquisition range.
8. The method of claim 7, characterized in that detecting and obtaining the first position state of the user's face within the acquisition range when the face is present specifically comprises:
when the face is present, detecting and obtaining a first light intensity on the face; and
based on the first light intensity, detecting and obtaining the first position state of the face within the acquisition range.
9. The method of claim 1, characterized in that the first prompt information may specifically be vibration prompt information, sound prompt information, or light prompt information.
10. The method of claim 9, characterized in that generating and outputting the first prompt information for prompting the user to adjust the position relationship is specifically:
generating, by a multi-directional force feedback device, vibration prompt information for prompting the user to adjust the position relationship, and outputting the vibration prompt information; or
generating, by a sound prompt device, sound prompt information for prompting the user to adjust the position relationship, and outputting the sound prompt information; or
generating, by a light prompt device, light prompt information for prompting the user to adjust the position relationship, and outputting the light prompt information.
11. The method of claim 1, characterized in that, before detecting and obtaining the first position state for characterizing the current position relationship between the user and the data acquisition unit when a user is using the electronic device, the method further comprises:
when the user is using the electronic device, detecting whether the data acquisition unit is in a use state;
wherein, when the data acquisition unit is in the use state, the step of detecting and obtaining the first position state for characterizing the current position relationship between the user and the data acquisition unit is performed.
12. An electronic device comprising a data acquisition unit, the electronic device further comprising:
a detection unit for, when a user is using the electronic device, detecting and obtaining a first position state for characterizing a current position relationship between the user and the data acquisition unit;
a judgment unit for judging whether the first position state meets a preset position state, obtaining a first judgment result; and
a generation unit for, when the first judgment result indicates that the first position state does not meet the preset position state, generating and outputting first prompt information for prompting the user to adjust the position relationship.
13. The electronic device of claim 12, characterized in that, when the data acquisition unit is specifically at least one sound collection unit, the first position state is specifically a first distance state and/or a first angle state.
14. The electronic device of claim 13, characterized in that the detection unit specifically comprises:
a sound reception unit for receiving a first sound of the user whose sound intensity is a first sound intensity; and
a first detection subunit for detecting and obtaining the first distance state and/or the first angle state based on the first sound intensity.
15. The electronic device of claim 14, characterized in that, when the data acquisition unit is specifically one sound collection unit, the first detection subunit is specifically for:
obtaining, based on the first sound intensity, the first distance state for characterizing the distance between the user and the one sound collection unit.
16. The electronic device of claim 14, characterized in that, when the data acquisition unit is specifically a sound collection unit array comprising M sound collection units, M being an integer greater than or equal to 2, the first detection subunit specifically comprises:
a time reception unit for obtaining, based on the first sound intensity, the reception time at which each sound collection unit in the array receives the first sound, obtaining M reception times;
a calculation unit for calculating the time differences among the M reception times, obtaining M-1 time differences; and
a first obtaining subunit for obtaining, based on the M-1 time differences, the first distance state for characterizing the distance between the user and the M sound collection units and/or the first angle state for characterizing the angle between the user and the M sound collection units.
17. The electronic device of claim 13, characterized in that the judgment unit is specifically for:
judging whether the first distance state meets a preset distance state, and/or
judging whether the first angle state meets a preset angle state, obtaining the first judgment result;
wherein, when the first distance state does not meet the preset distance state, and/or the first angle state does not meet the preset angle state, the first judgment result indicates that the first position state does not meet the preset position state.
18. The electronic device of claim 12, characterized in that, when the data acquisition unit is specifically an image acquisition unit, the detection unit specifically comprises:
a face detection unit for detecting whether the user's face is within the acquisition range of the image acquisition unit; and
a second detection subunit for, when the face is present, detecting and obtaining the first position state of the user's face within the acquisition range.
19. The electronic device of claim 18, characterized in that the second detection subunit specifically comprises:
a light reception unit for detecting and obtaining, when the face is present, a first light intensity on the face; and
a second obtaining subunit for detecting and obtaining, based on the first light intensity, the first position state of the face within the acquisition range.
20. The electronic device of claim 12, characterized in that the first prompt information may specifically be vibration prompt information, sound prompt information, or light prompt information.
21. The electronic device of claim 20, characterized in that the generation unit is specifically:
a multi-directional force feedback device for generating vibration prompt information that prompts the user to adjust the position relationship, and outputting the vibration prompt information; or
a sound prompt device for generating sound prompt information that prompts the user to adjust the position relationship, and outputting the sound prompt information; or
a light prompt device for generating light prompt information that prompts the user to adjust the position relationship, and outputting the light prompt information.
Application CN201310366996.3A, filed 2013-08-21, "Information processing method and electronic equipment" — status: pending.

Publication: CN104424073A, published 2015-03-18.

Family

ID=52973121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310366996.3A Pending CN104424073A (en) 2013-08-21 2013-08-21 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN104424073A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104810011A (en) * 2015-03-24 2015-07-29 宁波江北鑫祥音响电子有限公司 Frame of musical instrument
CN105702253A (en) * 2016-01-07 2016-06-22 北京云知声信息技术有限公司 Voice awakening method and device
CN106375606A (en) * 2016-11-14 2017-02-01 青岛海信移动通信技术股份有限公司 Method, device and system for controlling display of smart communication equipment in call state
CN107181868A (en) * 2017-06-05 2017-09-19 青岛海信移动通信技术股份有限公司 It is a kind of to control mobile terminal to go out the method and device of screen
CN107580113A (en) * 2017-08-18 2018-01-12 广东欧珀移动通信有限公司 Reminding method, device, storage medium and terminal
CN107636685A (en) * 2015-06-25 2018-01-26 英特尔公司 Automatic meta-tag in image
CN107688765A (en) * 2016-08-03 2018-02-13 北京小米移动软件有限公司 Fingerprint collecting method and device
CN107968974A (en) * 2017-12-07 2018-04-27 北京小米移动软件有限公司 Microphone control method, system, microphone and storage medium
CN108320742A (en) * 2018-01-31 2018-07-24 广东美的制冷设备有限公司 Voice interactive method, smart machine and storage medium
CN110287755A (en) * 2018-03-19 2019-09-27 广东欧珀移动通信有限公司 Information processing method and device, electronic equipment, computer readable storage medium
CN110443200A (en) * 2019-08-06 2019-11-12 北京七鑫易维信息技术有限公司 The location regulation method and device of electronic equipment
CN111128250A (en) * 2019-12-18 2020-05-08 秒针信息技术有限公司 Information processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1952684A (en) * 2005-10-20 2007-04-25 松下电器产业株式会社 Method and device for localization of sound source by microphone
CN201928341U (en) * 2010-11-22 2011-08-10 康佳集团股份有限公司 Mobile terminal capable of prompting hand-free talking distance
CN102413282A (en) * 2011-10-26 2012-04-11 惠州Tcl移动通信有限公司 Self-shooting guidance method and equipment
CN102722250A (en) * 2012-06-07 2012-10-10 何潇 Method and system for interactive editing of image control points
US20120328137A1 (en) * 2011-06-09 2012-12-27 Miyazawa Yusuke Sound control apparatus, program, and control method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1952684A (en) * 2005-10-20 2007-04-25 松下电器产业株式会社 Method and device for localization of sound source by microphone
CN201928341U (en) * 2010-11-22 2011-08-10 康佳集团股份有限公司 Mobile terminal capable of prompting hand-free talking distance
US20120328137A1 (en) * 2011-06-09 2012-12-27 Miyazawa Yusuke Sound control apparatus, program, and control method
CN102413282A (en) * 2011-10-26 2012-04-11 惠州Tcl移动通信有限公司 Self-shooting guidance method and equipment
CN102722250A (en) * 2012-06-07 2012-10-10 何潇 Method and system for interactive editing of image control points

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104810011B (en) * 2015-03-24 2018-06-08 宁波江北鑫祥音响电子有限公司 A kind of music stand
CN104810011A (en) * 2015-03-24 2015-07-29 宁波江北鑫祥音响电子有限公司 Frame of musical instrument
CN107636685A (en) * 2015-06-25 2018-01-26 英特尔公司 Automatic meta-tag in image
CN105702253A (en) * 2016-01-07 2016-06-22 北京云知声信息技术有限公司 Voice awakening method and device
CN107688765A (en) * 2016-08-03 2018-02-13 北京小米移动软件有限公司 Fingerprint collecting method and device
CN106375606A (en) * 2016-11-14 2017-02-01 青岛海信移动通信技术股份有限公司 Method, device and system for controlling display of smart communication equipment in call state
CN107181868A (en) * 2017-06-05 2017-09-19 青岛海信移动通信技术股份有限公司 It is a kind of to control mobile terminal to go out the method and device of screen
CN107181868B (en) * 2017-06-05 2020-02-07 青岛海信移动通信技术股份有限公司 Method and device for controlling screen turn-off of mobile terminal
CN107580113A (en) * 2017-08-18 2018-01-12 广东欧珀移动通信有限公司 Reminding method, device, storage medium and terminal
CN107580113B (en) * 2017-08-18 2019-09-24 Oppo广东移动通信有限公司 Reminding method, device, storage medium and terminal
CN107968974A (en) * 2017-12-07 2018-04-27 北京小米移动软件有限公司 Microphone control method, system, microphone and storage medium
CN108320742A (en) * 2018-01-31 2018-07-24 广东美的制冷设备有限公司 Voice interactive method, smart machine and storage medium
CN108320742B (en) * 2018-01-31 2021-09-14 广东美的制冷设备有限公司 Voice interaction method, intelligent device and storage medium
CN110287755A (en) * 2018-03-19 2019-09-27 广东欧珀移动通信有限公司 Information processing method and device, electronic equipment, computer readable storage medium
CN110443200A (en) * 2019-08-06 2019-11-12 北京七鑫易维信息技术有限公司 The location regulation method and device of electronic equipment
CN110443200B (en) * 2019-08-06 2022-02-01 北京七鑫易维信息技术有限公司 Position adjusting method and device of electronic equipment
CN111128250A (en) * 2019-12-18 2020-05-08 秒针信息技术有限公司 Information processing method and device

Similar Documents

Publication Publication Date Title
CN104424073A (en) Information processing method and electronic equipment
CN111443884A (en) Screen projection method and device and electronic equipment
CN109032039B (en) Voice control method and device
US20200167581A1 (en) Anti-counterfeiting processing method and related products
CN110556127B (en) Method, device, equipment and medium for detecting voice recognition result
CN112633306B (en) Method and device for generating countermeasure image
US9854439B2 (en) Device and method for authenticating a user of a voice user interface and selectively managing incoming communications
CN110839128B (en) Photographing behavior detection method and device and storage medium
CN109151642B (en) Intelligent earphone, intelligent earphone processing method, electronic device and storage medium
KR101006368B1 (en) Apparatus and method for controlling camera of portable terminal
CN104935698A (en) Photographing method of smart terminal, photographing device and smart phone
CN108156374A (en) Image processing method, terminal and readable storage medium
CN111370025A (en) Audio recognition method and device and computer storage medium
CN109561255B (en) Terminal photographing method and device and storage medium
CN109961802B (en) Sound quality comparison method, device, electronic equipment and storage medium
CN107911563B (en) Image processing method and mobile terminal
EP4040332A1 (en) Method and apparatus for upgrading an intelligent model and non-transitory computer readable storage medium
CN114666433A (en) Howling processing method and device in terminal device, and terminal
CN111401283A (en) Face recognition method and device, electronic equipment and storage medium
CN108446665B (en) Face recognition method and mobile terminal
CN111428080A (en) Storage method, search method and device for video files
CN105468196A (en) Photographing device and method
CN104200817A (en) Speech control method and system
CN112329909B (en) Method, apparatus and storage medium for generating neural network model
JP2023522908A (en) Information processing method and electronic device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150318