WO2015100923A1 - User information acquisition method and mobile terminal (用户信息获取方法及移动终端) - Google Patents

User information acquisition method and mobile terminal

Info

Publication number
WO2015100923A1
WO2015100923A1 (PCT/CN2014/078089)
Authority
WO
WIPO (PCT)
Prior art keywords
user
information
mobile terminal
voice
module
Prior art date
Application number
PCT/CN2014/078089
Other languages
English (en)
French (fr)
Inventor
张凡
陈卓
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2015100923A1 publication Critical patent/WO2015100923A1/zh


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M2250/00: Details of telephonic subscriber devices
    • H04M2250/52: Details of telephonic subscriber devices including functional features of a camera

Definitions

  • the present invention relates to the field of communications, and in particular, to a method for acquiring user information and a mobile terminal.
  • Virtual human technology, that is, human digitalization, is one of the most sought-after applications of information technology in the context of big data technology. It is based on collecting a large volume of human body feature data, extracting key human body features through data processing and computation, and realizing a digital person through artificial intelligence and equipment based on a calculation model. The virtual person's actions, expressions, and language are rendered by display technologies, and these features closely resemble the corresponding features of the collected real person. Based on this technology, the virtual person can communicate autonomously with other real or virtual people.
  • in the related art, human body characteristic data is mainly collected by special collection devices in special experimental environments; the collected data covers a narrow range and is poorly representative, resulting in poor accuracy of the finally obtained algorithm model.
  • the present invention provides a user information acquisition method and a mobile terminal, which are used to solve the problem in the related art that human body feature data collected by special collection devices covers a narrow range and is poorly representative, resulting in poor accuracy of the finally obtained algorithm model.
  • the present invention provides a method for acquiring user information, including: a mobile terminal collects user information through its own data collection module, where the user information includes user feature information and/or user motion information; and the mobile terminal sends the collected user information to the background computing platform.
  • when the user information includes the user feature information, the user feature information includes user voice information and/or user face information.
  • the mobile terminal collecting the user voice information by using a voice data collection sub-module of the data collection module includes: the mobile terminal determines whether the voice collection mode is currently entered, and if yes, starts the voice data collection sub-module to collect the user voice information.
  • the mobile terminal determining whether the voice collection mode is entered includes determining whether the mobile terminal is in a call state and/or in an externally placed state; or, the mobile terminal receives an external voice collection instruction, and starts the voice data collection sub-module to collect the user voice information according to the external voice collection instruction.
  • the mobile terminal collecting the user face information by using an image data collection sub-module of the data collection module includes: the mobile terminal determines whether its display screen is currently facing the user, and if so, starts the image data collection sub-module to collect the user face information; or, the mobile terminal determines whether its display screen is currently facing the user and whether the display screen is currently lit, and if both, starts the image data collection sub-module to collect the user face information; or, the mobile terminal receives an external image collection instruction, and starts the image data collection sub-module according to the external image collection instruction.
  • the mobile terminal starting the voice data collection sub-module to collect the user voice information includes: the mobile terminal compares at least one piece of user voice information collected by the voice data collection sub-module with pre-stored user voice binding information to determine whether the two match, and if so, stores the user voice information.
  • before the mobile terminal compares the user voice information with the user voice binding information, the method further includes: determining whether the user voice information includes valid vocal data.
  • the mobile terminal starting the image data collection sub-module to collect the user image information includes: the mobile terminal compares at least one piece of user image information collected by the image data collection sub-module with pre-stored user image binding information to determine whether the two match, and if so, stores the user image information.
  • when the user information includes the user feature information, the method further includes: the mobile terminal determining whether the current user is the user bound to the mobile terminal.
  • the user motion information includes user walking speed information and/or user path information.
  • the sending, by the mobile terminal, the collected user information to the background computing platform includes: when the user information includes the user feature information, extracting the feature component data included in the user feature information; and sending the extracted feature component data to the background computing platform.
  • the sending the extracted feature component data to the background computing platform comprises: storing the extracted feature component data in a data transmission queue; determining according to a current network environment a sending rule of data in the data transmission queue; transmitting data in the data transmission queue to the background computing platform according to the rule.
  • the present invention provides a mobile terminal, including a processing module, a data collection module, and a sending module.
  • the processing module is configured to control the data collection module to collect user information, and to send the user information to the background computing platform through the sending module, where the user information includes user feature information and/or user motion information.
  • when the user information includes the user feature information, the user feature information includes user voice information and/or user face information.
  • the data collection module includes a voice data collection sub-module; when the user feature information includes the user voice information, the processing module controlling the data collection module to collect the user voice information includes: the processing module determines whether the mobile terminal currently enters the voice collection mode, and if so, starts the voice data collection sub-module to collect the user voice information, where determining whether the mobile terminal enters the voice collection mode includes determining whether the mobile terminal is in a call state and/or in an externally placed state; or, the processing module receives an external voice collection instruction, and starts the voice data collection sub-module to collect the user voice information according to the external voice collection instruction.
  • the data collection module includes an image data collection sub-module; when the user feature information includes the user face information, the processing module controlling the data collection module to collect the user face information includes: the processing module determines whether the display screen of the mobile terminal is currently facing the user, and if so, starts the image data collection sub-module to collect the user face information; or, the processing module determines whether the display screen of the mobile terminal is currently facing the user and whether the display screen is currently lit, and if both, starts the image data collection sub-module to collect the user face information; or, the processing module receives an external image collection instruction, and starts the image data collection sub-module to collect the user face information according to the external image collection instruction.
  • the processing module starting the voice data collection sub-module to collect the user voice information includes: the processing module compares at least one piece of user voice information collected by the voice data collection sub-module with pre-stored user voice binding information to determine whether the two match, and if so, stores the user voice information. In an embodiment of the present invention, the processing module is further configured to determine whether the user voice information includes valid vocal data before comparing the user voice information with the user voice binding information.
  • the processing module starting the image data collection sub-module to collect the user image information includes: the processing module compares at least one piece of user image information collected by the image data collection sub-module with pre-stored user image binding information to determine whether the two match, and if so, stores the user image information.
  • the processing module is further configured to, when the user information includes the user feature information, determine whether the current user is the user bound to the mobile terminal before controlling the data collection module to collect the user feature information.
  • the user motion information includes user walking speed information and/or user path information.
  • the processing module sending the user information to the background computing platform includes: when the processing module determines that the user information includes the user feature information, extracting the feature component data included in the user feature information, and sending the extracted feature component data to the background computing platform through the sending module.
  • the user information acquisition method and the mobile terminal provided by the present invention collect user information including user feature information and/or user motion information through the data collection module of the mobile terminal itself, and send the collected user information to the background computing platform (for example, a cloud computing platform in the background), which processes the user information to form an effective human body feature database.
  • FIG. 1 is a schematic diagram of a communication system according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a user information acquisition process according to an embodiment of the present invention
  • FIG. 3 is a flowchart of collecting user voice information in an automatic mode according to an embodiment of the present invention
  • FIG. 4 is a schematic flowchart of collecting user voice information in a manual mode according to an embodiment of the present invention
  • FIG. 5 is a schematic flowchart of collecting user image information in an automatic mode according to an embodiment of the present invention
  • FIG. 6 is a schematic flowchart of a process for collecting user image information in an automatic mode according to Embodiment 1 of the present invention
  • FIG. 7 is a schematic flowchart of collecting user image information in a manual mode according to Embodiment 1 of the present invention
  • FIG. 8 is a schematic structural diagram of a mobile terminal according to Embodiment 2 of the present invention.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • Embodiment 1: At present, most users carry various mobile terminals, such as smart phones, tablets (such as iPads), e-book readers, and the like, at all times. Most of these mobile terminals are equipped with data collection modules, which generally include a voice data collection sub-module (such as a microphone), an image data collection sub-module (such as a camera), a speed collection sub-module (such as an acceleration sensor), a timing sub-module (such as a timer), an orientation collection sub-module (such as a geomagnetic sensor), and so on.
  • the existing mobile terminals are basically equipped with various communication modules, including wireless communication modules, such as a WIFI module, a 3G communication module, a 4G communication module, and the like.
  • the existing mobile terminal is therefore very suitable for collecting user information, and with the development of broadband mobile communication, the large amount of collected user information can be uploaded to the background computing platform (for example, a cloud computing platform in the background) in a timely manner, as shown in FIG. 1.
  • the background computing platform obtains the user information and, after processing it through information filtering and pattern recognition algorithms, combines it into a valid human body feature database in the background.
  • the human body feature database can be used to provide human body characteristic data support for virtual humans that are equivalent to real people.
  • compared with data from special collection devices, the user information collected by the user's portable mobile terminal in the user's various usual usage states covers a wider range, provides stronger training data, and is more representative, so that the accuracy of the resulting model can be higher.
  • Step 201 The mobile terminal collects user information through its own data collection module.
  • the user information in this embodiment includes user feature information and/or user motion information.
  • the user feature information in this embodiment refers to the user's own feature information, and may include, for example, user voice information and/or user face information.
  • the user motion information in this embodiment refers to walking speed information and/or user path information when the user carries the mobile terminal; the walking speed here includes the average walking speed information of the user and/or the walking speed information of the user in different time periods.
  • Step 202: The mobile terminal sends the collected user information to the background computing platform.
  • the mode in which the mobile terminal collects user information may be set to an automatic mode and a manual mode.
  • in the automatic mode, the mobile terminal can automatically detect its current state and, according to the detection result, determine whether to start user information collection and which user information to collect.
  • in the manual mode, the mobile terminal initiates the collection process of the corresponding user information mainly based on external data collection instructions. The following describes the collection process of various types of user information.
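  • The automatic/manual mode selection described above can be sketched as follows. This is an illustrative Python sketch only; the `TerminalState` fields and function names are hypothetical stand-ins for the terminal's sensor state, not part of the claimed invention:

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class TerminalState:
    """Hypothetical snapshot of the conditions the description mentions."""
    in_call: bool = False            # regular or third-party voice/video call
    externally_placed: bool = False  # e.g. lying on a desk, per proximity sensor
    screen_facing_user: bool = False
    screen_lit: bool = False

def select_collections(state: TerminalState,
                       external_command: Optional[str] = None) -> Set[str]:
    """Decide which data collection sub-modules to start.

    Manual mode: an external instruction names the sub-module directly.
    Automatic mode: inspect the detected state of the terminal."""
    if external_command is not None:
        return {external_command}
    started = set()
    if state.in_call or state.externally_placed:
        started.add("voice")
    if state.screen_facing_user and state.screen_lit:
        started.add("image")
    return started
```

In automatic mode a call or an externally placed terminal triggers voice collection, while a lit screen facing the user triggers image collection, mirroring the flows of FIG. 3, FIG. 5, and FIG. 6.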
  • in the automatic mode, the process for the mobile terminal to collect user voice information is shown in FIG. 3 and includes the following steps: Step 301: The mobile terminal determines whether it is currently in the voice collection mode, which specifically includes determining whether the mobile terminal is in a call state and/or in an externally placed state.
  • the call state in this step may be a regular voice and/or video call, or a voice and/or video call made using third-party software (such as WeChat, QQ, Momo);
  • the externally placed state in this step refers to a state in which the mobile terminal has been taken out and set down, such as lying on a desktop; although the user may not be using the mobile terminal to make a call, the voice data collection sub-module of the mobile terminal can still be turned on to collect the voice information of the user's ordinary speech.
  • various proximity sensors of the mobile terminal may be used to determine whether the mobile terminal is currently in an externally placed state.
  • Step 302: The mobile terminal turns on its voice data collection sub-module to collect user voice information, until the mobile terminal is no longer in the voice collection mode (for example, the call ends or the mobile terminal is no longer in an externally placed state); the voice data collection sub-module here may specifically be a microphone.
  • in the manual mode, the process for the mobile terminal to collect user voice information is shown in FIG. 4, including: Step 401: The mobile terminal receives an external voice collection instruction. Step 402: The mobile terminal starts the voice data collection sub-module according to the external voice collection instruction to collect user voice information, until an external voice collection end instruction is received, or it is determined that the current call has ended or the mobile terminal is no longer in an externally placed state.
  • in this embodiment, to ensure the validity of the data collected when the mobile terminal starts the voice data collection sub-module, a piece of user voice binding information, that is, a recording of the user bound to the mobile terminal, is pre-stored in the mobile terminal.
  • the mobile terminal compares at least one piece of user voice information collected by the voice data collection sub-module with the pre-stored user voice binding information to determine whether the two match; if they match, the piece of user voice information is stored; otherwise, the user voice information is discarded.
  • in this embodiment, the amount of user voice information that needs to be matched may be set according to the specific application scenario. For example, it can be set that only the first pieces of collected voice data need to be matched: once a piece of voice data matches the pre-stored user voice binding information, subsequently collected user voice information is no longer matched. Alternatively, to improve the reliability of voice data collection, all collected user voice data may be matched according to the above process, storing only the user voice information that matches the pre-stored user voice binding information and discarding the unmatched data.
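  • The two matching policies described above (match only the first segments versus match every segment) can be sketched as follows. This is a hypothetical Python illustration; the `matches_binding` predicate stands in for the actual voiceprint comparison, which the description does not specify:

```python
def match_voice_segments(segments, matches_binding, match_first_only=True):
    """Filter collected voice segments against the pre-stored binding recording.

    With match_first_only=True, once one segment matches, later segments are
    accepted without further comparison (the relaxed rule in the description).
    With match_first_only=False, every segment is compared and only matching
    segments are kept (the high-reliability rule)."""
    stored, verified = [], False
    for seg in segments:
        if verified and match_first_only:
            stored.append(seg)           # trust later segments once verified
        elif matches_binding(seg):
            stored.append(seg)
            verified = True
        # segments that fail the comparison are discarded
    return stored
```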
  • before the matching, the mobile terminal may further determine whether the user voice information includes valid vocal data (vocal audio frequencies are roughly 60 Hz to 2 kHz); if so, the subsequent matching is performed; otherwise, the user voice information is directly discarded. The user voice information collected by the mobile terminal includes both the voice data of the user's speech and background sound data; in this embodiment, an audio noise suppression module may first remove the background noise, and a volume threshold may then be used to determine whether valid vocal data is present. According to the specific application scenario, only the first pieces of collected voice data may be judged: once one piece of voice data is determined to contain valid vocal data, subsequently collected voice data is not judged.
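  • One possible way to test for valid vocal data in the 60 Hz to 2 kHz band is to measure how much of the signal's spectral energy falls inside that band. The sketch below is illustrative only (the description uses a volume threshold after noise suppression; the naive DFT, the assumed 8 kHz sample rate, and the 0.5 energy-ratio threshold are all assumptions for the example):

```python
import math

SAMPLE_RATE = 8000          # assumed capture rate for this sketch
VOCAL_BAND = (60.0, 2000.0)  # vocal band from the description (60 Hz - 2 kHz)

def band_energy_ratio(samples, sample_rate=SAMPLE_RATE, band=VOCAL_BAND):
    """Fraction of spectral energy inside `band`, via a naive DFT (educational,
    O(n^2); a real implementation would use an FFT)."""
    n = len(samples)
    total = in_band = 0.0
    for k in range(1, n // 2):          # skip DC, positive frequencies only
        freq = k * sample_rate / n
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        power = re * re + im * im
        total += power
        if band[0] <= freq <= band[1]:
            in_band += power
    return in_band / total if total else 0.0

def contains_vocal_data(samples, threshold=0.5):
    """Heuristic stand-in for the validity check in the description."""
    return band_energy_ratio(samples) >= threshold
```

A 440 Hz tone (inside the band) passes the check, while a 3.5 kHz tone (outside the band) is rejected.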
  • the above judgment of whether the current user is the user bound to the mobile terminal is performed during the collection of voice data; this determination may also be performed before step 302 and step 402 above.
  • it can be judged by various authentication methods such as a user name and password.
  • the user may be required to input a username and password before using the mobile terminal, and only after inputting the correct username and password can be authenticated.
  • in the automatic mode, the process for the mobile terminal to collect user facial information is shown in FIG. 5 or FIG. 6.
  • Step 501: The mobile terminal determines whether its display screen is currently facing the user; if yes, go to step 502; otherwise, continue to determine. Step 502: The mobile terminal starts the image data collection sub-module to collect user facial information; the image data collection sub-module here may be a front camera of the mobile terminal.
  • Step 601: The mobile terminal determines whether its display screen is currently facing the user; if yes, go to step 602; otherwise, continue to determine;
  • Step 602: The mobile terminal determines whether the display screen is currently lit; if yes, go to step 603; otherwise, return to step 601 or continue to judge;
  • Step 603: The mobile terminal turns on the image data collection sub-module to collect user facial information; the image data collection sub-module here may be a front camera of the mobile terminal.
  • there are multiple ways for the mobile terminal to determine whether the display is currently facing the user, and one may be selected according to the specific situation.
  • for a mobile terminal equipped with a gyroscope, the gyroscope can be used to judge whether the display of the mobile terminal is currently facing the user; the judgment can also be made by combining an acceleration sensor and a geomagnetic sensor. In addition, whether the display of the mobile terminal is currently facing the user can be determined by detecting whether the user is touching the display, or by the open state of a browser, reader, video player, or similar application on the mobile terminal. In the above step 602, whether the display screen is currently lit can be specifically determined according to the operating state of the LCD or OLED of the display screen.
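  • As one concrete illustration of the accelerometer-based judgment mentioned above, the device's screen normal can be compared against the gravity vector: when the screen points roughly upward (toward a user looking down at it) and is lit, it is plausibly facing the user. The function name, axis convention (z-axis out of the screen), and 60-degree tilt limit are assumptions for this sketch, not values from the description:

```python
import math

def screen_facing_user(accel_xyz, screen_lit, tilt_limit_deg=60.0):
    """Crude facing test from a single accelerometer reading (m/s^2).

    accel_xyz: (ax, ay, az) with the z-axis pointing out of the screen.
    Returns True when the screen is lit and tilted less than tilt_limit_deg
    away from 'screen up', a rough proxy for 'facing the user'."""
    ax, ay, az = accel_xyz
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0 or not screen_lit:
        return False
    # angle between the screen normal and the upward vertical
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, az / g))))
    return tilt <= tilt_limit_deg
```

A production implementation would fuse the gyroscope and geomagnetic sensor as the description suggests; a single accelerometer sample cannot distinguish tilt from linear acceleration.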
  • in the manual mode, as shown in FIG. 7: Step 701: The mobile terminal receives an external image collection instruction. Step 702: The mobile terminal starts the image data collection sub-module according to the external image collection instruction to collect user facial information.
  • before shooting, the mobile terminal determines whether the current ambient light meets the shooting requirement through a light detection sub-module (for example, a light sensor); if satisfied, the subsequent shooting step is performed; otherwise, the shooting is ended.
  • similar to the validity determination for user voice data, user image binding information, which may be a standard photo of the user, may be pre-stored in the mobile terminal before the mobile terminal starts the image data collection sub-module to collect image data. While the image data collection sub-module is open, the mobile terminal may compare at least one piece of user image information collected by the image data collection sub-module with the pre-stored user image binding information to determine whether the two match; if so, the user image information is stored; otherwise, the user image information is discarded. In this embodiment, the amount of user image information that needs to be matched may be set according to the specific application scenario.
  • for example, the first piece or first few pieces of user image data matching the pre-stored user image binding information may be saved in full, while for subsequent user image data only the facial feature data is extracted for storage and the remaining data is discarded; for instance, only facial data such as face contours, hairline, eyebrows, nose tip, eye lines, and lip lines may be stored. This storage method can reduce the amount of data stored and transmitted subsequently.
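  • The storage policy described above (keep the first matching capture in full, keep only feature data afterwards, discard non-matching captures) can be sketched as follows. The data shapes are hypothetical: `captures` pairs each raw image with its already-extracted feature data, and `binding_match` stands in for the unspecified face-matching routine:

```python
def filter_face_captures(captures, binding_match, keep_full=1):
    """Apply the patent's storage policy to a sequence of captures.

    captures: iterable of (image, features) pairs.
    binding_match: predicate comparing an image to the binding photo.
    Returns a list of ("full", image) or ("features", features) records."""
    stored = []
    full_kept = 0
    for image, features in captures:
        if not binding_match(image):
            continue                       # discard non-matching captures
        if full_kept < keep_full:
            stored.append(("full", image))  # first match(es): keep whole image
            full_kept += 1
        else:
            stored.append(("features", features))  # later: feature data only
    return stored
```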
  • the determination of whether the current user is the user bound to the mobile terminal may also be performed before step 502, step 603, and step 702.
  • it can be judged by various authentication methods such as a user name and password.
  • the user may be required to input a username and password before using the mobile terminal, and only after inputting the correct username and password can the authentication pass.
  • the collection of user motion information may likewise be started automatically by the mobile terminal in the automatic mode, or the user may start the collection process by sending an external motion information collection instruction to the mobile terminal.
  • the background computing platform can combine the user motion information with the user voice feature information and/or user image information as a basis for judging the user's physical and emotional reactions under different environmental conditions. For example, when the background computing platform finds that at night the user's pace suddenly accelerates from uniform movement and the audio frequency changes drastically, it can determine that the user may be in danger and trying to get away.
  • to collect the user motion information, the acceleration sensor, geomagnetic sensor, and/or gyroscope of the mobile terminal may be used, and key data such as the user's motion range, speed, and trajectory may be further calculated in combination with GPS.
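  • Deriving walking speed and path length from positioned samples can be sketched as below. The flat local (x, y) coordinates in metres are an assumption for the example; real GPS output is latitude/longitude and would need projection first:

```python
import math

def motion_summary(track):
    """Summarise a walk from timestamped positions.

    track: list of (t_seconds, x_metres, y_metres) samples, e.g. derived
    from GPS fixes. Returns (total_distance_m, average_speed_m_per_s)."""
    dist = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dist += math.hypot(x1 - x0, y1 - y0)   # straight-line segment length
    duration = track[-1][0] - track[0][0] if len(track) > 1 else 0.0
    avg_speed = dist / duration if duration else 0.0
    return dist, avg_speed
```

Per-period speeds (the "different time periods" of the description) would follow by applying the same function to sub-spans of the track.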
  • the obtained user motion information can be stored directly. In order to reduce the data transmission amount and the network load as much as possible, for the user feature information included in the user information, the mobile terminal extracts the feature component data included in the user feature information and sends only the extracted feature component data to the background computing platform.
  • the process of extracting the feature components of the user voice information and the user image information will be described below.
  • for user voice information, noise outside the voice signal band can first be filtered out by a band-pass filter; the result is then sent to the information feature extraction sub-module, which extracts the feature components from the voice signal and discards other redundant data.
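  • The band-pass pre-filtering step can be illustrated with a crude one-pole high-pass followed by a one-pole low-pass. This is a teaching sketch only; the description does not specify the filter design, and a real implementation would use a proper IIR/FIR design:

```python
import math

def bandpass(samples, sample_rate, low_hz, high_hz):
    """Crude band-pass: one-pole high-pass (cutoff low_hz) then
    one-pole low-pass (cutoff high_hz). Returns the filtered samples."""
    dt = 1.0 / sample_rate
    rc_hp = 1.0 / (2 * math.pi * low_hz)
    rc_lp = 1.0 / (2 * math.pi * high_hz)
    a_hp = rc_hp / (rc_hp + dt)     # high-pass coefficient
    a_lp = dt / (rc_lp + dt)        # low-pass coefficient
    out = []
    hp_prev_in = hp_prev_out = lp_prev = 0.0
    for s in samples:
        hp = a_hp * (hp_prev_out + s - hp_prev_in)   # remove low frequencies
        hp_prev_in, hp_prev_out = s, hp
        lp_prev = lp_prev + a_lp * (hp - lp_prev)    # remove high frequencies
        out.append(lp_prev)
    return out
```

With a 60 Hz to 2 kHz pass band at an assumed 8 kHz sample rate, a constant (DC) input is driven to zero while an in-band tone passes largely unattenuated.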
  • reconstruction is then performed by the background computing platform based on these feature components and pre-stored typical audio information of the user. The typical audio information here can be entered by the user under guidance during initialization, for example by asking the user to read a number of key words and sentences. After the feature components of the user voice information are acquired, they can be stored and speech coding is then performed.
  • the speech coding in this embodiment adopts a hybrid coding method combining waveform coding and parameter coding. Its advantage is that hybrid coding carries several speech feature parameters together with partial waveform coding information, achieving both the high quality of waveform coding and the low rate of parameter coding.
  • the compressed voice data is stored in the data transmission queue.
  • the extraction process for the feature components of the user image information is as follows: the information feature extraction sub-module compares the collected user image information with the pre-stored standard facial image to obtain the feature components, and the extracted feature components are then compressed using the H.264 standard.
  • the advantages of this compression method are: compared with other video coding in the related art, the image quality is better under the same bandwidth, and the facial feature information can be fully preserved. Moreover, H.264 is highly fault-tolerant and copes with packet loss and other errors in unstable network environments, making it well suited to wireless transmission.
  • the compressed image data is stored in the data transfer queue.
  • the data amount of the user motion information included in the user information is smaller than that of the voice data and image data, so the mobile terminal can send it directly to the background computing platform after collection, or store it in the data transmission queue and send it together with the voice data and image data.
  • the main function of the transmission queue is to determine the sending rule of the data according to the current network environment of the mobile terminal. Specifically, data transmission can be scheduled according to the current wireless working state of the mobile terminal, so as to avoid putting great pressure on the wireless network.
  • when the mobile terminal detects that the current network environment is a WIFI network, the sending rule determined by the transmission queue is to send all data directly; when the mobile terminal detects that the current network environment is a 3G network, the sending rule determined by the transmission queue is to package the data into small packets of 200 KB or less and transmit them in different time periods throughout the day. The transmission time can also be controlled by the user, to avoid impact on the user's normal wireless Internet access.
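  • The network-dependent sending rule can be sketched as a queue drainer. Only the 200 KB packet cap comes from the description; the function shape, the network labels, and the hold-on-unknown-network behaviour are assumptions for this illustration:

```python
from collections import deque

MAX_3G_PACKET = 200 * 1024   # 200 KB cap from the description

def drain_queue(queue, network):
    """Turn queued data blobs into packets per the network-dependent rule.

    'wifi': send every blob whole, immediately.
    '3g':   split each blob into packets of at most MAX_3G_PACKET bytes
            (scheduling them across the day is left to the caller).
    other:  hold the data until a usable network appears."""
    packets = []
    while queue:
        item = queue.popleft()
        if network == "wifi":
            packets.append(item)
        elif network == "3g":
            for off in range(0, len(item), MAX_3G_PACKET):
                packets.append(item[off:off + MAX_3G_PACKET])
        else:
            queue.appendleft(item)   # unknown network: keep data queued
            break
    return packets
```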
  • Embodiment 2: In order to better understand the present invention, the present invention is further described below in conjunction with the specific structure of the mobile terminal. FIG. 8 shows a schematic structural diagram of a mobile terminal according to this embodiment, which includes a processing module, a data collection module, and a sending module.
  • the processing module is configured to control the data collection module to collect user information and send the user information to the background computing platform through the sending module; the user information includes user feature information and/or user motion information.
  • the user feature information in this embodiment refers to the user's own feature information, and may include, for example, user voice information and/or user face information.
  • the user motion information in this embodiment refers to the walking speed information and/or user path information when the user carries the mobile terminal; the walking speed here includes the average walking speed information of the user and/or the walking speed information of the user in different time periods. In this embodiment, the mode in which the mobile terminal collects user information may be set to an automatic mode and a manual mode.
  • In the automatic mode, the mobile terminal can automatically detect its current state and, according to the detection result, decide whether to start user information collection and which user information to collect.
  • In the manual mode, the mobile terminal initiates the collection of the corresponding user information mainly based on external data collection instructions. The collection of the various kinds of user information is described below with examples.
  • In the automatic mode, the process of the mobile terminal collecting the user voice information includes: the processing module of the mobile terminal judges whether the mobile terminal is currently in the voice collection mode, which specifically includes judging whether the mobile terminal is in a call state and/or in an externally placed state; if so, it turns on the voice data collection sub-module included in the data collection module to collect user voice information, until the mobile terminal is no longer in the voice collection mode.
  • the voice data collection sub-module here may specifically be a microphone.
  • The call state in this embodiment may be a regular voice and/or video call, or a voice and/or video call made through third-party software (such as WeChat, QQ or Momo).
  • The externally placed state refers to a state in which the mobile terminal has been taken out and set down, for example on a desktop. Although the user may not be using the mobile terminal to make a call at that time, the voice data collection sub-module can still be turned on to collect the voice information of the user's ordinary speech. In this embodiment, the various proximity sensors of the mobile terminal may be used to judge whether it is currently in the externally placed state.
  • In the manual mode, the process of the mobile terminal collecting the user voice information includes: the processing module of the mobile terminal receives an external voice collection instruction and, according to it, turns on the voice data collection sub-module to collect user voice information, until an external voice collection end instruction is received, or it is judged that the current call has ended or that the terminal is no longer in the externally placed state.
  • To judge the validity of the user's own data, before the voice data collection sub-module is turned on, the mobile terminal may pre-store a piece of user voice binding information, that is, a recording of the user bound to the mobile terminal.
  • During collection, the processing module compares at least one segment of user voice information collected by the voice data collection sub-module with the pre-stored user voice binding information and judges whether the two match; if they match, the segment of user voice information is stored; otherwise, it is discarded.
  • The amount of user voice information that needs to be matched can be set according to the specific application scenario. For example, it can be set so that only the initial segments of the collected voice need to be matched: once one segment matches the pre-stored user voice binding information, subsequently collected user voice information is no longer matched.
  • Alternatively, to improve the reliability of voice data collection, all collected user voice data may be matched according to the above process, with only the user voice information matching the pre-stored user voice binding information stored and the unmatched portion discarded.
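The "match the initial segments, then accept the rest" policy can be expressed compactly. The sketch below is illustrative only; `matches` stands in for whatever voiceprint comparison the terminal actually uses, and the function name is a hypothetical label:

```python
# Sketch of the matching policy described above: segments are matched
# against the pre-stored voice binding information until one matches;
# from that point on, later segments are stored without further matching.
# Unmatched segments collected before verification are discarded.

def filter_segments(segments, binding, matches):
    """matches(segment, binding) stands in for real voiceprint comparison."""
    stored, verified = [], False
    for seg in segments:
        if verified or matches(seg, binding):
            stored.append(seg)
            verified = True   # stop matching once one segment has matched
    return stored
```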
  • Before comparing the user voice information with the pre-stored user voice binding information, the processing module of the mobile terminal may further judge whether the user voice information contains valid human voice data (human voice audio frequencies lie between 60 Hz and 2 kHz); only if it does is the subsequent matching performed; otherwise, the user voice information is discarded directly.
  • The user voice information collected by the mobile terminal includes the voice data of the user's speech and background sound data; the processing module in this embodiment may first remove the background noise through an audio noise suppression module, and then judge through a volume threshold whether valid human voice data is present. Depending on the application scenario, only the initial segments of the collected voice may be judged: once one segment is judged to contain valid human voice data, subsequently collected voice data is no longer judged.
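A minimal sketch of such a validity check follows. This is not the patent's algorithm: the volume threshold is an assumed stand-in for the noise suppression step, and the dominant frequency is crudely estimated from zero crossings rather than a real band-pass analysis; only the 60 Hz–2 kHz band from the text is taken as given:

```python
import math

# Illustrative check for "valid human voice data" in a frame of
# normalized audio samples: (a) the volume (RMS) must exceed a threshold,
# and (b) the dominant frequency, estimated from zero crossings, must lie
# in the 60 Hz - 2 kHz band mentioned in the text.

def has_valid_voice(samples, sample_rate, volume_threshold=0.05):
    if not samples:
        return False
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms < volume_threshold:           # too quiet: background noise only
        return False
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    # each full cycle of a tone produces roughly two zero crossings
    freq = crossings * sample_rate / (2.0 * len(samples))
    return 60.0 <= freq <= 2000.0
```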
  • The processing module may also be configured to judge directly, before the voice data collection sub-module is turned on, whether the current user is the user bound to the mobile terminal, for example through various authentication methods such as a username and password.
  • In the automatic mode, the process of the mobile terminal collecting the user's facial information includes: the processing module of the mobile terminal judges whether the display screen of the mobile terminal currently faces the user and, if so, turns on the image data collection sub-module included in the data collection module to collect the user's facial information; the image data collection sub-module here may be the front camera of the mobile terminal. Alternatively, the processing module judges whether the display screen currently faces the user and whether the display screen is currently lit and, if both hold, turns on the image data collection sub-module to collect the user's facial information.
  • There are several ways for the processing module to judge whether the display currently faces the user, and one can be selected according to the specific situation. For example, for a mobile terminal equipped with a gyroscope, the processing module can judge through the gyroscope whether the display of the mobile terminal currently faces the user; it can also judge by combining an acceleration sensor and a geomagnetic sensor.
  • The processing module may also judge whether the display currently faces the user by determining whether the user is in contact with or touching the display; or it may judge this from the open state of application software in the mobile terminal such as a browser, reader or video player.
  • The processing module can judge whether the display is currently lit according to the working state of the display screen's LCD or OLED.
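The cues above might be combined into a simple decision rule. The sketch below is hypothetical: the signal names are illustrative (a real terminal would read them from its sensor drivers), and it follows the stricter variant in which the screen must also be lit:

```python
# Illustrative decision rule for automatic facial-information collection:
# the display is judged to face the user if any of the listed cues holds
# (gyroscope orientation, touch contact, or a reader/player being open),
# and the front camera is only turned on when the screen is also lit.

def should_capture_face(facing_user: bool, screen_lit: bool,
                        touch_active: bool = False,
                        reader_or_player_open: bool = False) -> bool:
    oriented = facing_user or touch_active or reader_or_player_open
    return oriented and screen_lit
```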
  • In the manual mode, the process of the mobile terminal collecting the user's facial information includes: the processing module of the mobile terminal receives an external image collection instruction and, according to it, turns on the image data collection sub-module to collect the user's facial information.
  • The processing module can also be configured to judge, through the light detection sub-module of the mobile terminal (such as a light sensor) and before the image data collection sub-module is turned on, whether the current ambient light meets the shooting requirement; only if it does is the image data collection sub-module turned on for the subsequent shooting steps; otherwise, the shooting ends.
  • To judge the validity of the user's own data, user image binding information (which may be a standard photo of the user) may be pre-stored in the mobile terminal before the processing module turns on the image data collection sub-module. Then, while the image data collection sub-module collects user image information, the processing module may compare at least one piece of user image information collected by the sub-module with the pre-stored user image binding information, judge whether the two match and, if so, store that piece of user image information; otherwise, discard it.
  • The amount of user image information that needs to be matched may be set according to the specific application scenario. For example, it may be set so that only the first one or several images collected need to be matched: once one image matches the pre-stored user image binding information, subsequently collected user image information is no longer matched. It should be noted that, when storing the collected user image data, the processing module may first store in full the first one or several images that match the pre-stored user image binding information; for subsequently collected user image data, using the previously stored data as base samples, only the facial feature data (for example face contour, hairline, eyebrows, nose tip, eye line and lip line) is extracted and stored, and the remaining data can be discarded. This storage method reduces the amount of facial data collected subsequently.
  • The processing module may also be configured to judge, before the image data collection sub-module is turned on, whether the current user is the user bound to the mobile terminal, for example through various authentication methods such as a username and password.
  • For example, the user may be required to enter a username and password before using the mobile terminal; authentication passes only when the correct username and password are entered, at which point it can be determined that the current user is the user bound to the mobile terminal.
  • The collection of user motion information may likewise be started automatically by the processing module of the mobile terminal in the automatic mode, or the user may start the collection process by sending an external motion information collection instruction to the processing module. It is mainly used to gather the user's daily movement characteristics, such as walking speed and user path information.
  • The background computing platform can combine the user's motion information with the user's voice feature information and/or user image information as the basis for inferring the user's physical and emotional reactions under different environmental conditions. For example, if the background computing platform finds that at night the user's pace suddenly changes from uniform to accelerating while the audio changes drastically, it may determine that the user may be in danger and trying to get away.
  • Specifically, the processing module may calculate key data such as the user's range of motion, speed and trajectory through the acceleration sensor, geomagnetic sensor and/or gyroscope of the mobile terminal, further combined with GPS.
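As a toy illustration of deriving walking-speed data from acceleration samples (not the patent's algorithm — a real implementation would remove gravity and fuse geomagnetic, gyroscope and GPS data as the text describes):

```python
# Illustrative sketch: integrate forward acceleration over time to
# estimate instantaneous speed, then average it over the sampling window,
# yielding the kind of average-walking-speed figure discussed above.

def average_speed(accel_samples, dt):
    """accel_samples: forward acceleration in m/s^2 at fixed interval dt (s)."""
    v, speeds = 0.0, []
    for a in accel_samples:
        v += a * dt          # numeric integration of acceleration
        speeds.append(abs(v))
    return sum(speeds) / len(speeds) if speeds else 0.0
```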
  • The user motion information obtained can be stored directly.
  • For the user information collected by the mobile terminal, in order to transmit it to the background computing platform in a timely and accurate manner, the amount of data transmitted should be reduced as much as possible to lower the network load; this embodiment may transmit the user information as follows.
  • For the user feature information included in the user information, the processing module extracts, through the information feature extraction module it includes, the feature component data contained in the user feature information, and sends the extracted feature component data to the background computing platform. In an actual application, the processing module can be a processor with a processing function. The processes by which the processing module extracts the feature components of the user voice information and the user image information are described below.
  • For the user voice information, the processor can filter out the noise outside the voice signal through a band-pass filter, and then extract the feature components from the voice signal through the information feature extraction sub-module it includes, discarding the other redundant data.
  • Reconstruction is then performed by the background computing platform based on these feature components and pre-stored typical audio information of the user.
  • The typical audio information here may be input by the user under guidance during initialization; for example, the user may be guided to read a number of words and sentences with key pronunciations.
  • After the feature components of the user voice information are obtained, they can be stored, and voice coding is then started.
  • To keep the feature information as complete as possible, the speech coding in this embodiment adopts a hybrid coding method combining waveform coding and parameter coding.
  • Its advantage is that hybrid coding includes several speech feature parameters as well as partial waveform coding information, achieving both the high quality of waveform coding and the low rate of parameter coding.
  • After compression, the compressed voice data is stored in the data transmission queue.
  • The extraction process for the feature components of the user image information is as follows: the processor obtains the feature components by comparing, through the information feature extraction sub-module it includes, the collected user image information with a pre-stored standard facial image, and then compresses the extracted feature components using the H.264 standard.
  • The advantages of this compression method are that, compared with other video coding in the related art, the image quality is better at the same bandwidth, so the integrity of the facial feature information can be guaranteed to the greatest extent.
  • Moreover, H.264 is highly fault-tolerant and handles errors such as packet loss in unstable network environments, making it well suited to wireless transmission. After compression, the compressed image data is stored in the data transmission queue.
  • For the user motion information included in the user information, the data amount is relatively small, so the mobile terminal can send it directly to the background computing platform in real time after collection. It can also be stored in the data transfer queue and sent together with the voice data and image data.
  • The main function of the transmission queue is to determine the sending rule for the data according to the mobile terminal's current network environment. Specifically, data transmission can be scheduled according to the current wireless working state of the mobile terminal, so as to avoid placing excessive pressure on the wireless network.
  • For example, when the processor detects that the current network environment of the mobile terminal is a WIFI environment, the sending rule determined by the transmission queue is to send all data directly; when the processor detects that the current network environment is a 3G network, the rule determined by the transmission queue is to pack the data into small packets of 200 KB or less and send them in different time periods throughout the day. The sending time can also be controlled by the user, so as to avoid affecting the user's normal wireless Internet access.

Abstract

The present invention discloses a user information acquisition method and a mobile terminal. User information, including user feature information and/or user motion information, is collected by the mobile terminal's own data collection module and sent to a background computing platform (for example, a background cloud computing platform), which processes the user information to form an effective human-feature database. Because the present invention acquires user information through a mobile terminal that the user carries and uses (for example, a mobile phone, an IPAD, or an e-book terminal), rather than through dedicated collection equipment in a specific environment, the acquired user information covers a wider range and is more representative, and the algorithm model ultimately obtained from this user information is more accurate.

Description

用户信息获取方法及移动终端 技术领域 本发明涉及通信领域, 具体涉及一种用户信息获取方法及移动终端。 背景技术 虚拟人技术, 也就是人类自身数字化, 是目前大数据技术背景下最令业界追捧的 信息技术前沿应用之一。 它是基于对人体特征的大数据量采集, 并通过数据处理、 运 算后, 提取出关键人体特征, 并通过人工智能, 利用设备仪器, 实现一个基于计算模 型的数字化人。 虚拟人的动作、 表情、 语言等特征通过显示技术实现, 这些特征和被 采集的真实人的相应特征非常相似, 基于该技术, 虚拟人可以自主和其他真实人或者 虚拟人沟通交流。 目前虚拟人技术的实用化还存在很多技术难点。 例如, 目前对人体 特征数据的采集主要是通过专门的采集设备在专门的实验环境下进行采集, 所采集到 的数据涵盖范围窄、 代表性差, 导致最终得到的算法模型准确率差。 发明内容 本发明提供了一种用户信息获取方法及移动终端, 用于解决相关技术中通过专门 的采集设备采集到的人体特征数据涵盖范围窄、 代表件差, 导致最终得到的算法模型 准确率差的问题。 为了解决上述问题, 本发明提供了一种用户信息获取方法, 包括: 移动终端通过自身的数据采集模块采集用户信息, 其中, 所述用户信息包括用户 特征信息和 /或用户运动信息; 所述移动终端将采集的所述用户信息发送给后台计算平台。 在本发明的一种实施例中, 所述用户信息包括所述用户特征信息时, 所述用户特 征信息包括用户语音信息和 /或用户面部信息。 在本发明的一种实施例中, 所述用户特征信息包括所述用户语音信息时, 所述移 动终端通过所述数据采集模块的语音数据采集子模块采集所述用户语音信息, 包括: 所述移动终端判断当前是否进入语音采集模式, 如是, 开启所述语音数据采集子 模块采集所述用户语音信息; 所述移动终端判断是否进入语音采集模式包括判断是否 处于通话状态和 /或是否处于外置放置状态; 或, 所述移动终端接收外部语音采集指令, 根据所述外部语音采集指令开启所述语音 数据采集子模块采集所述用户语音信息。 在本发明的一种实施例中, 所述用户特征信息包括所述用户面部信息时, 所述移 动终端通过所述数据采集模块的图像数据采集子模块采集所述用户面部信息, 包括: 所述移动终端判断其显示屏当前是否面向于用户, 如是, 则开启所述图像数据采 集子模块采集所述用户面部信息; 或, 所述判断移动终端其显示屏当前是否面向于用户且显示屏当前是否被点亮,如是, 则开启所述图像数据采集子模块采集所述用户面部信息; 或, 所述移动终端接收外部图像采集指令, 根据所述外部图像采集指令开启所述图像 数据采集子模块采集所述用户面部信息。 在本发明的一种实施例中, 所述移动终端开启所述语音数据采集子模块采集所述 用户语音信息包括: 所述移动终端将所述语音数据采集子模块采集的至少一段用户语音信息与预先存 储的用户语音绑定信息进行比较, 判断二者是否匹配, 如是, 则存储所述用户语音信 息。 在本发明的一种实施例中, 所述移动终端将所述用户语音信息与所述用户语音绑 定信息进行比较之前,还包括:判断所述用户语音信息中是否包含有有效的人声数据。 在本发明的一种实施例中, 所述移动终端开启所述图像数据采集子模块采集所述 用户图像信息包括: 所述移动终端将所述图像数据采集子模块采集的至少一幅用户图像信息与预先存 储的用户图像绑定信息进行比较, 判断二者是否匹配, 如是, 则存储所述用户图像信 息。 在本发明的一种实施例中, 所述用户信息包括所述用户特征信息时, 所述移动终 端在获取所述用户特征信息之前, 还包括: 所述移动终端判断当前使用的用户是否是 与之绑定的用户。 在本发明的一种实施例中, 所述用户信息包括所述用户运动信息时, 所述用户运 动信息包括用户步行速度信息和 /或用户路径信息。 在本发明的一种实施例中, 所述移动终端将采集的所述用户信息发送给所述后台 计算平台包括: 所述用户信息包括所述用户特征信息时, 提取所述用户特征信息包含的特征分量 数据; 将提取出的所述特征分量数据发送给所述后台计算平台。 在本发明的一种实施例中, 所述将提取出的所述特征分量数据发送给所述后台计 算平台包括: 将提取出的所述特征分量数据存入数据传送队列; 根据当前网络环境确定所述数据传送队列中的数据的发送规则; 根据所述规则将所述数据传送队列中的数据发送给所述后台计算平台。 为了解决上述问题, 本发明提供了一种移动终端, 包括处理模块、 数据采集模块 和发送模块; 所述处理模块设置为控制所述数据采集模块采集用户信息, 并通过所述发送模块 发送给后台计算平台, 其中, 所述用户信息包括用户特征信息和 /或用户运动信息。 在本发明的一种实施例中, 
所述用户信息包括所述用户特征信息时, 所述用户特 征信息包括用户语音信息和 /或用户面部信息。 在本发明的一种实施例中, 所述数据采集模块包括语音数据采集子模块, 所述用 户特征信息包括所述用户语音信息时, 所述处理模块控制所述数据采集模块采集所述 用户语音信息包括: 所述处理模块判断所述移动终端当前是否进入语音采集模式, 如是, 开启所述语 音数据采集子模块采集所述用户语音信息; 所述处理模块判断所述移动终端是否进入 语音采集模式包括判断所述移动终端是否处于通话状态和 /或是否处于外置放置状态; 或, 所述处理模块接收外部语音采集指令, 根据所述外部语音采集指令开启所述语音 数据采集子模块采集所述用户语音信息。 在本发明的一种实施例中, 所述数据采集模块包括图像数据采集子模块, 所述用 户特征信息包括所述用户面部信息时, 所述处理模块控制所述数据采集模块采集所述 用户面部信息包括: 所述处理模块判断所述移动终端的显示屏当前是否面向于用户, 如是, 则开启所 述图像数据采集子模块采集所述用户面部信息; 或, 所述处理模块判断所述移动终端的显示屏当前是否面向于用户且所述显示屏当前 是否被点亮, 如是, 则开启所述图像数据采集子模块采集所述用户面部信息; 或, 所述处理模块接收外部图像采集指令, 根据所述外部图像采集指令开启所述图像 数据采集子模块采集所述用户面部信息。 在本发明的一种实施例中, 所述处理模块开启所述语音数据采集子模块采集所述 用户语音信息包括: 所述处理模块将所述语音数据采集子模块采集的至少一段用户语音信息与预先存 储的用户语音绑定信息进行比较, 判断二者是否匹配, 如是, 则存储所述用户语音信 息。 在本发明的一种实施例中, 所述处理模块还设置为将所述用户语音信息与所述用 户语音绑定信息进行比较之前,判断所述用户语音信息中是否包含有有效的人声数据。 在本发明的一种实施例中, 所述处理模块开启所述图像数据采集子模块采集所述 用户图像信息包括: 所述处理模块将所述图像数据采集子模块采集的至少一幅用户图像信息与预先存 储的用户图像绑定信息进行比较, 判断二者是否匹配, 如是, 则存储所述用户图像信 息。 在本发明的一种实施例中, 所述处理模块还设置为在所述用户信息包括所述用户 特征信息时, 控制所述数据采集模块采集所述用户特征信息之前, 判断当前使用的用 户是否是与所述移动终端绑定的用户。 在本发明的一种实施例中, 所述用户信息包括所述用户运动信息时, 所述用户运 动信息包括用户步行速度信息和 /或用户路径信息。 在本发明的一种实施例中, 所述处理模块通过所述发送模块将所述用户信息发送 给所述后台计算平台包括: 所述处理模块判断所述用户信息包括所述用户特征信息时, 提取所述用户特征信 息包含的特征分量数据; 将提取出的所述特征分量数据通过所述发送模块发送给所述 后台计算平台。 本发明的有益效果是: 本发明提供的用户信息获取方法及移动终端, 通过移动终端自身的数据采集模块 采集包括用户特征信息和 /或用户运动信息的用户信息, 并将采集到的用户信息发送给 后台计算平台(例如后台的 计算平台), 由后台计算平台根据该用户信怠处理后形成 一个有效的人体特征数据库。 由于本发明利用用户随身携带、 使用的移动终端 (例如 手机、 IPAD或电子书终端等等)获取用户信息, 而不是利用专门的采集设备在特定的 环境下获取用户信息, 因此获取的用户信息数据涵盖的范围会更为广泛, 代表性也更 好, 根据该用户信息最终得到的算法模型的准确率更高。 附图说明 图 1为本发明实施例 提供的一种通信系统示意图; 图 2为本发明实施例 提供的用户信息获取流程示意图; 图 3为本发明实施例 中自动模式下采集用户语音信息的流程示意图; 图 4为本发明实施例 中手动模式下采集用户语音信息的流程示意图; 图 5为本发明实施例 中自动模式下采集用户图像信息的流程示意图 图 6为本发明实施例一中自动模式下采集用户图像信息的流程示意图二; 图 7为本发明实施例一中手动模式下采集用户图像信息的流程示意图; 图 8为本发明实施例二提供的移动终端的结构示意图。 具体实施方式 下面通过具体实施方式结合附图对本发明作进一步详细说明。 实施例一: 目前, 大多用户身上都随时携带有各种移动终端, 例如智能手机、 IPAD、 PAD, 电子书等; 而这些移动终端大都搭载了数据采集模块, 所搭载的数据采集模块一般都 包括语音数据采集子模块 (例如麦克风)、 图像数据采集子模块 (例如摄像头)、 速度 采集子模块 (例如加速度传感器)、 计时子模块 (例如计时器)、 方位采集子模块 (例 如地磁传感器) 等等。 同时, 现有的移动终端基本都配置有各种通信模块, 包括无线 
通信模块, 例如 WIFI模块、 3G通信模块、 4G通信模块等等。 因此, 现有移动终端 非常适合采集用户信息, 并可随着宽带移动通讯的发展, 将采集的大量用户信息及时 的上传至后台计算平台(例如后台的云计算平台等), 请参见图 1所示。后台计算平台 获取到这些用户信息后, 通过信息筛选、 模式识别算法处理后, 组合成一个在后台的 有效人体特征数据库。 该人体特征数据库可为建立和真实人对等的虚拟人提供人体特 征数据的支持。 利用用户随身携带的移动终端在用户平常的各种使用状态中采集的用 户信息所涵盖的范围也更广, 训练也更强、 代表性也更好, 因此可以使得模型调整准 确率更高。 下面本实施例对通过移动终端获取用户信息的具体过程进行详细说明。 请参见图 2所示, 该图所示为本实施例提供的移动终端采集用户信息的流程示意 图, 其包括: 步骤 201 : 移动终端通过自身的数据采集模块采集用户信息; 本实施例中的用户 信息包括用户特征信息和 /或用户运动信息; 本实施例中的用户特征信息是指用户自身的特征信息, 例如可以包括用户语音信 息和 /或用户面部信息; 本实施例中的用户运动信息是指用户携带移动终端时的步行速 度信息和 /或用户路径信息;此处的步行速度包括用户步行的平均步行速度信息和 /或用 户在不同的时间段的步行速度信息; 步骤 202: 移动终端将采集的用户信息发送给后台计算平台。 本实施例中, 移动终端采集用户信息的模式可以设置为自动模式和手动模式。 其 中, 在自动模式下, 移动终端可以自动检测自身当前的状态, 并根据检测结果决定是 否启动用户信息采集以及具体采集哪种用户信息。 在手动模式下, 移动终端则主要基 于外部的数据采集指令启动对应用户信息的采集过程。 下面对各种用户信息的采集过 程进行举例说明。 在自动模式下, 移动终端采集用户语音信息的过程请参见图 3所示, 包括: 步骤 301 : 移动终端判断当前是否处于语音采集模式, 具体包括判断移动终端是 否处于通话状态和 /或是否处于外置放置状态; 如是, 转至步骤 302; 否则, 重新检测; 该步骤中的通话状态可以是常规的语音和 /或视频通话, 也可以是利通第三方软件 (例如微信、 QQ、 陌陌) 进行的语音和 /或视频通话; 该步骤中的外置放置状态是指 移动终端被取出并被放置的状态, 例如桌面放置状态, 此时虽然用户可能未使用该移 动终端进行通话, 但仍可开启移动终端的语音数据采集子模块对用户平常说话的语音 信息进行采集; 本实施例中, 具体可采用移动终端的各种接近传感器判断移动终端当 前是否处于外置放置状态; 步骤 302: 开启移动终端的语音数据采集子模块采集用户语音信息; 直到移动终 端不处于语音采集模式(例如通话结束或移动终端被不再处于外置放置状态等等);此 处的语音数据采集子模块具体可为麦克风。 在手动模式下, 移动终端采集用户语音信息的过程请参见图 4所示, 包括: 步骤 401 : 移动终端接收外部语音采集指令; 步骤 402: 移动终端根据该外部语音采集指令开启语音数据采集子模块采集用户 语音信息; 直到接收到外部的语音采集结束指令, 或判断移动终端当前进行的通话结 束或当前不再处于外置放置状态等。 在上述步骤 302和步骤 402中, 移动终端开启语音数据采集子模块采集用户语音 信息的过程中, 出于对用户自身数据有效性的判决, 可在移动终端开启语音数据采集 子模块采集用户语音信息之前, 在移动终端中预先存储一段用户语音绑定信息, 也即 与该移动终端绑定的用户本人的录音。 然后在语音信息采集过程中, 移动终端将语音 数据采集子模块采集的至少一段用户语音信息与预先存储的用户语音绑定信息进行比 较, 判断二者是否匹配, 如匹配, 才存储该段用户语音信息; 否则, 丢弃该段用户语 音信息。 在本实施例中, 可根据具体的应用场景设定需要进行匹配的用户语音信息的 量。 例如, 可设定只需要对语音采集的前段语音数据进行匹配, 只要匹配到其中一段 语音数据与预先存储的用户语音绑定信息相匹配时, 则不再对后续采集的用户语音信 息进行匹配。 或为了提高语音数据采集的可靠性, 可在对采集的所有用户语音数据按 照上述过程进行匹配,仅存储与预先存储的用户语音绑定信息相匹配的用户语音信息, 而将不匹配的那部分数据丢弃。 在本实施例中, 移动终端将用户语音信息与预先存储的用户语音绑定信息进行比 较之前, 还可包括判断该用户语音信息中是否包含有有效的人声数据 (人声数据音频 频率在 60HZ-2KHZ之间); 如包含, 才进行后续的匹配工作; 否则, 将该段用户语音 信息直接丢弃。 由于移动终端所采集的用户语音信息包括用户说话的语音数据和背景 音数据; 
本实施例可采用音频噪声抑制模块先将背景噪声去除, 然后通过音量阈值来 判断是否存在有效的人声数据。 本实施例中, 也可根据具体应用场景仅对语音采集的 前段语音数据进行上述判断, 只要判断其中一段语音数据包含有有效的人声数据, 则 不再对后续采集的语音数据进行判断。 上述对当前用户是否是与移动终端绑定的用户本人的判断过程是在语音数据的采 集过程中进行的。 在本实施中, 也可在上述步骤 302和步骤 402之前执行对用户是否 是与移动终端绑定的用户本人的判断。 例如可通过用户名密码等各种认证方式进行判 断。 例如, 可要求用户在使用该移动终端之前, 要求用户输入用户名密码, 只有输入 正确的用户名密码后才能认证通过, 此时则可判定当前的用户就是与移动终端绑定的 用户。 在自动模式下,移动终端采集用户面部信息的过程请参见图 5或图 6所示。其中, 图 5所示包括: 步骤 501 : 移动终端判断其显示屏当前是否面向于用户, 如是, 转置步骤 502, 否 贝 U, 继续判断; 步骤 502: 移动终端开启图像数据采集子模块采集用户面部信息; 此时的图像数 据采集子模块可为移动终端的前置摄像头。 图 6所示包括: 步骤 601 : 移动终端判断其显示屏当前是否面向于用户, 如是, 转置步骤 502, 否 贝 U, 继续判断; 步骤 602: 移动终端判断其显示屏当前是否被点亮, 如是, 转置步骤 603 ; 否则, 转置步骤 601或继续判断; 步骤 603 : 移动开启所述图像数据采集子模块采集用户面部信息; 此时的图像数 据采集子模块可为移动终端的前置摄像头。 在上述步骤 501和步骤 601中, 移动终端判断其显示器当前是否面向用户的方式 有多种, 可根据具体的情况进行选择。 例如, 对于设置陀螺仪的移动终端, 则可通过 陀螺仪判断移动终端的显示器当前是否面向用户; 还可结合加速度传感器和地磁传感 器来判断。 另外, 还可通过判断用户是否有接触或触摸显示器判断移动终端的显示器 当前是面向用户; 或者通过移动终端中的浏览器、 阅读器、 视频播放器等应用软件的 开启状态判断移动终端的显示器当前是面向用户。 上述步骤 602中, 具体可根据显示 屏的 LCD或 OLED等的工作状态判断显示屏当前是否被点亮。 在手动模式下, 移动终端采集用户面部信息的过程请参见图 7所示, 包括: 步骤 701 : 移动终端接收外部图像采集指令; 步骤 702: 移动终端根据该外部图像采集指令开启图像数据采集子模块采集用户 面部信息。 在上述步骤 502、 步骤 603和步骤 702之前, 还可包括以下步骤: 移动终端通过其光线检测子模块 (例如光线传感器) 判断当前环境光线是否满足 拍摄要求, 如满足, 才进行后续的拍摄步骤, 否则, 结束此次拍摄。 在上述步骤 502、 步骤 603和步骤 702中, 出于对用户自身数据有效性的判决, 可在移动终端开启图像数据采集子模块采集图像数据之前, 在移动终端中预先存储用 户图像绑定信息(可以是一张本用户的标准照),然后在移动终端开启图像数据采集子 模块采集用户图像信息的过程中, 移动终端可将图像数据采集子模块采集的至少一幅 用户图像信息与预先存储的用户图像绑定信息进行比较, 判断二者是否匹配, 如是, 则存储该幅用户图像信息; 否则, 丢弃该幅用户图像信息。 本实施例中, 可根据具体 的应用场景设定需要进行匹配的用户图像信息的量。 例如, 可设定只需要对图像采集 的前一幅或几幅图像数据进行匹配, 只要匹配到其中一幅图像数据与预先存储的用户 图像绑定信息相匹配时, 则不再对后续采集的用户语音信息进行匹配。 值得注意的是, 在本实施例中, 在对采集的用户图像数据进行存储时, 可先将前 一幅或前几幅与预先存储的用户图像绑定信息相匹配的用户图像数据完全保存, 对于 后续采集的用户图像数据, 则以前面保存的用户图像数据为基础样本, 仅提取后续用 户图像数据中的面部特征数据进行存储, 剩下的其他数据则可丢弃。 例如可仅存储脸 轮廓、 发髻、 眉毛、 鼻尖、 眼线、 嘴唇线等面部数据。 这种存储方式可以减少后续面 部数据的采集量。 上述对当前用户是否是与移动终端绑定的用户本人的判断过程是在语音数据的采 集过程中进行的。 在本实施中, 也可在上述步骤 502、 步骤 603和步骤 702之前执行 对用户是否是与移动终端绑定的用户本人的判断。 例如可通过用户名密码等各种认证 方式进行判断。例如, 可要求用户在使用该移动终端之前, 要求用户输入用户名密码, 只有输入正确的用户名密码后才能认证通过, 此时则可判定当前的用户就是与移动终 端绑定的用户。 在本实施例中, 对于用户运动信息的采集也可在自动模式下, 由移动终端自动检 
测开启采集过程; 也可由用户通过向移动终端发送外部运动信息采集指令开启采集过 程。 其主要用来搜集用户日常的行动特征, 例如步行速度、 用户路径信息等。 后台计 算平台可结合用户的运动信息以及用户的语音特征信息和 /或用户图像信息作为用户 在不同环境状态下身体和情绪反应的依据。 例如, 当后台计算平台发现在晚间时刻, 用户的步速突然由均匀变位加速, 并且声频剧烈变化, 则可判定为用户可能遇到危险 且处于摆脱中。 在本实施例中, 具体可通过移动终端的加速度传感器、 地磁传感器和 /或陀螺仪, 并可进一步结合 GPS计算出用户的运动范围、 速度、 轨迹等关键数据。 对于获取到的 用户运动信息可直接进行存储。 在本实施例中, 对于移动终端所采集到的用户信息, 为了能及时、 准确的传送到 后台计算平台, 尽量减少数据传输量, 降低网络负荷, 本实施例对用户信息的传送具 体可采用以下方式: 对用户信息包括的用户特征信息, 提取所述用户特征信息包含的特征分量数据; 将提取出的特征分量数据发送给后台计算平台。 下面分别对用户语音信息和用户图像信息的特征分量的提取过程进行说明。 对于用户语音信息, 可以通过带通滤波器将语音信号外的噪声滤除, 然后送入信 息特征提取子模块, 由信息特征提取子模块从该语音信号中提取特征分量, 将其他冗 余数据丢掉; 然后由后台计算平台根据这些特征分量和预先存储的该用户的典型音频 信息进行重构。 此处的典型音频信息可以是在初始化时引导用户输入的, 例如可以引 导用户念若干关键发音的词、 句子等。 获取到用户语音信息的特征分量后, 可将这些 特征分量进行存储, 然后启动语音编码。 为了确保特征信息尽可能的完整, 本实施例 中的语音编码采用波形编码和参数编码结合的混合编码方式, 优点是: 混合编码包括 了若干语音特征参量又包括了部分波形编码信息, 达到了波形编码的高质量和参量编 码的低速率的优点。 压缩结束后, 将压缩后的语音数据存入数据传送队列。 本实施例中, 对于用户图像信息的特征分量的提取过程为: 通过信息特征提取子 模块将采集到的用户图线信息与预先存储的标准面部图像进行对比后获取特征分量, 然后对提取的特征分量采用 H.264标准进行压缩, 这种压缩方式的优点是: 和相关技 术中其他视频编码比较, 相同带宽下图像质量更优质, 能够极大限度保证面部特征信 息的完整。 而且 H.264的容错性很强, 解决了在不稳定网络环境下发生丢包等错误, 非常适合无线传输环境。 压缩结束后, 将压缩后的图像数据存入数据传送队列。 在本实施例中, 对用户信息包括的用户运动信息, 由于用户运动信息的数据量比 较少, 因此移动终端可以才采集之后直接实时的发送给后台计算平台。 也可以将其存 储在数据传送队列与语音数据和图像数据统一发送。 在本实施例中, 传送队列的主要作用是根据当前移动终端网络环境来确定数据的 发送规则。 具体可根据移动终端当前的无线工作状态来调度数据的发送, 这样可避免 给无线网络带来大的压力。 例如, 当移动终端检测到当前的网络环境为 WIFI环境时, 传送队列确定的数据发送规则是直接发送全部数据; 当移动终端检测到当前的网络环 境是 3G网络时, 传送队列确定的数据发送规则是将数据打包成小于等于 200KB的小 包, 全天分不同时间段发送, 也可以由用户控制发送时间。 避开对用户正常无线上网 的影响。 实施例二: 为了更好地理解本发明, 下面结合移动终端具体结构对本发明做进一步的说明。 请参见图 8所示, 该图所示为本实施例提供的移动终端结构示意图, 其包括: 处 理模块、 数据采集模块和发送模块; 处理模块设置为控制数据采集模块采集用户信息, 并通过发送模块发送给后台计 算平台; 所述用户信息包括用户特征信息和 /或用户运动信息。 本实施例中的用户特征信息是指用户自身的特征信息, 例如可以包括用户语音信 息和 /或用户面部信息; 本实施例中的用户运动信息是指用户携带移动终端时的步行速 度信息和 /或用户路径信息;此处的步行速度包括用户步行的平均步行速度信息和 /或用 户在不同的时间段的步行速度信息; 本实施例中, 移动终端采集用户信息的模式可以设置为自动模式和手动模块。 其 中, 在自动模式下, 移动终端可以自动检测自身当前的状态, 并根据检测结果决定是 否启动用户信息采集以及具体采集哪种用户信息。 在手动模式下, 移动终端则主要基 于外部的数据采集指令启动对应用户信息的采集过程。 下面对各种用户信息的采集过 程进行举例说明。 在自动模式下, 移动终端采集用户语音信息的过程包括: 移动终端的处理模块判断当前是否处于语音采集模式, 具体包括判断移动终端是 否处于通话状态和 /或是否处于外置放置状态; 如是, 
开启移动终端的数据采集模块所 包括的语音数据采集子模块采集用户语音信息; 直到移动终端不处于语音采集模式。 此处的语音数据采集子模块具体可为麦克风。 本实施例中的通话状态可以是常规的语音和 /或视频通话, 也可以是利通第三方软 件 (例如微信、 QQ、 陌陌) 进行的语音和 /或视频通话; 本实施例中的外置放置状态 是指移动终端被取出并被放置的状态, 例如桌面放置状态, 此时虽然用户可能未使用 该移动终端进行通话, 但仍可开启移动终端的语音数据采集子模块对用户平常说话的 语音信息进行采集; 本实施例中, 具体可采用移动终端的各种接近传感器判断移动终 端当前是否处于外置放置状态。 在手动模式下, 移动终端采集用户语音信息的过程包括: 移动终端的处理模块接收外部语音采集指令, 该外部语音采集指令开启语音数据 采集子模块采集用户语音信息; 直到接收到外部的语音采集结束指令, 或判断移动终 端当前进行的通话结束或当前不再处于外置放置状态等。 在本实施例中, 出于对用户自身数据有效性的判决, 可在处理模块开启语音数据 采集子模块采集用户语音信息之前, 在移动终端中预先存储一段用户语音绑定信息, 也即与该移动终端绑定的用户本人的录音。 然后在语音信息采集过程中, 处理模块将 语音数据采集子模块采集的至少一段用户语音信息与预先存储的用户语音绑定信息进 行比较, 判断二者是否匹配, 如匹配, 才存储该段用户语音信息; 否则, 丢弃该段用 户语音信息。 在本实施例中, 可根据具体的应用场景设定需要进行匹配的用户语音信 息的量。 例如, 可设定只需要对语音采集的前段语音数据进行匹配, 只要匹配到其中 一段语音数据与预先存储的用户语音绑定信息相匹配时, 则不再对后续采集的用户语 音信息进行匹配。 或为了提高语音数据采集的可靠性, 可在对采集的所有用户语音数 据按照上述过程进行匹配, 仅存储与预先存储的用户语音绑定信息相匹配的用户语音 信息, 而将不匹配的那部分数据丢弃。 在本实施例中, 移动终端的处理模块将用户语音信息与预先存储的用户语音绑定 信息进行比较之前, 还设置为判断该用户语音信息中是否包含有有效的人声数据 (人 声数据音频频率在 60HZ-2KHZ之间); 如包含, 才进行后续的匹配工作; 否则, 将该 段用户语音信息直接丢弃。 由于移动终端所采集的用户语音信息包括用户说话的语音 数据和背景音数据; 本实施例中的处理模块可通过音频噪声抑制模块先将背景噪声去 除, 然后通过音量阈值来判断是否存在有效的人声数据。 本实施例中, 也可根据具体 应用场景仅对语音采集的前段语音数据进行上述判断, 只要判断其中一段语音数据包 含有有效的人声数据, 则不再对后续采集的语音数据进行判断。 在本实施中, 处理模块还可设置为在开启语音数据采集子模块采集语音数据前, 还设置为直接判断当前用户是否是与移动终端绑定的用户本人。 例如处理模块可通过 用户名密码等各种认证方式进行判断。 例如, 可要求用户在使用该移动终端之前, 要 求用户输入用户名密码, 只有输入正确的用户名密码后才能认证通过, 此时则可判定 当前的用户就是与移动终端绑定的用户。 在自动模式下, 移动终端采集用户面部信息的过程包括: 移动终端的处理模块判断移动终端的显示屏当前是否面向于用户, 如是, 开启数 据采集模块包括的图像数据采集子模块采集用户面部信息; 此时的图像数据采集子模 块可为移动终端的前置摄像头。 或移动终端的处理模块判断移动终端的显示屏当前是否面向于用户, 且该显示屏 当前是否被点亮, 如是, 开启数据采集模块包括的图像数据采集子模块采集用户面部 信息。 移动终端的处理模块判断其显示器当前是否面向用户的方式有多种, 可根据具体 的情况进行选择。 例如, 对于设置陀螺仪的移动终端, 处理模块则可通过陀螺仪判断 移动终端的显示器当前是否面向用户; 处理模块还可结合加速度传感器和地磁传感器 来判断。 另外, 处理模块还可通过判断用户是否有接触或触摸显示器判断移动终端的 显示器当前是面向用户; 或者处理模块通过移动终端中的浏览器、 阅读器、 视频播放 器等应用软件的开启状态判断移动终端的显示器当前是面向用户。 另外, 处理模块具 体可根据显示屏的 LCD或 OLED等的工作状态判断显示屏当前是否被点亮。 在手动模式下, 移动终端采集用户面部信息的过程包括: 移动终端的处理模块接收外部图像采集指令, 根据该外部图像采集指令开启图像 数据采集子模块采集用户面部信息。 处理模块还可设置为开启图像数据采集子模块之前, 通过移动终端的光线检测子 模块 (例如光线传感器) 判断当前环境光线是否满足拍摄要求, 如满足, 才开启图像 数据采集子模块进行后续的拍摄步骤, 否则, 结束此次拍摄。 
出于对用户自身数据有效性的判决, 本实施例中还可在处理模块开启图像数据采 集子模块采集图像数据之前, 在移动终端中预先存储用户图像绑定信息 (可以是一张 本用户的标准照),然后在处理模块开启图像数据采集子模块采集用户图像信息的过程 中, 处理模块可将图像数据采集子模块采集的至少一幅用户图像信息与预先存储的用 户图像绑定信息进行比较, 判断二者是否匹配, 如是, 则存储该幅用户图像信息; 否 贝 U, 丢弃该幅用户图像信息。 本实施例中, 可根据具体的应用场景设定需要进行匹配 的用户图像信息的量。 例如, 可设定只需要对图像采集的前一幅或几幅图像数据进行 匹配, 只要匹配到其中一幅图像数据与预先存储的用户图像绑定信息相匹配时, 则不 再对后续采集的用户语音信息进行匹配。 值得注意的是, 在本实施例中, 处理模块在对采集的用户图像数据进行存储时, 可先将前一幅或前几幅与预先存储的用户图像绑定信息相匹配的用户图像数据完全保 存, 对于后续采集的用户图像数据, 则以前面保存的用户图像数据为基础样本, 仅提 取后续用户图像数据中的面部特征数据进行存储, 剩下的其他数据则可丢弃。 例如可 仅存储脸轮廓、 发髻、 眉毛、 鼻尖、 眼线、 嘴唇线等面部数据。 这种存储方式可以减 少后续面部数据的采集量。 上述对当前用户是否是与移动终端绑定的用户本人的判断过程是在语音数据的采 集过程中进行的。在本实施中,处理模块还可设置为在开启图像数据采集子模块之前, 执行对用户是否是与移动终端绑定的用户本人的判断。 例如处理模块可通过用户名密 码等各种认证方式进行判断。 例如, 可要求用户在使用该移动终端之前, 要求用户输 入用户名密码, 只有输入正确的用户名密码后才能认证通过, 此时则可判定当前的用 户就是与移动终端绑定的用户。 在本实施例中, 对于用户运动信息的采集也可在自动模式下, 由移动终端的处理 模块自动检测开启采集过程; 也可由用户通过向移动终端的处理模块发送外部运动信 息采集指令开启采集过程。 其主要用来搜集用户日常的行动特征, 例如步行速度、 用 户路径信息等。后台计算平台可结合用户的运动信息以及用户的语音特征信息和 /或用 户图像信息作为用户在不同环境状态下身体和情绪反应的依据。 例如, 当后台计算平 台发现在晚间时刻, 用户的步速突然由均匀变位加速, 并且声频剧烈变化, 则可判定 为用户可能遇到危险且处于摆脱中。 在本实施例中, 处理模块具体可通过移动终端的加速度传感器、 地磁传感器和 / 或陀螺仪, 并可进一步结合 GPS计算出用户的运动范围、 速度、 轨迹等关键数据。 对 于获取到的用户运动信息可直接进行存储。 在本实施例中, 对于移动终端所采集到的用户信息, 为了能及时、 准确的传送到 后台计算平台, 尽量减少数据传输量, 降低网络负荷, 本实施例对用户信息的传送具 体可采用以下方式: 处理模块对用户信息包括的用户特征信息, 通过其包括的信息特征提取模块提取 所述用户特征信息包含的特征分量数据; 将提取出的特征分量数据发送给后台计算平 台 在实际应用中, 处理模块可以是具有处理功能的处理器。 下面分别对处理模块对用户语音信息和用户图像信息的特征分量的提取过程进行 说明。 对于用户语音信息, 处理器可以通过带通滤波器将语音信号外的噪声滤除, 然后 通过其包括的信怠特征提取子模块从该语咅信弓中提取特征分量, 将其他冗余数据丢 掉; 然后由后台计算平台根据这些特征分量和预先存储的该用户的典型音频信息进行 重构。 此处的典型音频信息可以是在初始化时引导用户输入的, 例如可以引导用户念 若干关键发音的词、 句子等。 获取到用户语音信息的特征分量后, 可将这些特征分量 进行存储, 然后启动语音编码。 为了确保特征信息尽可能的完整, 本实施例中的语音 编码采用波形编码和参数编码结合的混合编码方式, 优点是: 混合编码包括了若干语 音特征参量又包括了部分波形编码信息, 达到了波形编码的高质量和参量编码的低速 率的优点。 压缩结束后, 将压缩后的语音数据存入数据传送队列。 本实施例中, 对于用户图像信息的特征分量的提取过程为: 处理器通过其包括的 信息特征提取子模块将采集到的用户图线信息与预先存储的标准面部图像进行对比后 获取特征分量, 然后对提取的特征分量采用 H.264标准进行压缩, 这种压缩方式的优 点是: 和相关技术中其他视频编码比较, 相同带宽下图像质量更优质, 能够极大限度 保证面部特征信息的完整。 而且 H.264的容错性很强, 解决了在不稳定网络环境下发 生丢包等错误, 非常适合无线传输环境。 压缩结束后, 将压缩后的图像数据存入数据 传送队列。 在本实施例中, 对用户信息包括的用户运动信息, 由于用户运动信息的数据量比 较少, 
因此移动终端可以才采集之后直接实时的发送给后台计算平台。 也可以将其存 储在数据传送队列与语音数据和图像数据统一发送。 在本实施例中, 传送队列的主要作用是根据当前移动终端网络环境来确定数据的 发送规则。 具体可根据移动终端当前的无线工作状态来调度数据的发送, 这样可避免 给无线网络带来大的压力。 例如, 当处理器检测到移动终端当前的网络环境为 WIFI 环境时, 传送队列确定的数据发送规则是直接发送全部数据; 当处理器检测到移动终 端当前的网络环境是 3G网络时, 传送队列确定的数据发送规则是将数据打包成小于 等于 200KB的小包, 全天分不同时间段发送, 也可以由用户控制发送时间。 避开对用 户正常无线上网的影响。 以上内容是结合具体的实施方式对本发明所作的进一步详细说明, 不能认定本发 明的具体实施只局限于这些说明。 对于本发明所属技术领域的普通技术人员来说, 在 不脱离本发明构思的前提下, 还可以做出若干简单推演或替换, 都应当视为属于本发 明的保护范围。

Claims

Claims
1. A user information acquisition method, comprising: a mobile terminal collecting user information through its own data collection module, wherein the user information includes user feature information and/or user motion information; and the mobile terminal sending the collected user information to a background computing platform.
2. The user information acquisition method according to claim 1, wherein, when the user information includes the user feature information, the user feature information includes user voice information and/or user facial information.
3. The user information acquisition method according to claim 2, wherein, when the user feature information includes the user voice information, the mobile terminal collecting the user voice information through a voice data collection sub-module of the data collection module comprises: the mobile terminal judging whether it has currently entered a voice collection mode and, if so, turning on the voice data collection sub-module to collect the user voice information, wherein judging whether it has entered the voice collection mode includes judging whether it is in a call state and/or in an externally placed state; or, the mobile terminal receiving an external voice collection instruction and, according to the external voice collection instruction, turning on the voice data collection sub-module to collect the user voice information.
4. The user information acquisition method according to claim 2, wherein, when the user feature information includes the user facial information, the mobile terminal collecting the user facial information through an image data collection sub-module of the data collection module comprises: the mobile terminal judging whether its display screen currently faces the user and, if so, turning on the image data collection sub-module to collect the user facial information; or, the mobile terminal judging whether its display screen currently faces the user and whether the display screen is currently lit and, if so, turning on the image data collection sub-module to collect the user facial information; or, the mobile terminal receiving an external image collection instruction and, according to the external image collection instruction, turning on the image data collection sub-module to collect the user facial information.
5. The user information acquisition method according to claim 3, wherein the mobile terminal turning on the voice data collection sub-module to collect the user voice information comprises: the mobile terminal comparing at least one segment of user voice information collected by the voice data collection sub-module with pre-stored user voice binding information to judge whether the two match and, if so, storing the user voice information.
6. The user information acquisition method according to claim 5, wherein, before the mobile terminal compares the user voice information with the user voice binding information, the method further comprises: judging whether the user voice information contains valid human voice data.
7. The user information acquisition method according to claim 4, wherein the mobile terminal turning on the image data collection sub-module to collect the user image information comprises: the mobile terminal comparing at least one piece of user image information collected by the image data collection sub-module with pre-stored user image binding information to judge whether the two match and, if so, storing the user image information.
8. The user information acquisition method according to any one of claims 1-7, wherein, when the user information includes the user feature information, before acquiring the user feature information, the method further comprises: the mobile terminal judging whether the current user is the user bound to it.
9. The user information acquisition method according to any one of claims 1-7, wherein, when the user information includes the user motion information, the user motion information includes user walking speed information and/or user path information.
10. The user information acquisition method according to any one of claims 1-7, wherein the mobile terminal sending the collected user information to the background computing platform comprises: when the user information includes the user feature information, extracting feature component data contained in the user feature information; and sending the extracted feature component data to the background computing platform.
11. The user information acquisition method according to claim 10, wherein sending the extracted feature component data to the background computing platform comprises: storing the extracted feature component data in a data transmission queue; determining, according to the current network environment, a sending rule for the data in the data transmission queue; and sending the data in the data transmission queue to the background computing platform according to the rule.
12. A mobile terminal, comprising a processing module, a data collection module and a sending module, wherein the processing module is configured to control the data collection module to collect user information and send it to a background computing platform through the sending module, and the user information includes user feature information and/or user motion information.
13. The mobile terminal according to claim 12, wherein, when the user information includes the user feature information, the user feature information includes user voice information and/or user facial information.
14. The mobile terminal according to claim 13, wherein the data collection module includes a voice data collection sub-module and, when the user feature information includes the user voice information, the processing module controlling the data collection module to collect the user voice information comprises: the processing module judging whether the mobile terminal has currently entered a voice collection mode and, if so, turning on the voice data collection sub-module to collect the user voice information, wherein judging whether the mobile terminal has entered the voice collection mode includes judging whether the mobile terminal is in a call state and/or in an externally placed state; or, the processing module receiving an external voice collection instruction and, according to the external voice collection instruction, turning on the voice data collection sub-module to collect the user voice information.
15. The mobile terminal according to claim 13, wherein the data collection module includes an image data collection sub-module and, when the user feature information includes the user facial information, the processing module controlling the data collection module to collect the user facial information comprises: the processing module judging whether the display screen of the mobile terminal currently faces the user and, if so, turning on the image data collection sub-module to collect the user facial information; or, the processing module judging whether the display screen of the mobile terminal currently faces the user and whether the display screen is currently lit and, if so, turning on the image data collection sub-module to collect the user facial information; or, the processing module receiving an external image collection instruction and, according to the external image collection instruction, turning on the image data collection sub-module to collect the user facial information.
16. The mobile terminal according to claim 14, wherein the processing module turning on the voice data collection sub-module to collect the user voice information comprises: the processing module comparing at least one segment of user voice information collected by the voice data collection sub-module with pre-stored user voice binding information to judge whether the two match and, if so, storing the user voice information.
17. The mobile terminal according to claim 16, wherein the processing module is further configured to judge, before comparing the user voice information with the user voice binding information, whether the user voice information contains valid human voice data.
18. The mobile terminal according to claim 15, wherein the processing module turning on the image data collection sub-module to collect the user image information comprises: the processing module comparing at least one piece of user image information collected by the image data collection sub-module with pre-stored user image binding information to judge whether the two match and, if so, storing the user image information.
19. The mobile terminal according to any one of claims 12-18, wherein the processing module is further configured to judge, when the user information includes the user feature information and before controlling the data collection module to collect the user feature information, whether the current user is the user bound to the mobile terminal.
20. The mobile terminal according to any one of claims 12-18, wherein, when the user information includes the user motion information, the user motion information includes user walking speed information and/or user path information.
21. The mobile terminal according to any one of claims 12-18, wherein the processing module sending the user information to the background computing platform through the sending module comprises: the processing module, when judging that the user information includes the user feature information, extracting feature component data contained in the user feature information, and sending the extracted feature component data to the background computing platform through the sending module.
PCT/CN2014/078089 2013-12-31 2014-05-22 Method for acquiring user information and mobile terminal WO2015100923A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310753136.5 2013-12-31
CN201310753136.5A CN104754112A (zh) 2013-12-31 2013-12-31 Method for acquiring user information and mobile terminal

Publications (1)

Publication Number Publication Date
WO2015100923A1 true WO2015100923A1 (zh) 2015-07-09

Family

ID=53493094

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/078089 WO2015100923A1 (zh) 2013-12-31 2014-05-22 用户信息获取方法及移动终端

Country Status (2)

Country Link
CN (1) CN104754112A (zh)
WO (1) WO2015100923A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106102140A (zh) * 2016-05-27 2016-11-09 北京灵龄科技有限责任公司 Power consumption optimization method and apparatus for wireless sensors
CN106383648A (zh) * 2015-07-27 2017-02-08 青岛海信电器股份有限公司 Method and apparatus for voice display on an intelligent terminal

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412312A (zh) * 2016-10-19 2017-02-15 北京奇虎科技有限公司 Method and system for automatically waking the camera function of an intelligent terminal, and intelligent terminal
CN106648652A (zh) * 2016-12-15 2017-05-10 惠州Tcl移动通信有限公司 Method and system for a mobile terminal to automatically set the lock-screen interface
CN107342079A (zh) * 2017-07-05 2017-11-10 谌勋 Internet-based system for collecting real human voice
CN107957908A (zh) * 2017-11-20 2018-04-24 深圳创维数字技术有限公司 Microphone sharing method, apparatus, computer device, and storage medium
CN109875463A (zh) * 2019-03-04 2019-06-14 深圳市银星智能科技股份有限公司 Cleaning robot and cleaning method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102790732A (zh) * 2012-07-18 2012-11-21 上海量明科技发展有限公司 Status matching method, client, and system in instant messaging
CN103186326A (zh) * 2011-12-27 2013-07-03 联想(北京)有限公司 Application object operation method and electronic device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2011114620A1 (ja) * 2010-03-16 2013-06-27 日本電気株式会社 Interest level measurement system
CN101895610B (zh) * 2010-08-03 2014-05-07 杭州华三通信技术有限公司 Telephone call method and apparatus based on speech recognition
CN102592116A (zh) * 2011-12-27 2012-07-18 Tcl集团股份有限公司 Cloud computing application method and system, terminal device, and cloud computing platform
KR101917070B1 (ko) * 2012-06-20 2018-11-08 엘지전자 주식회사 Mobile terminal, server, system, and method for controlling the mobile terminal and server
CN102882936B (zh) * 2012-09-06 2015-11-25 百度在线网络技术(北京)有限公司 Cloud push method, system, and apparatus
CN103092348A (zh) * 2013-01-24 2013-05-08 北京捷讯华泰科技有限公司 Method for playing advertisements on a mobile terminal based on user behavior
CN103414720A (zh) * 2013-08-19 2013-11-27 苏州跨界软件科技有限公司 Interactive 3D voice service method
CN103428293A (zh) * 2013-08-19 2013-12-04 苏州跨界软件科技有限公司 Interactive 3D voice service system


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106383648A (zh) * 2015-07-27 2017-02-08 青岛海信电器股份有限公司 Method and apparatus for voice display on an intelligent terminal
CN106102140A (zh) * 2016-05-27 2016-11-09 北京灵龄科技有限责任公司 Power consumption optimization method and apparatus for wireless sensors
CN106102140B (zh) * 2016-05-27 2022-03-22 集道成科技(北京)有限公司 Power consumption optimization method and apparatus for wireless sensors

Also Published As

Publication number Publication date
CN104754112A (zh) 2015-07-01

Similar Documents

Publication Publication Date Title
WO2015100923A1 (zh) Method for acquiring user information and mobile terminal
WO2016049898A1 (zh) Identity authentication method, apparatus, and user equipment
CN107832784B (zh) Image beautification method and mobile terminal
JP2021516786A (ja) Method, apparatus, and computer program for separating the voices of multiple speakers
CN105654033B (zh) Face image verification method and apparatus
CN110865705B (zh) Multimodal fusion communication method, apparatus, head-mounted device, and storage medium
CN111131601B (zh) Audio control method, electronic device, chip, and computer storage medium
WO2018133282A1 (zh) Dynamic recognition method and terminal device
CN107623778B (zh) Incoming call answering method and mobile terminal
WO2022193989A1 (zh) Operation method and apparatus for an electronic device, and electronic device
CN110177242B (zh) Video call method based on a wearable device, and wearable device
CN109819167B (zh) Image processing method and apparatus, and mobile terminal
CN108229420A (zh) Face recognition method and mobile terminal
WO2019024718A1 (zh) Anti-counterfeiting processing method, anti-counterfeiting processing apparatus, and electronic device
CN108459806A (zh) Terminal control method, terminal, and computer-readable storage medium
CN110177240B (zh) Video call method for a wearable device, and wearable device
CN109102813B (zh) Voiceprint recognition method and apparatus, electronic device, and storage medium
CN112735388A (zh) Network model training method, speech recognition processing method, and related devices
CN111698600A (zh) Processing execution method and apparatus, and readable medium
CN110175254B (zh) Photo classification and storage method, and wearable device
CN112367432B (zh) Data viewing method based on double verification
CN108446665B (zh) Face recognition method and mobile terminal
CN108492311A (zh) Method and apparatus for motion correction via an electronic device
EP3255927B1 (en) Method and device for accessing Wi-Fi network
CN110399780B (zh) Face detection method and apparatus, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14877177

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14877177

Country of ref document: EP

Kind code of ref document: A1