WO2019205974A1 - State recognition method for electronic device and electronic device - Google Patents

State recognition method for electronic device and electronic device

Info

Publication number
WO2019205974A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
voiceprint feature
sound signal
current
voiceprint
Prior art date
Application number
PCT/CN2019/082684
Other languages
English (en)
French (fr)
Inventor
王剑平
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2019205974A1 publication Critical patent/WO2019205974A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification
    • G10L17/26Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725Cordless telephones

Definitions

  • the present disclosure relates to, but is not limited to, the field of voiceprint recognition.
  • An existing electronic device (for example, a mobile phone) may have two or more display screens, and any two display screens on the electronic device may be connected through a connecting member such as a rotating shaft. When the angle between any two display screens of the electronic device is different, the electronic device can be considered to be in different presentation states. To identify the different presentation states, a digital Hall sensor can be mounted on the connecting component (such as the rotating shaft) and used to identify the angle between the display screens, thereby determining the presentation state of the electronic device; further, the display mode of the display screens of the electronic device can be switched according to the presentation state of the electronic device.
  • In one aspect, the present disclosure provides a state recognition method for an electronic device, the electronic device including N display screens connected to each other by a connecting component and a microphone configured to collect a sound signal emitted by the connecting component, where N is a natural number greater than 1. The method includes: acquiring a current sound signal collected by the microphone and acquiring a voiceprint feature of the current sound signal; matching the voiceprint feature of the current sound signal against each voiceprint feature in a preset voiceprint feature set, where each voiceprint feature in the set corresponds to one presentation state of the electronic device and the presentation state indicates the relative position between the N display screens; and, when the voiceprint feature of the current sound signal matches the i-th voiceprint feature in the preset voiceprint feature set, determining that the current presentation state of the electronic device is the presentation state of the electronic device corresponding to the i-th voiceprint feature, where i is an integer greater than or equal to 1.
  • In another aspect, the present disclosure also provides an electronic device including a memory, a processor, N display screens connected to each other through a connecting component, and a microphone configured to collect a sound signal emitted by the connecting component, where N is a natural number greater than 1; the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to implement the steps of the methods described herein.
  • the present disclosure also provides a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the methods described herein.
  • FIG. 1 is a schematic diagram of three presentation states of a dual-screen mobile phone according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram showing the relationship between the angle and the magnetic flux of the digital Hall sensor before and after high temperature degaussing according to an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of a rotating shaft of an electronic device according to an embodiment of the present disclosure
  • FIG. 4 is a flowchart of a state recognition method of an electronic device according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of a presentation state of an electronic device corresponding to three sample sound signals collected in advance according to an embodiment of the present disclosure
  • FIG. 6 is a flow chart of an exemplary method of constructing a voiceprint feature set according to an embodiment of the present disclosure
  • FIG. 7 is another flowchart of a state recognition method of an electronic device according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of a voiceprint recognition system according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of another voiceprint recognition system according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of hardware of an electronic device according to an embodiment of the present disclosure.
  • As noted above, an existing electronic device (for example, a mobile phone) may have two or more display screens, and any two display screens on the electronic device may be connected through a connecting member such as a rotating shaft. When the angle between any two display screens is different, the electronic device can be considered to be in different presentation states. To identify these presentation states, a digital Hall sensor can be mounted on the connecting component (such as the rotating shaft) and used to identify the angle between the display screens, thereby determining the presentation state of the electronic device; further, the display mode of the display screens can be switched according to that presentation state.
  • However, digital Hall sensors are prone to failure during use, for example, high-temperature degaussing, a loosened shaft, or a change in the distance between the magnet and the digital Hall sensor caused by a light drop. These failures may change the magnetic flux detected by the digital Hall sensor and may therefore lead to inaccurate recognition of the presentation state of an electronic device provided with multiple screens.
  • As described above, one way to identify the presentation state of a multi-screen electronic device is to identify the angle between the display screens with a digital Hall sensor and thereby determine the presentation state; the following description takes a dual-screen mobile phone as an example.
  • The two display screens of a dual-screen mobile phone can be connected by a rotating shaft. It can be understood that the dual-screen mobile phone can present a folded single-screen state, a fully expanded dual-screen state, and so on; in actual application, the dual-screen mobile phone can be controlled to enter a corresponding display mode according to its presentation state.
  • FIG. 1 is a schematic diagram of three presentation states of a dual-screen mobile phone according to an embodiment of the present disclosure.
  • 0° ⁇ angle ⁇ 180 indicates that the digital Hall sensor detects that the angle between the two display screens is greater than 0° and less than 180°, and the dual-screen mobile phone is in a non-fully expanded dual-screen state
  • When switching the display mode of the dual-screen mobile phone, the digital Hall sensor can be used for the judgment. When the dual-screen mobile phone leaves the factory, the digital Hall sensor can be calibrated at 30° and 150°, which serve as the trigger thresholds for the 0° and 180° states, respectively. Based on the three typical display-screen angles shown in FIG. 1, in one example the dual-screen mobile phone can be set to the following four display modes: single A display mode, large A display mode, A|B display mode, and A|A display mode.
  • Here, the single A display mode indicates that content is displayed on only one display screen while the other does not work; the large A display mode indicates that the two display screens act together as one large display screen; the A|B display mode indicates that the two display screens display different contents, for example, one shows the interface of application A and the other shows the interface of application B; and the A|A display mode indicates that the two display screens display the same content, for example, both show the interface of application A.
  • In one example, the dual-screen mobile phone can be controlled to enter the corresponding display mode according to the angle between the two screens detected by the digital Hall sensor. For example, when the angle between the screens is 0°, the phone is only allowed to work in the single A display mode; when the angle is in the (30°, 180°] interval, the phone is allowed to work in the A|A display mode; and when the angle is in the (150°, 180°] interval, the phone is allowed to work in the A|B and large A display modes.
  • Digital Hall sensors may suffer various failures during use, which can cause abnormal changes in the magnetic flux measured at a given angle; for example, high-temperature degaussing may cause such an abnormal change.
  • FIG. 2 is a schematic diagram showing the relationship between the angle and the magnetic flux of the digital Hall sensor before and after high temperature degaussing according to the embodiment of the present disclosure.
  • As shown in FIG. 2, the horizontal axis represents the angle between the two display screens and the vertical axis represents the magnetic flux; it can be seen that, before and after high-temperature degaussing, the magnetic flux corresponding to the same angle changes.
  • Once the digital Hall sensor fails, the corresponding display function may become abnormal. The electronic device fitted with the digital Hall sensor must then be repaired, for example by recalibrating it at the calibration angles (30°–150°); this increases the user's cost and degrades the user experience.
  • the present disclosure particularly provides a state recognition method, an electronic device, and a computer readable storage medium for an electronic device that substantially obviate one or more of the problems due to the limitations and disadvantages of the related art.
  • the present disclosure provides a state recognition method for an electronic device.
  • the electronic device may include N display screens connected to each other by a connecting member, and a microphone configured to collect a sound signal emitted by the connecting member, N being a natural number greater than one.
  • Here, when N equals 2, the electronic device may be a dual-screen electronic device, and when N is greater than 2, the electronic device has more than two display screens; that is, the embodiments of the present disclosure are suitable not only for dual-screen electronic devices but also for electronic devices with more than two display screens.
  • Here, the connecting member may be a rotating shaft or the like; that is, on the electronic device, the display screens may be hinged to each other by a rotating shaft, and the specific structure of the rotating shaft is not limited in the embodiments of the present disclosure.
  • FIG. 3 is a schematic structural diagram of a rotating shaft of an electronic device according to an embodiment of the present disclosure.
  • As shown in FIG. 3, the rotating shaft may include an upper cam 301 and a lower cam 302 that cooperate with each other, and may further include a spring 303 and the like. It can be understood that the protrusion of the upper cam cooperates with the groove of the lower cam; when the rotating shaft rotates, the emitted sound, such as the collision of the upper and lower cams, can be collected by the microphone. Moreover, the rotating shaft emits different sounds at different angles between the two screens; thus, by detecting the sound emitted by the rotating shaft, the angle between the display screens can be detected and the presentation state of the electronic device determined.
  • For example, when the electronic device is a dual-screen mobile phone, it is possible to detect whether the phone is in the folded or the expanded state by detecting a sound emitted by the rotating shaft (for example, the sound of the shaft rebounding), thereby triggering a switch of the display mode.
  • FIG. 4 is a flowchart of a state recognition method of an electronic device according to an embodiment of the present disclosure. As shown in FIG. 4, in some embodiments, the process can include steps 401 through 403.
  • In step 401, the current sound signal collected by the microphone is acquired, and the voiceprint feature of the current sound signal is acquired.
  • In practice, in order to improve the quality of the collected sound signal and the matching recognition rate, the microphone may be disposed in the vicinity of the connecting member; in one embodiment, the sound-collecting end of the microphone faces the connecting member, so that interference from environmental noise can be largely avoided. In one example, the connecting member is a rotating shaft and an angled microphone faces the shaft, so that the microphone receives essentially only the sound of the shaft.
  • Illustratively, the sound collected by the microphone may be the collision sound of the upper cam and the lower cam, or a specific sound emitted by a component such as a gear.
  • In step 402, the voiceprint feature of the current sound signal is matched with each voiceprint feature in the preset voiceprint feature set; each voiceprint feature in the set corresponds to one presentation state of the electronic device, and the presentation state of the electronic device indicates the relative position between the N display screens.
  • Here, the relative position between the N display screens may include the relative position presented when any two of the N display screens are folded or unfolded; illustratively, when N equals 2, the relative position between the N display screens may be the relative position presented when the two display screens are folded together or unfolded.
  • In practice, before the sound signal collected by the microphone is acquired, the sample sound signals collected by the microphone when the electronic device is in a plurality of different presentation states may be acquired, and the voiceprint feature of each sample sound signal obtained; the voiceprint feature set is then constructed using the acquired voiceprint features of the various sample sound signals (that is, by extracting the voiceprint feature parameters from the sample sound signals).
  • the acquired voiceprint features of the various sample sound signals may be presented in the form of a voiceprint analog signal.
  • That is, the microphone can be used in advance to capture the sound signals emitted by the connecting component when the electronic device is in a plurality of different presentation states, thereby learning the sample sound signals emitted by the connecting component in those states; for example, when the connecting component is the rotating shaft of a dual-screen mobile phone, the sounds of the shaft as the phone is unfolded and folded can be learned.
  • In one embodiment, the presentation states of the electronic device corresponding to the acquired sample sound signals include: the angle between any two of the N display screens is P degrees, and the angle between any two of the N display screens is Q degrees, where 0 ≤ P ≤ 180, 0 ≤ Q ≤ 180, and P and Q are two different values.
  • In one example, P and Q take the values 0 and 180; that is, the sound emitted by the connecting component when the angle between the two display screens is 0 degrees and 180 degrees is collected in advance. In another embodiment, the sound emitted by the connecting component at other angles between the two display screens (neither 0 nor 180 degrees) may also be collected.
  • In particular, when the connecting member is a rotating shaft, the design of the upper and lower cams has certain tolerances between different shafts and needs to be calibrated on the production line; that is, the sound signals of the rotating shaft at display-screen angles of 0 degrees and 180 degrees are feature-extracted and encoded to obtain the voiceprint templates for the different states.
  • the voiceprint feature set can be stored using a memory in the electronic device.
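As a concrete illustration of how such a voiceprint feature set could be constructed and stored, the following Python sketch builds one template per presentation state from pre-collected hinge-sound samples. It is only a minimal sketch under assumed names: the band-energy feature, the state labels, and the helper functions are illustrative choices, not the feature extraction or encoding specified by the disclosure.

```python
import numpy as np

def extract_voiceprint_feature(signal, n_bands=32):
    # Toy voiceprint feature: log mean magnitude in n_bands frequency bands of
    # the windowed hinge-sound recording (a stand-in for real feature value
    # extraction such as MFCCs).
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    bands = np.array_split(spectrum, n_bands)
    return np.log(np.array([band.mean() for band in bands]) + 1e-10)

def build_voiceprint_feature_set(sample_signals):
    # sample_signals maps a presentation-state name to a list of hinge-sound
    # recordings captured in that state; each template is the mean feature
    # vector over the samples for that state.
    return {
        state: np.mean([extract_voiceprint_feature(s) for s in signals], axis=0)
        for state, signals in sample_signals.items()
    }

# Example with the three presentation states of Fig. 5 (placeholder noise
# stands in for real recordings of the rotating shaft).
samples = {
    "folded_single_screen":    [np.random.randn(4096) for _ in range(3)],
    "partially_expanded_dual": [np.random.randn(4096) for _ in range(3)],
    "fully_expanded_dual":     [np.random.randn(4096) for _ in range(3)],
}
feature_set = build_voiceprint_feature_set(samples)  # kept in device memory
```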
  • FIG. 6 is a flow diagram of an exemplary method of constructing a voiceprint feature set in accordance with an embodiment of the present disclosure. As shown in FIG. 6, the flow may include step A1 and step A2.
  • In step A1, the sounds emitted by the connecting member when the angle between the two display screens is 0 degrees and 180 degrees, respectively, are collected.
  • In step A2, the voiceprint features of the sounds emitted by the connecting member when the angle between the two display screens is 0 degrees and 180 degrees are obtained by feature value extraction.
  • the resulting voiceprint feature may be a voiceprint analog signal.
  • Here, the voiceprint feature of the sound emitted by the connecting component when the angle between the two display screens is 0 degrees corresponds to the fully folded state of the two display screens; a display mode corresponding to this fully folded state (which may be called the 0-degree folded state) can also be set. Similarly, the voiceprint feature of the sound emitted by the connecting component when the angle is 180 degrees corresponds to the fully expanded state of the two display screens, and a display mode corresponding to this fully expanded state (which may be called the 180-degree expanded state) can also be set.
  • In one embodiment, matching two voiceprint features may consist of comparing them; in one example, the similarity of the two voiceprint features is obtained by a comparison operation, and when the similarity is greater than or equal to a first similarity threshold the two voiceprint features are considered to match; otherwise, they are considered not to match. In one example, the resulting comparison result can be presented as a digital signal.
  • the first similarity threshold may be set according to actual application requirements.
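One way to realize such a comparison is a similarity score between the two feature vectors checked against the first similarity threshold. The sketch below uses cosine similarity purely as an assumed comparison operation; the threshold value is likewise illustrative.

```python
import numpy as np

FIRST_SIMILARITY_THRESHOLD = 0.9  # illustrative value; tuned per application

def voiceprint_similarity(feature_a, feature_b):
    # Cosine similarity between two voiceprint feature vectors, rescaled to [0, 1].
    cos = np.dot(feature_a, feature_b) / (
        np.linalg.norm(feature_a) * np.linalg.norm(feature_b) + 1e-10)
    return 0.5 * (cos + 1.0)

def voiceprints_match(feature_a, feature_b, threshold=FIRST_SIMILARITY_THRESHOLD):
    # The two features are considered to match when the similarity reaches the
    # first similarity threshold; otherwise they are considered mismatched.
    return voiceprint_similarity(feature_a, feature_b) >= threshold
```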
  • In step 403, when the voiceprint feature of the current sound signal matches the i-th voiceprint feature in the preset voiceprint feature set, the current presentation state of the electronic device is determined to be the presentation state of the electronic device corresponding to the i-th voiceprint feature; i is an integer greater than or equal to 1.
  • Illustratively, when the electronic device is a dual-screen mobile phone, the preset voiceprint feature set may include three voiceprint features corresponding respectively to the three presentation states of the electronic device shown in FIG. 5; in this way, when the voiceprint feature of the current sound signal matches the first, second, or third voiceprint feature in the preset set, the current presentation state of the dual-screen mobile phone can be determined as the presentation state corresponding to that voiceprint feature (namely, the folded single-screen state, the non-fully expanded dual-screen state, or the fully expanded dual-screen state).
  • It should be noted that if the voiceprint feature of the current sound signal does not match any voiceprint feature in the preset voiceprint feature set, the process may return to step 401.
  • steps 401 to 403 can all be implemented by a processor in the electronic device.
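Putting steps 401 to 403 together, the recognition step can be sketched as a lookup over the preset feature set, reusing the illustrative helpers defined above (extract_voiceprint_feature, voiceprint_similarity); returning None corresponds to the no-match case, in which the flow simply goes back to step 401.

```python
def recognize_presentation_state(current_signal, feature_set,
                                 threshold=FIRST_SIMILARITY_THRESHOLD):
    # Step 401: extract the voiceprint feature of the current hinge sound.
    current = extract_voiceprint_feature(current_signal)
    best_state, best_similarity = None, 0.0
    # Step 402: match against every voiceprint feature in the preset set.
    for state, template in feature_set.items():
        similarity = voiceprint_similarity(current, template)
        if similarity >= threshold and similarity > best_similarity:
            best_state, best_similarity = state, similarity
    # Step 403: the matched template's state is the current presentation state.
    return best_state
```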
  • In one embodiment, after the current presentation state of the electronic device is determined, the operation mode of the electronic device corresponding to that presentation state may be determined according to the preset correspondence between presentation states of the electronic device and operation modes of the electronic device, and the electronic device is then controlled to operate in the determined operation mode.
  • That is, a corresponding operation mode of the electronic device may be set in advance for each presentation state of the electronic device; in actual implementation, the memory may also be used to store this preset correspondence between presentation states and operation modes.
  • Here, the operation mode of the electronic device includes, but is not limited to, displaying in accordance with a display mode, launching an application (APP), starting a specific function of the electronic device, exiting an application, unlocking, and the like.
  • the kind of the application is not limited, and for example, the application may be a music player, a video application, a schedule management software, or the like.
  • In one example, when the electronic device is a dual-screen mobile phone, the dual-screen mobile phone is controlled to operate in the single A display mode when the current presentation state is the folded single-screen state, and in the A|B display mode or the large A display mode when the current presentation state is the fully expanded dual-screen state.
  • In practice, the operation mode of the electronic device corresponding to its current presentation state may be determined by the processor in the electronic device according to the preset correspondence between presentation states and operation modes, and the electronic device is then controlled to operate according to the determined operation mode.
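The correspondence between presentation states and operation modes can be kept as a simple lookup table, as in the hypothetical sketch below; the mode names follow the dual-screen phone example, and display_controller.set_display_mode() is an assumed device-side interface rather than an API defined by the disclosure.

```python
# Hypothetical preset correspondence between presentation states and
# operation modes for the dual-screen phone example.
STATE_TO_OPERATION_MODE = {
    "folded_single_screen":    "single_A_display",
    "partially_expanded_dual": "A|A_display",
    "fully_expanded_dual":     "A|B_display",  # "large_A_display" also allowed
}

def apply_operation_mode(state, display_controller):
    # Look up the operation mode preset for the recognized state and apply it;
    # display_controller is an assumed object exposing set_display_mode().
    mode = STATE_TO_OPERATION_MODE.get(state)
    if mode is not None:
        display_controller.set_display_mode(mode)
    return mode
```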
  • It can be understood that, as the connecting component of the electronic device is used over time, the sound it emits gradually changes, and so does the voiceprint feature of that sound; therefore, after the current presentation state of the electronic device is determined, the voiceprint feature of the current sound signal may be used to update the voiceprint feature in the voiceprint feature set corresponding to the current presentation state of the electronic device.
  • In this way, real-time updating of the voiceprint features in the voiceprint feature set can be realized, thereby improving the accuracy of the matching judgment.
  • Here, real-time updating of the voiceprint features in the voiceprint feature set is achieved by adding a self-learning process for the sample sound signals.
  • In one example, when the voiceprint feature of the current sound signal matches the i-th voiceprint feature in the preset voiceprint feature set, it may be determined whether the similarity between the voiceprint feature of the current sound signal and the i-th voiceprint feature is greater than a set value; if so, the voiceprint feature corresponding to the current presentation state is updated.
  • the set value may be set according to actual application requirements.
  • the set value may be 80%, 85%, 90%, or the like.
  • the step of updating the voiceprint feature can be performed by a processor in the electronic device.
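A self-learning update of the matched template could look like the sketch below: when the similarity exceeds the set value, the current feature is blended into the stored template so the feature set tracks the slowly changing hinge sound. The exponential-moving-average rule and the 80% set value are assumed details, not mandated by the disclosure.

```python
UPDATE_SET_VALUE = 0.80  # the "set value"; 80%, 85%, 90%, etc. are possible

def maybe_update_template(feature_set, state, current_feature,
                          learning_rate=0.1, set_value=UPDATE_SET_VALUE):
    # Update the template of the matched presentation state only when the
    # current voiceprint feature is similar enough to it.
    template = feature_set[state]
    if voiceprint_similarity(current_feature, template) > set_value:
        feature_set[state] = ((1.0 - learning_rate) * template
                              + learning_rate * current_feature)
```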
  • FIG. 7 is another flowchart of a state recognition method of an electronic device according to an embodiment of the present disclosure. As shown in FIG. 7, the flow may include steps 701 to 706.
  • In step 701, the voiceprint signal is entered. That is, the sample sound signals collected by the microphone when the electronic device is in a plurality of different presentation states are acquired, the voiceprint feature of each sample sound signal is acquired, and the voiceprint feature set is constructed using the acquired voiceprint features of the various sample sound signals.
  • In step 702, sound signal acquisition is performed using the microphone; that is, the current sound signal collected by the microphone is acquired.
  • In step 703, the feature value vector is extracted; that is, feature value vector extraction may be performed on the current sound signal collected by the microphone to obtain the voiceprint feature of the current sound signal.
  • For the implementation of step 702 and step 703, reference may be made to the implementation of step 401 in FIG. 4, and details are not repeated here.
  • In step 704, it is determined whether the voiceprint feature of the current sound signal matches any voiceprint feature in the preset voiceprint feature set; if so, step 705 is performed; otherwise, step 706 is performed.
  • In step 705, the current display mode is switched to the display mode corresponding to the matched voiceprint feature.
  • In step 706, the current display mode is maintained.
  • steps 701 to 706 may be implemented by a processor in the electronic device in combination with a device such as a microphone or a display screen.
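Steps 701 to 706 can be read as a monitoring loop, sketched below with the helpers introduced earlier; capture_hinge_sound is an assumed callable wrapping the microphone, and the else branch is step 706 (keep the current display mode).

```python
def voiceprint_display_loop(capture_hinge_sound, display_controller, feature_set):
    while True:
        signal = capture_hinge_sound()                             # step 702
        state = recognize_presentation_state(signal, feature_set)  # steps 703-704
        if state is not None:
            apply_operation_mode(state, display_controller)        # step 705
        # else: step 706 - no match, so the current display mode is kept
```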
  • the state recognition method of the electronic device described herein may be implemented by a voiceprint recognition system, and the voiceprint recognition system may be disposed on the electronic device.
  • FIG. 8 is a schematic structural diagram of a voiceprint recognition system according to an embodiment of the present disclosure.
  • the voiceprint recognition system may include: a voiceprint template entry module 801, a voiceprint signal matching module 802, and a mode state switching module 803.
  • the voiceprint template entry module 801 can be implemented by a device such as a microphone.
  • the voiceprint template entry module 801 is configured to obtain the voiceprint feature set by collecting a sound signal, and to collect the current sound signal.
  • The voiceprint signal matching module 802 is configured to acquire the voiceprint feature of the current sound signal; to match the voiceprint feature of the current sound signal with each voiceprint feature in the preset voiceprint feature set; and to determine, when the voiceprint feature of the current sound signal matches the i-th voiceprint feature in the preset voiceprint feature set, that the current presentation state of the electronic device is the presentation state of the electronic device corresponding to the i-th voiceprint feature.
  • the mode state switching module 803 is configured to control the display screens of the electronic device to display according to the corresponding display mode according to the current presentation state of the electronic device.
  • Illustratively, when the electronic device is a dual-screen mobile phone, the dual-screen mobile phone can be controlled to display according to the display mode corresponding to the folded single-screen state when the processor determines that the voiceprint feature of the current sound signal matches the voiceprint feature corresponding to the folded single-screen state; and it can be controlled to display according to the display mode corresponding to the fully expanded dual-screen state when the processor determines that the voiceprint feature of the current sound signal matches the voiceprint feature corresponding to the fully expanded dual-screen state.
  • the voiceprint template entry module 801, the voiceprint signal matching module 802, and the mode state switching module 803 have been described in the embodiment of the state recognition method of the above electronic device, and details are not described herein again.
  • both the voiceprint signal matching module 802 and the mode state switching module 803 can be implemented by a processor or the like in the electronic device.
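The module split of FIG. 8 can be mirrored by three small classes, shown here only as an illustrative decomposition that reuses the sketched helpers; the class and method names are assumptions, not the patent's implementation.

```python
class VoiceprintTemplateEntryModule:
    # Module 801: collects sample sounds to build the feature set and
    # captures the current sound signal (microphone-backed on a real device).
    def __init__(self, capture):
        self.capture = capture

    def record_current_signal(self):
        return self.capture()

class VoiceprintSignalMatchingModule:
    # Module 802: extracts the voiceprint feature of the current signal and
    # matches it against the preset feature set to decide the presentation state.
    def __init__(self, feature_set):
        self.feature_set = feature_set

    def match(self, signal):
        return recognize_presentation_state(signal, self.feature_set)

class ModeStateSwitchingModule:
    # Module 803: drives the display screens according to the display mode
    # corresponding to the recognized presentation state.
    def __init__(self, display_controller):
        self.display_controller = display_controller

    def switch(self, state):
        return apply_operation_mode(state, self.display_controller)
```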
  • FIG. 9 is a schematic structural diagram of another voiceprint recognition system according to an embodiment of the present disclosure.
  • As shown in FIG. 9, the voiceprint recognition system may include a voiceprint sounder unit 901, a voiceprint acquisition unit 902, a feature value extraction unit 903, a data storage unit 904, a voiceprint matching unit 905, a processor unit 906, a template update unit 907, and a mode state switching unit 908.
  • the voiceprint sounder unit 901 can be implemented using a connection component of the electronic device, and can emit an acoustic signal during use of the electronic device.
  • the voiceprint acquisition unit 902 can be implemented using a microphone of an electronic device configured to acquire a sample sound signal and a current sound signal.
  • the feature value extracting unit 903 is configured to obtain a voiceprint feature corresponding to the sample sound signal and a voiceprint feature corresponding to the current sound signal by performing feature value extraction on the collected sample sound signal and the current sound signal.
  • the data storage unit 904 can be implemented by using a memory of the electronic device and configured to store the voiceprint feature obtained by the feature value extraction unit.
  • The voiceprint matching unit 905 is configured to match the voiceprint feature of the current sound signal against the voiceprint feature of each sample sound signal and to send the matching result to the processor unit; here, the matching result may be a match success or a match failure.
  • the processor unit 906 is configured to trigger the template update unit 907 and the mode state switching unit 908 when the matching result is a successful match.
  • When the template update unit 907 is triggered, it updates the voiceprint feature in the voiceprint feature set corresponding to the current presentation state of the electronic device. When the mode state switching unit 908 is triggered, it controls the display screens of the electronic device to display according to the corresponding display mode based on the current presentation state of the electronic device.
  • In practice, the feature value extraction unit 903, the voiceprint matching unit 905, the processor unit 906, and the template update unit 907 may be implemented by the processor in the electronic device, and the mode state switching unit 908 may be implemented by the processor in the electronic device in combination with the display screens.
  • The implementations of the voiceprint sounder unit 901, the voiceprint acquisition unit 902, the feature value extraction unit 903, the data storage unit 904, the voiceprint matching unit 905, the processor unit 906, the template update unit 907, and the mode state switching unit 908 have all been described in the embodiments of the state recognition method of the electronic device above, and details are not repeated here.
  • FIG. 10 is a schematic structural diagram of hardware of an electronic device according to an embodiment of the present disclosure.
  • As shown in FIG. 10, the electronic device may include a memory 1001, a processor 1002, N display screens 1003 connected to each other through a connecting component, and a microphone 1004 configured to collect a sound signal emitted by the connecting component, where N is a natural number greater than 1.
  • the memory 1001 is configured to store a computer program.
  • The processor 1002 is configured to execute the computer program stored in the memory 1001 to implement the following steps: acquiring the current sound signal collected by the microphone and acquiring the voiceprint feature of the current sound signal; matching the voiceprint feature of the current sound signal with each voiceprint feature in the preset voiceprint feature set, where each voiceprint feature in the set corresponds to one presentation state of the electronic device and the presentation state indicates the relative position between the N display screens; and, when the voiceprint feature of the current sound signal matches the i-th voiceprint feature in the preset voiceprint feature set, determining that the current presentation state of the electronic device is the presentation state of the electronic device corresponding to the i-th voiceprint feature, where i is an integer greater than or equal to 1.
  • In one embodiment, the electronic device may further include a sounding device 1005 and other peripherals 1006; here, the sounding device 1005 may be the connecting component described herein, and the other peripherals 1006 may be any peripheral connected to the electronic device.
  • other peripherals may be a mouse, a keyboard, a USB flash drive, or the like.
  • In practice, the memory 1001 may be a volatile memory, such as a random-access memory (RAM), or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD), or a combination of the above types of memory, and it provides instructions and data to the processor 1002.
  • The processor 1002 may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It is to be understood that, for different devices, the electronic component used to implement the above processor functions may be something else, which is not specifically limited in the embodiments of the present disclosure.
  • Illustratively, the processor 1002 is further configured to execute the computer program stored in the memory to implement the following steps: before the sound signal collected by the microphone is acquired, acquiring the sample sound signals collected by the microphone when the electronic device is in a plurality of different presentation states and acquiring the voiceprint feature of each sample sound signal; and constructing the voiceprint feature set using the acquired voiceprint features of the various sample sound signals.
  • Illustratively, the processor 1002 is further configured to execute the computer program stored in the memory to implement the following step: after the current presentation state of the electronic device is determined, updating, using the voiceprint feature of the current sound signal, the voiceprint feature in the voiceprint feature set corresponding to the current presentation state of the electronic device.
  • Illustratively, the processor 1002 is further configured to execute the computer program stored in the memory to implement the following step: when the similarity between the voiceprint feature of the current sound signal and the i-th voiceprint feature in the preset voiceprint feature set is greater than the set value, updating, using the voiceprint feature of the current sound signal, the voiceprint feature in the voiceprint feature set corresponding to the current presentation state of the electronic device.
  • Illustratively, the presentation states of the electronic device corresponding to the acquired sample sound signals include: the angle between any two of the N display screens is P degrees, and the angle between any two of the N display screens is Q degrees, where 0 ≤ P ≤ 180, 0 ≤ Q ≤ 180, and P and Q are two different values.
  • the sound collecting end of the microphone faces the connecting member.
  • the relative position between the N display screens includes a relative position exhibited when any two of the N display screens are folded or unfolded.
  • Illustratively, the processor 1002 is further configured to execute the computer program stored in the memory to implement the following steps: after the current presentation state of the electronic device is determined, determining, according to the preset correspondence between presentation states of the electronic device and operation modes of the electronic device, the operation mode of the electronic device corresponding to the current presentation state; and controlling the electronic device to operate according to the determined operation mode.
  • the present disclosure also provides a computer readable storage medium.
  • In one embodiment, the computer program instructions corresponding to the state recognition method of an electronic device may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash drive; when the computer program instructions corresponding to the state recognition method are read or executed by an electronic device, the steps of any state recognition method of an electronic device described herein are implemented.
  • embodiments of the present disclosure can be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or a combination of software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Environmental & Geological Engineering (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

The present application provides a state recognition method for an electronic device. The method includes: acquiring a current sound signal collected by a microphone, and acquiring a voiceprint feature of the current sound signal; matching the voiceprint feature of the current sound signal with each voiceprint feature in a preset voiceprint feature set, where each voiceprint feature in the set corresponds to one presentation state of the electronic device and the presentation state of the electronic device indicates the relative position between the N display screens; and, when the voiceprint feature of the current sound signal matches the i-th voiceprint feature in the preset voiceprint feature set, determining that the current presentation state of the electronic device is the presentation state of the electronic device corresponding to the i-th voiceprint feature, where i is an integer greater than or equal to 1. The present application also provides an electronic device and a computer-readable storage medium.

Description

电子设备的状态识别方法和电子设备 技术领域
本公开涉及但不限于声纹识别领域。
背景技术
现有的电子设备(例如,手机)可以具有两个或更多的显示屏,在电子设备上的任意两个显示屏可以通过转轴等连接部件进行连接;这里,在电子设备的任意两个显示屏之间的夹角不同时,可以认为电子设备处于不同的呈现状态;而对于电子设备上不同的呈现状态的识别,可以在上述连接部件如转轴上安装数字霍尔传感器,利用数字霍尔传感器识别显示屏之间的角度,从而确定电子设备的呈现状态,进一步地,还可以根据电子设备的呈现状态,进行电子设备的显示屏的显示模式的切换。
发明内容
一方面,本公开提供了一种电子设备的状态识别方法,所述电子设备包括通过连接部件相互连接的N个显示屏、以及配置为采集连接部件发出的声音信号的麦克风,N为大于1的自然数;所述方法包括:获取麦克风采集到的当前声音信号,获取所述当前声音信号的声纹特征;将所述当前声音信号的声纹特征与预先设置的声纹特征集合中的每种声纹特征进行匹配,所述声纹特征集合中的每种声纹特征与电子设备的一种呈现状态对应,所述电子设备的呈现状态用于表示所述N个显示屏之间的相对位置;在所述当前声音信号的声纹特征与预先设置的声纹特征集合中的第i种声纹特征匹配时,确定所述电子设备的当前呈现状态为:与所述第i种声纹特征对应的电子设备的一种呈现状态;i为大于或等于1的整数。
另一方面,本公开还提供了一种电子设备,所述电子设备包括存储器、处理器、通过连接部件相互连接的N个显示屏、以及配置为采集连接部件发出的声音信号的麦克风,N为大于1的自然数;其中, 所述存储器用于存储计算机程序;所述处理器用于执行所述存储器中存储的计算机程序,以实现本文所述的方法的步骤。
另一方面,本公开还提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现本文所述的方法的步骤。
附图说明
图1为本公开实施例的双屏手机的三种呈现状态的示意图;
图2为本公开实施例的数字霍尔传感器在高温消磁前后的角度与磁通量的关系示意图;
图3为本公开实施例中电子设备的转轴的一个结构示意图;
图4为本公开实施例的电子设备的状态识别方法的流程图;
图5为本公开实施例的预先采集的三种样本声音信号对应的电子设备的呈现状态的示意图;
图6为本公开实施例的一个示例性的构建声纹特征集合的流程图;
图7为本公开实施例的电子设备的状态识别方法的另一流程图;
图8为本公开实施例的一个声纹识别系统的结构示意图;
图9为本公开实施例的另一个声纹识别系统的结构示意图;
图10为本公开实施例的电子设备的硬件结构示意图。
具体实施方式
以下结合附图及实施例,对本公开进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本公开,并不用于限定本公开。
现有的电子设备(例如,手机)可以具有两个或更多的显示屏,在电子设备上的任意两个显示屏可以通过转轴等连接部件进行连接;这里,在电子设备的任意两个显示屏之间的夹角不同时,可以认为电子设备处于不同的呈现状态;而对于电子设备上不同的呈现状态的识别,可以在上述连接部件如转轴上安装数字霍尔传感器,利用数字霍 尔传感器识别显示屏之间的角度,从而确定电子设备的呈现状态,进一步地,还可以根据电子设备的呈现状态,进行电子设备的显示屏的显示模式的切换。
然而,数字霍尔传感器在使用过程中容易出现一些失效问题,例如,高温消磁、转轴松动、轻摔导致的磁铁和数字霍尔传感器距离变化等,这些失效问题可能导致数字霍尔传感器检测的磁通量发生变化;如此,可能导致不能准确识别设置有多屏的电子设备的呈现状态。
如上所述,对于具有多屏的电子设备的呈现状态的识别方式,一种实现方式为通过数字霍尔传感器识别显示屏之间的角度,从而确定电子设备的呈现状态;下面以双屏手机为例进行说明。
双屏手机的两个显示屏可以同转轴进行连接,可以理解的是,双屏手机可以呈现折叠单屏状态、完全展开双屏状态等;在实际应用时,可以根据双屏手机的不同呈现状态,控制双屏手机进入对应的显示模式。
图1为本公开实施例的双屏手机的三种呈现状态的示意图。如图1所示,“夹角=0°”表示数字霍尔传感器检测到两个显示屏之间的角度为0°,双屏手机处于折叠单屏状态;“0°<夹角<180°”表示数字霍尔传感器检测到两个显示屏之间的角度大于0°且小于180°,双屏手机处于非完全展开双屏状态;“夹角=180°”表示数字霍尔传感器检测到两个显示屏之间的角度等于180°,双屏手机处于完全展开双屏状态。
这里,在进行双屏手机的显示模式的切换时,可以利用数字霍尔传感器进行判断;在双屏手机出厂时,可以针对数字霍尔传感器,在30°和150°两个角度进行校准,分别作为0°和180°的触发阈值;在图1所示的三种显示屏的典型角度的基础上,在一个示例中,双屏手机可以设置如下四种显示模式:单A显示模式、大A显示模式、A|B显示模式、A|A显示模式。
这里,单A显示模式表示只在一个显示屏显示内容,另一个显示屏不工作;大A显示模式表示将两个显示屏作为一个大的显示屏显示内容;A|B显示模式表示两个显示屏显示不同的内容,例如,一个 显示屏显示应用A的界面,另一个显示屏显示应用B的界面;A|A显示模式表示两个显示屏显示相同的内容,例如,两个显示屏均显示应用A的界面。
在一个示例中,可以根据数字霍尔传感检测到的双屏之间的角度,控制进行双屏手机进入相应的显示模式,例如,当双屏之间的夹角为0°时,仅允许双屏手机工作在单A显示模式;当双屏之间的夹角处于(30°,180°]区间时,允许双屏手机工作在A|A显示模式;当双屏之间的夹角处于(150°,180°]区间时,允许双屏手机工作在A|B和大A两种显示模式。
数字霍尔传感器在使用过程中可能出现各种失效问题,进而导致相应角度的磁通量发生异常变化;例如,数字霍尔传感器在高温消磁后可能导致磁通量发生异常变化。
图2为本公开实施例的数字霍尔传感器在高温消磁前后的角度与磁通量的关系示意图,如图2所示,横轴表示两个显示屏的夹角,纵轴表示磁通量,可以看出,在高温消磁前后,相同角度对应的磁通量发生了变化。
数字霍尔传感器检测的磁通量的异常变化,可能导致双屏手机不能准确记性显示模式的切换。例如,当数字霍尔传感器检测的0°磁通量小于30°标准的阈值时,会导致数字霍尔传感器“夹角=0°”状态无法触发,双屏手机折叠后无法自动切换到单A显示模式,导致功能失效;当数字霍尔传感器检测的180°磁通量小于150°标准的阈值时,会导致数字霍尔传感器“夹角=180°”状态无法触发,双屏手机展开后无法自动切换到A|B或大A的显示模式,导致功能失效。
可以看出,一旦数字霍尔传感器失效后,相应的显示功能可能异常,此时必须将设置由数字霍尔传感器的电子设备进行维修,例如,可以通过带有标定角度(30°-150°)的角度进行重新校准;如此,增加了用户的使用成本,并较低了用户体验。
因此,本公开特别提供了电子设备的状态识别方法、电子设备和计算机可读存储介质,其实质上避免了由于相关技术的局限和缺点所导致的问题中的一个或多个。
一方面,本公开提供了一种电子设备的状态识别方法。该电子设备可以包括通过连接部件相互连接的N个显示屏、以及配置为采集连接部件发出的声音信号的麦克风,N为大于1的自然数。
这里,当N等于2时,电子设备可以是双屏电子设备,当N大于2时,电子设备为具有两个显示屏以上的电子设备;也就是说,本公开实施例不仅适合双屏电子设备,而且适合两个显示屏以上的电子设备。
这里,连接部件可以是转轴等部件,也就是说,在电子设备上,显示屏可以通过转轴铰接而成,本公开实施例中并不对转轴的具体结构进行限定。
图3为本公开实施例中电子设备的转轴的一个结构示意图。如图3所示,该转轴可以包括相互配合的上凸轮301和下凸轮302,还可以包括弹簧303等器件。可以理解的是,转轴的上凸轮的凸出部与下凸轮的凹槽相互配合,在转轴转动时,发出的声音如上下凸轮的碰撞声可以被麦克风采集;并且,转轴在双屏之间的夹角不同时可以发出不同的声音;如此,可以通过检测转轴发出的声音,来检测电子设备的显示屏之间的夹角,从而确定电子屏的呈现状态。例如,当电子设备为双屏手机时,可以通过检测转轴发出的声音(例如转轴回弹时的声音),来检测双屏手机处于折叠状态还是展开状态,进而触发显示模式的切换。
图4为本公开实施例的电子设备的状态识别方法的流程图。如图4所示,在一些实施例中,该流程可以包括步骤401至403。
在步骤401处,获取麦克风采集到的当前声音信号,获取所述当前声音信号的声纹特征。
在实际实施时,为了提高采集到的声音信号的质量,以及提升匹配识别率,可以将麦克风设置在连接部件的附近,在一个实施例中,将麦克风的声音收集端朝向所述连接部件,如此,可以最大程度地避免环境噪声造成的干扰;在一个示例中,连接部件为转轴,利用带有角度的麦克风朝向转轴,使麦克风仅接收到转轴的声音。
示例性地,麦克风采集到的声音可以是所述上凸轮与所述下凸 轮的碰撞声,也可以是齿轮等器件可以发出的特定声音。
在步骤402处,将所述当前声音信号的声纹特征与预先设置的声纹特征集合中的每种声纹特征进行匹配,所述声纹特征集合中的每种声纹特征与电子设备的一种呈现状态对应,所述电子设备的呈现状态用于表示所述N个显示屏之间的相对位置。
这里,所述N个显示屏之间的相对位置可以包括:所述N个显示屏中任意两个显示屏折叠或展开时呈现的相对位置;示例性地,当N等于2时,所述N个显示屏之间的相对位置可以是:两个显示屏叠或展开时呈现的相对位置。
在实际实施,在获取麦克风采集到的声音信号前,还可以获取所述麦克风在所述电子设备处于多种不同的呈现状态时采集到的样本声音信号,获取每种样本声音信号的声纹特征;之后,利用所获取的各种样本声音信号的声纹特征(也就是说,提取样本声音信号中的声纹特征参数),构建声纹特征集合。在一个示例中,所获取的各种样本声音信号的声纹特征可以呈现为声纹模拟信号的形式。
也就是说,可以预先利用麦克风采集电子设备处于多种不同的呈现状态时连接部件发出的声音信号,从而学习到电子设备处于多种不同的呈现状态时连接部件发出的样本声音信号;例如,连接部件为双屏手机的转轴时,可以学习到双屏手机展开和折叠时转轴发出的声音。
在一个实施例中,所获取的各种样本声音信号对应的电子设备的呈现状态包括:所述N个显示屏中的任意两个显示屏的夹角为P度、所述N个显示屏中的任意两个显示屏的夹角为Q度,其中,0≤P≤180,0≤Q≤180,P和Q为两个不同的数值。
在一个示例中,P和Q分别取0和180,也就是说,预先采集两个显示屏的夹角为0度和180度时连接部件发出的声音;显然,当P和Q分别取0和180时,对应的两个显示屏分别处于完全折叠状态(对应图1中“夹角=0°”)以及完全展开状态(对应图1中“夹角=180°”),当两个显示屏的夹角为0度和180度,连接部件可以分别发出特定的声音。
在一个实施例中,在预先采集样本声音信号时,除了采集两个显示屏的夹角为0度和180度时连接部件发出的声音,还可以采集两个显示屏的夹角为其他不同角度(非0度和180度)连接部件发出的声音。
在一个示例中,在电子设备为双屏手机时,预先采集的三种样本声音信号对应的电子设备的呈现状态如图5所示,即,双屏手机的呈现状态可以包括:折叠单屏状态(对应图5中的“夹角=0°状态”)、非完全展开双屏状态(对应图5中的“0°<夹角<180°”)和完全展开双屏状态(对应图5中的“夹角=180°状态”)。
特别地,在所述连接部件为转轴时,对于不同转轴,上下凸轮的设计存在一定的公差,需要在生产线进行校准;也就是说,对于两个显示屏的夹角为0度和180度时转轴的声音信号通过特征值提取编码后,得到不同状态下的声纹模板。
在一个实施例中,在预先构建声纹特征集合后,可以利用电子设备中的存储器存储该声纹特征集合。
图6为本公开实施例的一个示例性的构建声纹特征集合的流程图。如图6所示,该流程可以包括步骤A1和步骤A2。
在步骤A1处,分别采集两个显示屏的夹角为0度和180度时连接部件发出的声音。
在步骤A2处,通过特征值提取的方式得出两个显示屏的夹角为0度和180度时连接部件发出的声音的声纹特征。
在步骤A2中,得出的声纹特征可以是声纹模拟信号。
这里,两个显示屏的夹角为0度时连接部件发出的声音的声纹特征与两个显示屏的完全折叠状态相对应,进一步地,还可以设置两个显示屏的完全折叠状态(可以称为0度折叠状态)对应的显示模式;同理,两个显示屏的夹角为180度时连接部件发出的声音的声纹特征与两个显示屏的完全展开状态相对应,进一步地,还可以设置两个显示屏的完全展开状态(可以称为180度展开状态)对应的显示模式。
在一个实施例中,将两种声纹特征进行匹配可以是:将两种声纹特征进行比较;在一个示例中,可以通过将两种声纹特征进行比较 运算,得出两种声纹特征的相似度,在相似度大于或等于第一相似度阈值时,可以认为这两种声纹特征匹配;否则,认为这两种声纹特征不匹配。在一个示例中,所得出的比较结果可以呈现为数字信号。
需要说明的是,所述第一相似度阈值可以根据实际应用需求设置。
在步骤403处,在所述当前声音信号的声纹特征与预先设置的声纹特征集合中的第i种声纹特征匹配时,确定所述电子设备的当前呈现状态为:与所述第i种声纹特征对应的电子设备的一种呈现状态;i为大于或等于1的整数。
示例性地,当电子设备为双屏手机时,预先设置的声纹特征集合可以包括3种声纹特征,这3种声纹特征分别对应图5所示的电子设备的三种呈现状态;如此,在确定当前声音信号的声纹特征与预先设置的声纹特征集合中的第1种、第2种或第3种声纹特征匹配时,可以确定双屏手机的当前呈现状态为:与第1种、第2种或第3种声纹特征对应的双屏手机的呈现状态(即折叠单屏状态、非完全展开双屏状态或完全展开双屏状态)。
需要说明的是,如果当前声音信号的声纹特征不与预先设置的声纹特征集合中的任意一种声纹特征不匹配,则可以返回至步骤401。
在实际应用中,步骤401至步骤403均可以由电子设备中的处理器实施。
在一个实施例中,在确定出当前电子设备的呈现状态后,还可以根据预先设置的电子设备的呈现状态与电子设备的操作方式的对应关系,确定出与电子设备的当前呈现状态对应的电子设备的操作方式;控制所述电子设备,按照确定出的电子设备的操作方式进行操作。
也就是说,可以针对电子设备的各种呈现状态,预先分别设置对应的电子设备的操作方式;在实际实施时,还可以利用存储器存储预先设置的电子设备的呈现状态与电子设备的操作方式的对应关系。
这里,电子设备的操作方式包括但不限于按照一个显示模式进行显示、启动应用程序(APP)、启动电子设备的特定功能、退出应用程序、进行解锁等等。这里,并不限制应用程序的种类,例如,应 用程序可以是音乐播放器、视频应用、日程管理软件等等。
在一个示例中,当电子设备为双屏手机时,当电子设备的当前呈现状态为折叠单屏状态时,控制双屏手机工作在单A显示模式;当电子设备的当前程序状态为完全展开双屏状态时,控制双屏手机工作在A|B显示模式或大A显示模式。
在实际应用中,可以由电子设备中的处理器根据预先设置的电子设备的呈现状态与电子设备的操作方式的对应关系,确定出与电子设备的当前呈现状态对应的电子设备的操作方式;控制所述电子设备,按照确定出的电子设备的操作方式进行操作。
可以理解的是,随着电子设备的连接部件的不断使用,连接部件的声音会逐渐发生变化,进而使得连接部件所发出的声音的声纹特征发生变化,因而,需要预先构建的声纹特征集合中,更新对应的声纹特征。
也就是说,在确定出电子设备的当前呈现状态后,可以利用所述当前声音信号的声纹特征,更新所述声纹特征集合中与当前电子设备的呈现状态对应的声纹特征,如此,可以实现声纹特征集合中的声纹特征的实时更新,进而可以提高匹配判断的准确率。
这里,通过增加样本声音信号的自学习过程,实现对声纹特征集合中的声纹特征的实时更新。
在一个示例中,在当前声音信号的声纹特征与预先设置的声纹特征集合中的第i种声纹特征匹配时,可以判断当前声音信号的声纹特征与预先设置的声纹特征集合中的第i种声纹特征是否大于设定值。
需要说明的是,所述设定值可以根据实际应用需求设置,例如,设定值可以是80%、85%、90%等。
在实际应用中,可以由电子设备中的处理器执行更新声纹特征的步骤。
图7为本公开实施例的电子设备的状态识别方法的另一流程图。如图7所示,该流程可以包括步骤701至706。
在步骤701处,声纹信号输入。
也就是说,获取所述麦克风在所述电子设备处于多种不同的呈现状态时采集到的样本声音信号,获取每种样本声音信号的声纹特征;之后,利用所获取的各种样本声音信号的声纹特征(也就是说,提取样本声音信号中的声纹特征参数),构建声纹特征集合。
在步骤702处,利用麦克风进行声音信号采集。
也就是说,获取麦克风采集到的当前声音信号。
在步骤703处,特征值向量提取。
本步骤中,可以对麦克风采集到的当前声音信号进行特征值向量提取,得出当前声音信号的声纹特征。
步骤702和步骤703的实现方式可以参照图4的步骤401的实现方式,这里不再赘述。
在步骤704处,判断当前声音信号的声纹特征是否与预先设置的声纹特征集合中的任意一种声纹特征匹配,如果是,则执行步骤705;否则,执行步骤706。
在步骤705处,控制当前显示模式切换至与匹配的声纹特征对应的显示模式。
在步骤706处,保持当前的显示模式不变。
在实际应用中,步骤701至步骤706可以由电子设备中的处理器结合麦克风、显示屏等器件实现。
在实际应用中,本文所述的电子设备的状态识别方法可以通过声纹识别系统实现,该声纹识别系统可以设置于所述电子设备上。
图8为本公开实施例的一个声纹识别系统的结构示意图。如图8所示,该声纹识别系统可以包括:声纹模板录入模块801、声纹信号匹配模块802和模式状态切换模块803。
声纹模板录入模块801可以通过麦克风等器件实现,声纹模板录入模块801配置为通过采集声音信号得到所述声纹特征集合,并用于采集所述当前声音信号。
声纹信号匹配模块802配置为获取所述当前声音信号的声纹特征;将所述当前声音信号的声纹特征与预先设置的声纹特征集合中的每种声纹特征进行匹配;在所述当前声音信号的声纹特征与预先设置 的声纹特征集合中的第i种声纹特征匹配时,确定所述电子设备的当前呈现状态为:与所述第i种声纹特征对应的电子设备的一种呈现状态。
模式状态切换模块803配置为根据电子设备的当前呈现状态,控制电子设备的各个显示屏按照对应的显示模式进行显示。
示例性地,以电子设备为双屏手机为例,当处理器确定当前声音信号的声纹特征与折叠单屏状态对应的声纹特征匹配时,可以控制双屏手机按照折叠单屏状态对应的显示模式进行显示;当处理器确定当前声音信号的声纹特征与完全展开双屏状态对应的声纹特征匹配时,可以控制双屏手机按照完全展开双屏状态对应的显示模式进行显示。
这里,声纹模板录入模块801、声纹信号匹配模块802和模式状态切换模块803已经在上述电子设备的状态识别方法的实施例中作出说明,这里不再赘述。
在实际应用中,声纹信号匹配模块802和模式状态切换模块803均可以由电子设备中的处理器等器件实现。
图9为本公开实施例的另一个声纹识别系统的结构示意图。如图9所示,该声纹识别系统可以包括:声纹发声器单元901、声纹采集单元902、特征值提取单元903、数据存储单元904、声纹匹配单元905、处理器单元906、模板更新单元907和模式状态切换单元908。
声纹发声器单元901可以利用电子设备的连接部件实现,可以在电子设备的使用过程中发出声音信号。
声纹采集单元902可以利用电子设备的麦克风实现,配置为采集样本声音信号及当前声音信号。
特征值提取单元903配置为通过对采集的样本声音信号和当前声音信号进行特征值提取,得出与样本声音信号对应的声纹特征和与当前声音信号对应的声纹特征。
数据存储单元904,可以利用电子设备的存储器实现,配置为存储特征值提取单元得出的声纹特征。
声纹匹配单元905配置为对当前声音信号的声纹特征与每个样 本声音信号的声纹特征进行匹配处理,并将匹配结果发送至处理器单元;这里,匹配结果可以是匹配成功或匹配失败。
处理器单元906配置为在匹配结果为匹配成功时,触发模板更新单元907和模式状态切换单元908。
模板更新单元907配置为被触发时,更新所述声纹特征集合中与当前电子设备的呈现状态对应的声纹特征。
模式状态切换单元908配置为被触发时,根据电子设备的当前呈现状态,控制电子设备的各个显示屏按照对应的显示模式进行显示。
实际应用中,特征值提取单元903、声纹匹配单元905、处理器单元906和模板更新单元907可以由电子设备中的处理器实现,模式状态切换单元908可以由电子设备中的处理器结合显示屏实现。
声纹发声器单元901、声纹采集单元902、特征值提取单元903、数据存储单元904、声纹匹配单元905、处理器单元906、模板更新单元907和模式状态切换单元908的实现方式均已经在上述电子设备的状态识别方法的实施例中作出说明,这里不再赘述。
另一方面,本公开还提供了一种电子设备。图10为本公开实施例的电子设备的硬件结构示意图。如图10所示,该电子设备可以包括存储器1001、处理器1002、通过连接部件相互连接的N个显示屏1003、以及配置为采集连接部件发出的声音信号的麦克风1004,N为大于1的自然数。
所述存储器1001配置为存储计算机程序。
所述处理器1002配置为执行所述存储器1001中存储的计算机程序,以实现以下步骤:获取麦克风采集到的当前声音信号,获取所述当前声音信号的声纹特征;将所述当前声音信号的声纹特征与预先设置的声纹特征集合中的每种声纹特征进行匹配,所述声纹特征集合中的每种声纹特征与电子设备的一种呈现状态对应,所述电子设备的呈现状态用于表示所述N个显示屏之间的相对位置;在所述当前声音信号的声纹特征与预先设置的声纹特征集合中的第i种声纹特征匹配时,确定所述电子设备的当前呈现状态为:与所述第i种声纹特征 对应的电子设备的一种呈现状态;i为大于或等于1的整数。
在一个实施例中,上述电子设备还可以包括发声装置1005以及其他外设1006,这里,发声装置1005可以是本文所述的连接部件,其他外设1006可以是与电子设备连接的任意一种外设,例如,其他外设可以是鼠标、键盘、U盘等。
在实际应用中,上述存储器1001可以是易失性存储器(volatile memory),例如随机存取存储器(RAM,Random-Access Memory);或者非易失性存储器(non-volatile memory),例如只读存储器(ROM,Read-Only Memory)、快闪存储器(flash memory)、硬盘(HDD,Hard Disk Drive)或固态硬盘(SSD,Solid-State Drive);或者上述种类的存储器的组合,并向处理器1002提供指令和数据。
上述处理器1002可以为特定用途集成电路(ASIC,Application Specific Integrated Circuit)、数字信号处理器(DSP,Digital Signal Processor)、数字信号处理装置(DSPD,Digital Signal Processing Device)、可编程逻辑装置(PLD,Programmable Logic Device)、现场可编程门阵列(FPGA,Field Programmable Gate Array)、中央处理器(CPU,Central Processing Unit)、控制器、微控制器、微处理器中的至少一种。可以理解地,对于不同的设备,用于实现上述处理器功能的电子器件还可以为其它,本公开实施例不作具体限定。
示例性地,所述处理器1002还配置为执行所述存储器中存储的计算机程序,实现以下步骤:在获取麦克风采集到的声音信号前,获取所述麦克风在所述电子设备处于多种不同的呈现状态时采集到的样本声音信号,获取每种样本声音信号的声纹特征;利用所获取的各种样本声音信号的声纹特征,构建声纹特征集合。
示例性地,所述处理器1002还配置为执行所述存储器中存储的计算机程序,实现以下步骤:在确定出电子设备的当前呈现状态后,利用所述当前声音信号的声纹特征,更新所述声纹特征集合中与当前电子设备的呈现状态对应的声纹特征。
示例性地,所述处理器1002还配置为执行所述存储器中存储的计算机程序,实现以下步骤:在所述当前声音信号的声纹特征与预先 设置的声纹特征集合中的第i种声纹特征的相似度大于设定值时,利用所述当前声音信号的声纹特征,更新所述声纹特征集合中与当前电子设备的呈现状态对应的声纹特征。
示例性地,所获取的各种样本声音信号对应的电子设备的呈现状态包括:所述N个显示屏中的任意两个显示屏的夹角为P度、所述N个显示屏中的任意两个显示屏的夹角为Q度,其中,0≤P≤180,0≤Q≤180,P和Q为两个不同的数值。
示例性地,所述麦克风的声音收集端朝向所述连接部件。
示例性地,所述N个显示屏之间的相对位置包括:所述N个显示屏中任意两个显示屏折叠或展开时呈现的相对位置。
示例性地,所述处理器1002还配置为执行所述存储器中存储的计算机程序,实现以下步骤:在确定出当前电子设备的呈现状态后,根据预先设置的电子设备的呈现状态与电子设备的操作方式的对应关系,确定出与电子设备的当前呈现状态对应的电子设备的操作方式;控制所述电子设备,按照确定出的电子设备的操作方式进行操作。
另一方面,本公开还提供了一种计算机可读存储介质。在一个实施例中,一种电子设备的状态识别方法对应的计算机程序指令可以被存储在光盘,硬盘,U盘等存储介质上,当存储介质中的与一种电子设备的状态识别方法对应的计算机程序指令被一电子设备读取或被执行时,实现本文所述的任意一种电子设备的状态识别方法的步骤。
本领域内的技术人员应明白,本公开的实施例可提供为方法、系统、或计算机程序产品。因此,本公开可采用硬件实施例、软件实施例、或结合软件和硬件方面的实施例的形式。而且,本公开可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器和光学存储器等)上实施的计算机程序产品的形式。
本公开是参照根据本公开实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图 和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述,仅为本公开的示例性实施例而已,并非用于限定本公开的保护范围。

Claims (10)

  1. A state recognition method for an electronic device, the electronic device including N display screens connected to each other through a connecting component and a microphone configured to collect a sound signal emitted by the connecting component, N being a natural number greater than 1; the method comprising:
    acquiring a current sound signal collected by the microphone, and acquiring a voiceprint feature of the current sound signal;
    matching the voiceprint feature of the current sound signal with each voiceprint feature in a preset voiceprint feature set, wherein each voiceprint feature in the voiceprint feature set corresponds to one presentation state of the electronic device, and the presentation state of the electronic device indicates the relative position between the N display screens; and
    when the voiceprint feature of the current sound signal matches the i-th voiceprint feature in the preset voiceprint feature set, determining that the current presentation state of the electronic device is the presentation state of the electronic device corresponding to the i-th voiceprint feature, i being an integer greater than or equal to 1.
  2. The method according to claim 1, further comprising, before acquiring the sound signal collected by the microphone:
    acquiring sample sound signals collected by the microphone when the electronic device is in a plurality of different presentation states, and acquiring a voiceprint feature of each sample sound signal; and
    constructing the voiceprint feature set using the acquired voiceprint features of the various sample sound signals.
  3. The method according to claim 1 or 2, further comprising, after the current presentation state of the electronic device is determined:
    updating, using the voiceprint feature of the current sound signal, the voiceprint feature in the voiceprint feature set corresponding to the current presentation state of the electronic device.
  4. The method according to claim 3, wherein updating, using the voiceprint feature of the current sound signal, the voiceprint feature in the voiceprint feature set corresponding to the current presentation state of the electronic device comprises:
    when the similarity between the voiceprint feature of the current sound signal and the i-th voiceprint feature in the preset voiceprint feature set is greater than a set value, updating, using the voiceprint feature of the current sound signal, the voiceprint feature in the voiceprint feature set corresponding to the current presentation state of the electronic device.
  5. The method according to claim 2, wherein the presentation states of the electronic device corresponding to the acquired sample sound signals include: the angle between any two of the N display screens is P degrees, and the angle between any two of the N display screens is Q degrees, where 0 ≤ P ≤ 180, 0 ≤ Q ≤ 180, and P and Q are two different values.
  6. The method according to claim 1, wherein the sound-collecting end of the microphone faces the connecting component.
  7. The method according to claim 1, wherein the relative position between the N display screens includes the relative position presented when any two of the N display screens are folded or unfolded.
  8. The method according to claim 1, further comprising, after the current presentation state of the electronic device is determined:
    determining, according to a preset correspondence between presentation states of the electronic device and operation modes of the electronic device, the operation mode of the electronic device corresponding to the current presentation state of the electronic device; and
    controlling the electronic device to operate according to the determined operation mode of the electronic device.
  9. An electronic device comprising a memory, a processor, N display screens connected to each other through a connecting component, and a microphone configured to collect a sound signal emitted by the connecting component, N being a natural number greater than 1; wherein
    the memory is configured to store a computer program; and
    the processor is configured to execute the computer program stored in the memory to implement the steps of the method according to any one of claims 1 to 8.
  10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
PCT/CN2019/082684 2018-04-28 2019-04-15 电子设备的状态识别方法和电子设备 WO2019205974A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810399030.2 2018-04-28
CN201810399030.2A CN110415711B (zh) 2018-04-28 2018-04-28 一种电子设备的状态识别方法和电子设备

Publications (1)

Publication Number Publication Date
WO2019205974A1 true WO2019205974A1 (zh) 2019-10-31

Family

ID=68293804

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082684 WO2019205974A1 (zh) 2018-04-28 2019-04-15 电子设备的状态识别方法和电子设备

Country Status (2)

Country Link
CN (1) CN110415711B (zh)
WO (1) WO2019205974A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101063928A (zh) * 2006-04-28 2007-10-31 三星电子株式会社 控制便携设备的用户界面屏幕方向的方法和装置
CN103116455A (zh) * 2013-02-22 2013-05-22 珠海全志科技股份有限公司 电子阅读设备及其显示方法
CN103259908A (zh) * 2012-02-15 2013-08-21 联想(北京)有限公司 一种移动终端及其智能控制方法
CN106101309A (zh) * 2016-06-29 2016-11-09 努比亚技术有限公司 一种双面屏幕切换装置及方法、移动终端
CN107077314A (zh) * 2016-09-12 2017-08-18 深圳前海达闼云端智能科技有限公司 一种电子设备

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10613687B2 (en) * 2014-01-13 2020-04-07 Beijing Lenovo Software Ltd. Information processing method and electronic device
CN106131785A (zh) * 2016-06-30 2016-11-16 中兴通讯股份有限公司 一种实现定位的方法、装置及位置服务系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101063928A (zh) * 2006-04-28 2007-10-31 三星电子株式会社 控制便携设备的用户界面屏幕方向的方法和装置
CN103259908A (zh) * 2012-02-15 2013-08-21 联想(北京)有限公司 一种移动终端及其智能控制方法
CN103116455A (zh) * 2013-02-22 2013-05-22 珠海全志科技股份有限公司 电子阅读设备及其显示方法
CN106101309A (zh) * 2016-06-29 2016-11-09 努比亚技术有限公司 一种双面屏幕切换装置及方法、移动终端
CN107077314A (zh) * 2016-09-12 2017-08-18 深圳前海达闼云端智能科技有限公司 一种电子设备
WO2018045598A1 (zh) * 2016-09-12 2018-03-15 深圳前海达闼云端智能科技有限公司 一种电子设备

Also Published As

Publication number Publication date
CN110415711A (zh) 2019-11-05
CN110415711B (zh) 2023-05-26

Similar Documents

Publication Publication Date Title
KR102623272B1 (ko) 전자 장치 및 이의 제어 방법
US20210210071A1 (en) Methods and devices for selectively ignoring captured audio data
US10275022B2 (en) Audio-visual interaction with user devices
US10298412B2 (en) User-configurable interactive region monitoring
US20110248967A1 (en) Electronic reader with two displays and method of turning pages therefof
US20180088902A1 (en) Coordinating input on multiple local devices
WO2016078405A1 (zh) 调整对象属性信息的方法及装置
US20130154947A1 (en) Determining a preferred screen orientation based on known hand positions
WO2017005085A1 (zh) 一种数据压缩方法、装置及终端
CN107643922A (zh) 用于语音辅助的设备、方法及计算机可读存储介质
CN110427849B (zh) 人脸姿态确定方法、装置、存储介质和电子设备
US20160216944A1 (en) Interactive display system and method
DE102014117343B4 (de) Erfassen einer Pause in einer akustischen Eingabe in ein Gerät
US20130169688A1 (en) System for enlarging buttons on the touch screen
US10372576B2 (en) Simulation reproduction apparatus, simulation reproduction method, and computer readable medium
WO2019205974A1 (zh) 电子设备的状态识别方法和电子设备
CN111338563B (zh) 存储器的隐藏分区处理方法和装置
US10818298B2 (en) Audio processing
KR102537781B1 (ko) 전자 장치 및 이의 제어 방법
US11817097B2 (en) Electronic apparatus and assistant service providing method thereof
JP6886663B2 (ja) 動作指示生成システム、方法およびプログラム
US10861464B2 (en) Electronic apparatus having incremental enrollment unit and method thereof
TWI778428B (zh) 一種檢測記憶體安裝狀態的方法、裝置及系統
JP6786136B1 (ja) 情報処理方法、情報処理システム、プログラム
TWI741122B (zh) 電子裝置、拆機監測裝置及方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19791591

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.03.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19791591

Country of ref document: EP

Kind code of ref document: A1