US20190302223A1 - User identification method and user identification apparatus

User identification method and user identification apparatus

Info

Publication number
US20190302223A1
Authority
US
United States
Prior art keywords
type, angle, user, vehicle, microphone arrays
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/172,102
Other versions
US10451710B1 (en)
Inventor
Hongyang Li
Xin Li
Xiangdong Yang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Assigned to BOE TECHNOLOGY GROUP CO., LTD. reassignment BOE TECHNOLOGY GROUP CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, HONGYANG, LI, XIN, YANG, XIANGDONG
Publication of US20190302223A1 publication Critical patent/US20190302223A1/en
Application granted granted Critical
Publication of US10451710B1 publication Critical patent/US10451710B1/en
Legal status: Active


Classifications

    • G10L 15/22: Speech recognition; procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command
    • G10L 2015/226, G10L 2015/227: Speech recognition using non-speech characteristics of the speaker; human-factor methodology
    • G10L 17/005, G10L 17/22: Speaker identification or verification; interactive procedures, man-machine interfaces
    • G10L 25/51: Speech or voice analysis specially adapted for comparison or discrimination
    • G01S 3/802: Acoustic direction-finders; systems for determining direction or deviation from a predetermined direction
    • G01S 5/20: Position of a source determined by a plurality of spaced direction-finders
    • G01S 2205/01: Position-fixing specially adapted for specific applications
    • H04R 1/406: Obtaining a desired directional characteristic by combining a number of identical microphones (microphone arrays)
    • H04R 3/005: Circuits for combining the signals of two or more microphones
    • H04R 2430/20: Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic

Definitions

  • Where an intensity of the voice is determined in addition to the angle, the identifying of the type of the user based at least on the angle includes a step S401 of identifying the type of the user according to the angle and the intensity.
  • FIG. 5 is a schematic diagram of an application scenario of another user identification method according to an embodiment of the present disclosure.
  • the first microphone array M1 is disposed on the left side of the driver position, and the second microphone array M2 is disposed on the right side of the copilot position, whereby the angle between the directions from the driver at the driver position to the respective two microphone arrays is a flat angle, i.e., 180°; however, the angle between the directions from the passenger at the copilot position to the respective two microphone arrays is also a flat angle.
  • Hence, the driver at the driver position and the passenger at the copilot position cannot be distinguished based on the angles alone.
  • However, the driver at the driver position is closer to the first microphone array and farther from the second microphone array, while the passenger at the copilot position is closer to the second microphone array and farther from the first microphone array.
  • The intensity of the voice received by a microphone array decreases as the distance from the voice source to that microphone array increases.
  • Therefore, when the angle is determined to be 180°, it may be further determined which microphone array receives the voice with the larger intensity: if the intensity of the voice received by the first microphone array is larger, the type of the user may be determined to be a driver; and if the intensity of the voice received by the second microphone array is larger, the type of the user may be determined to be a passenger, thereby ensuring that the type of the user is determined accurately.
  • The method of FIG. 4 is not limited to the scenario shown in FIG. 5; it is also applicable to other manners of disposing the microphone arrays.
  • FIG. 6 is a schematic block diagram of a user identification apparatus according to an embodiment of the present disclosure.
  • the user identification apparatus is applicable to a vehicle, wherein the vehicle includes at least two microphone arrays, the respective microphone arrays are disposed at different positions of the vehicle, and the at least two microphone arrays are used for receiving the voice of the user within the vehicle.
  • the user identification apparatus includes:
  • a direction determining module 1 configured to determine directions from the user to the microphone arrays, respectively, according to the voice
  • an angle calculation module 2 configured to calculate an angle between any two of the directions
  • a user identification module 3 configured to identify the type of the user based at least on the angle.
  • the at least two microphone arrays include two microphone arrays.
  • a first one of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and is in line with the driver seat and the copilot seat; and a second one of the two microphone arrays is disposed straight ahead of the copilot seat of the vehicle.
  • the type includes “driver” and “passenger”, wherein the user identification module determines that the type is “driver” if the angle is greater than 90 degrees; and determines that the type is “passenger” if the angle is less than or equal to 90 degrees.
  • a first one of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and a second one of the two microphone arrays is disposed ahead of the copilot seat of the vehicle.
  • the type includes “driver” and “passenger”, wherein the user identification module determines that the type is “driver” if the angle is greater than a threshold angle; and determines that the type is “passenger” if the angle is less than the threshold angle.
  • FIG. 7 is a schematic block diagram of another user identification apparatus according to an embodiment of the present disclosure. As shown in FIG. 7 , on a basis of the embodiment as shown in FIG. 6 , the user identification apparatus further includes:
  • an instruction generation module 4 configured to generate a control instruction according to the voice
  • a privilege determination module 5 configured to determine whether the user of the type has a privilege to execute the control instruction according to a correspondence between types and control instructions stored in advance;
  • an instruction execution module 6 configured to execute the control instruction if the user of the type has the privilege.
  • FIG. 8 is a schematic block diagram of another user identification apparatus according to an embodiment of the present disclosure. As shown in FIG. 8, on a basis of the embodiment as shown in FIG. 6, the user identification apparatus further includes:
  • an intensity determination module 7 configured to determine an intensity of the voice
  • the user identification module 3 is configured to identify the type of the user according to the angle and the intensity.
  • An embodiment of the present disclosure further provides an electronic device, the electronic device being disposed on a vehicle and including:
  • at least two microphone arrays disposed at different positions of the vehicle, respectively; and
  • a processor configured to perform the steps of the user identification method as described in any of the above embodiments.
  • FIG. 9 is an exemplary hardware arrangement diagram of a user identification apparatus according to an embodiment of the present disclosure.
  • the electronic device 900 may include a processor 910, a memory 920, an input device 930/output device 940, and the like. It should be noted that the embodiment as shown in FIG. 9 is for illustrative purposes only, and thus does not limit the present disclosure in any way. In fact, the electronic device 900 may include more, fewer, or different modules, and may be separate devices or distributed devices distributed at multiple locations.
  • the electronic device 900 may include, but is not limited to, a satellite controller, an onboard computer, a personal computer (PC), a server, a server cluster, a computing cloud, a workstation, a terminal, a tablet, a laptop, a smart phone, a media player, a wearable device, and/or a home appliance (e.g., a television, a set top box, a DVD player), and the like.
  • the processor 910 may be a component responsible for the overall operation of the electronic device 900 , which may be communicatively connected to other various modules/components to receive data and/or instructions to be processed from other modules/components, and send processed data and/or instructions to other modules/components.
  • the processor 910 may be, for example, a general purpose processor, such as a central processing unit (CPU), a digital signal processor (DSP), an application processor (AP), and the like. In this case, it may perform one or more of the various steps of the user identification method according to the embodiments of the present disclosure as previously described, under the direction of the instructions/programs/code stored in the memory 920.
  • the processor 910 may also be, for example, a special purpose processor such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or the like. In this case, it may specifically perform one or more of the steps of the user identification method according to the embodiment of the present disclosure as previously described in accordance with its circuit design. Moreover, the processor 910 may also be any combination of hardware, software and/or firmware. Moreover, although only one processor 910 is shown in FIG. 9 , the processor 910 may also include one or more processing units distributed at one or more locations in practice.
  • the memory 920 may be configured to temporarily or persistently store computer executable instructions that, when executed by the processor 910 , may cause the processor 910 to perform one or more of the various steps of the various methods described in the present disclosure.
  • the memory 920 may also be configured to temporarily or persistently store data related to these steps, such as voice data, threshold data, intensity data, and the like.
  • the memory 920 may include a volatile memory and/or a non-volatile memory.
  • the volatile memory may include, for example, but not limited to, dynamic random access memory (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), cache, and the like.
  • the non-volatile memory may include, for example, but not limited to, one-time programmable read only memory (OTPROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), mask ROM, flash ROM, flash memory (e.g., NAND flash, NOR flash, etc.), hard drive or solid state drive (SSD), CompactFlash (CF), secure digital (SD), micro SD, mini SD, extreme digital (xD), multimedia card (MMC), memory stick, etc.
  • the memory 920 may also be a remote storage device, such as network attached storage (NAS) or the like.
  • the memory 920 may also include distributed storage devices distributed at multiple locations, such as cloud storage.
  • the input device 930 /output device 940 may be configured to receive input from outside and/or provide output to the outside. Although the input device 930 /output device 940 is shown as devices separate from each other in the embodiment as shown in FIG. 9 , it may actually be an input and output device in an integral form.
  • the input device 930 /output device 940 may include, but is not limited to, a keyboard, mouse, microphone, camera, display, touch screen display, printer, speaker, earphone, or any other device that may be used for input/output, and the like.
  • the input device 930 /output device 940 may also be an interface, such as a headphone interface, a microphone interface, a keyboard interface, a mouse interface, and the like, configured to be connected to the device as described above.
  • the electronic device 900 may be connected to an external input/output device through the interface and implement an input/output function.
  • the electronic device 900 may or may not include one or more of the microphone arrays as mentioned previously.
  • the electronic device 900 may also include other modules not shown in FIG. 9 , such as a communication module.
  • the communication module may be configured to enable the electronic device 900 to communicate with other electronic devices and exchange various data.
  • the communication module may be, for example, an Ethernet interface card, a USB module, a serial line interface card, a fiber interface card, a telephone line modem, an xDSL modem, a Wi-Fi module, a Bluetooth module, a 2G/3G/4G/5G communication module, and the like.
  • the communication module may also be regarded as a part of the input device 930 /output device 940 in the sense of data input/output.
  • the electronic device 900 may also include other modules including, for example, but not limited to: a power module, a GPS module, a sensor module (e.g., a proximity sensor, an illuminance sensor, an acceleration sensor, a fingerprint sensor, etc.), and the like.
  • modules are only examples of a part of modules that may be included in the electronic device 900 , and the electronic device according to the embodiments of the present disclosure is not limited thereto. In other words, an electronic device according to other embodiments of the present disclosure may include more modules, fewer modules, or different modules.
  • the electronic device 900 as illustrated in FIG. 9 may perform various steps of the various methods as described in connection with FIGS. 1, 3, and/or 4.
  • the memory 920 stores instructions that, when executed by the processor 910, may cause the processor 910 to perform the various steps of the various methods as described in connection with FIGS. 1, 3, and/or 4.
  • the terms “first” and “second” are used for descriptive purposes only and cannot be construed as indicating or implying relative importance.
  • the term “multiple/plurality” refers to two or more, unless specifically defined otherwise.

Abstract

The present disclosure relates to a user identification method applicable to a vehicle, the vehicle including at least two microphone arrays, the respective microphone arrays being disposed at different positions of the vehicle, respectively. The user identification method includes: receiving a voice of a user within the vehicle through the at least two microphone arrays; determining directions from the user to the microphone arrays, respectively, according to the voice; calculating an angle between any two of the directions; and identifying a type of the user based at least on the angle.

Description

    CROSS-REFERENCE TO RELATED APPLICATION(S)
  • This application claims priority to Chinese Application No. 201810265531.1, entitled “USER IDENTIFICATION METHOD, USER IDENTIFICATION APPARATUS, AND ELECTRONIC DEVICE” and filed on Mar. 28, 2018, which is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • The present disclosure relates to the field of identification technology, and in particular to a user identification method and a user identification apparatus.
  • BACKGROUND
  • As the degree of intelligence of automobiles increases, a current automobile may already receive and identify the voice of a user inside the automobile to perform a corresponding action.
  • However, the current automobile can only identify the user's voice; it cannot identify the type of the user who produces the voice, for example whether the user is a driver or a passenger.
  • SUMMARY
  • The present disclosure provides a user identification method and a user identification apparatus.
  • According to a first aspect of the embodiments of the present disclosure, a user identification method is provided. The user identification method is applicable to a vehicle including at least two microphone arrays, the respective microphone arrays being disposed at different positions of the vehicle, respectively. The user identification method includes:
  • receiving a voice of a user within the vehicle through the at least two microphone arrays;
  • determining directions from the user to the microphone arrays, respectively, according to the voice;
  • calculating an angle between any two of the directions; and
  • identifying the type of the user based at least on the angle.
  • Optionally, the at least two microphone arrays consist of two microphone arrays.
  • Optionally, a first one of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and is in line with the driver seat and the copilot seat; and a second one of the two microphone arrays is disposed straight ahead of the copilot seat of the vehicle.
  • Optionally, the type comprises “driver” and “passenger”, wherein the identifying the type of the user based at least on the angle includes:
  • determining that the type is “driver” if the angle is greater than 90 degrees; and
  • determining that the type is “passenger” if the angle is less than or equal to 90 degrees.
  • Optionally, a first one of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and a second one of the two microphone arrays is disposed ahead of the copilot seat of the vehicle.
  • Optionally, the type comprises “driver” and “passenger”, wherein the identifying the type of the user based at least on the angle includes:
  • determining that the type is “driver” if the angle is greater than a threshold angle; and
  • determining that the type is “passenger” if the angle is less than the threshold angle.
  • Optionally, the user identification method further includes:
  • generating a control instruction according to the voice;
  • determining whether the user of the type has a privilege to execute the control instruction according to a correspondence between types and control instructions stored in advance;
  • and
  • executing the control instruction if the user of the type has the privilege.
  • Optionally, the user identification method further includes:
  • determining an intensity of the voice before identifying the type of the user based on at least the angle; and
  • the identifying the type of the user based on at least the angle includes:
  • identifying the type of the user based on the angle and the intensity.
  • According to a second aspect of the embodiments of the present disclosure, a user identification apparatus applicable to a vehicle is provided. The user identification apparatus includes:
  • a processor;
  • a memory storing instructions that, when executed by the processor, cause the processor to:
  • receive, from at least two microphone arrays included in the vehicle and disposed at different positions of the vehicle, respectively, a voice of a user within the vehicle detected by the at least two microphone arrays;
  • determine directions from the user to the microphone arrays, respectively, according to the voice;
  • calculate an angle between any two of the directions; and
  • identify a type of the user based at least on the angle.
  • Optionally, the at least two microphone arrays consist of two microphone arrays.
  • Optionally, a first one of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and is in line with the driver seat and the copilot seat; and a second one of the two microphone arrays is disposed straight ahead of the copilot seat of the vehicle.
  • Optionally, the type includes “driver” and “passenger”, and the instructions, when executed by the processor, further cause the processor to:
  • determine that the type is “driver” if the angle is greater than 90 degrees; and
  • determine that the type is “passenger” if the angle is less than or equal to 90 degrees.
  • Optionally, a first one of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and a second one of the two microphone arrays is disposed ahead of the copilot seat of the vehicle.
  • Optionally, the type includes “driver” and “passenger”, and the instructions, when executed by the processor, further cause the processor to:
  • determine that the type is “driver” if the angle is greater than a threshold angle; and
  • determine that the type is “passenger” if the angle is less than the threshold angle.
  • Optionally, the instructions, when executed by the processor, further cause the processor to:
  • generate a control instruction according to the voice;
  • determine whether the user of the type has a privilege to execute the control instruction according to a correspondence between types and control instructions stored in advance;
  • and
  • execute the control instruction if the user of the type has the privilege.
  • Optionally, the instructions, when executed by the processor, further cause the processor to:
  • determine an intensity of the voice; and
  • identify the type of the user based on the angle and the intensity.
  • Optionally, the at least two microphone arrays are a part of the user identification apparatus.
  • It should be appreciated that the above general description and the following detailed description are intended to be exemplary and illustrative but not restrictive.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings are incorporated in and constitute a part of the specification, showing embodiments in accordance with the present disclosure and serving, in conjunction with the specification, to explain the principles of the present disclosure.
  • FIG. 1 is a schematic flow chart of a user identification method according to an embodiment of the present disclosure;
  • FIGS. 2A to 2E are schematic diagrams of application scenarios of a user identification method according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic flow chart of another user identification method according to an embodiment of the present disclosure;
  • FIG. 4 is a schematic flow chart of another user identification method according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic diagram of an application scenario of another user identification method according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic block diagram of a user identification apparatus according to an embodiment of the present disclosure;
  • FIG. 7 is a schematic block diagram of another user identification apparatus according to an embodiment of the present disclosure;
  • FIG. 8 is a schematic block diagram of another user identification apparatus according to an embodiment of the present disclosure; and
  • FIG. 9 is an exemplary hardware arrangement diagram of a user identification apparatus according to an embodiment of the present disclosure.
  • DETAILED DESCRIPTION
  • Some exemplary embodiments of the present disclosure will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; instead, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
  • FIG. 1 is a schematic flow chart of a user identification method according to an embodiment of the present disclosure. The user identification method in the present embodiment may be applied to a vehicle, such as an automobile, an airplane, a ship, etc. The following embodiments are mainly exemplarily described by taking the automobile as an example of the vehicle. The vehicle includes at least two microphone arrays, the respective microphone arrays being disposed at different positions of the vehicle, respectively.
  • As shown in FIG. 1, the user identification method includes steps S1 to S4.
  • In step S1, a voice of a user within the vehicle is received through the at least two microphone arrays.
  • In an embodiment, the number of the microphone arrays may be set as required, for example, may be set to two or three, or even more. The following embodiments are mainly exemplarily illustrated in a case where two microphone arrays are provided.
  • In step S2, directions from the user to the microphone arrays are determined, respectively, according to the voice.
  • In an embodiment, each microphone array may consist of a plurality of microphones. After receiving a sound, each microphone array may determine the direction from the sound source to itself; this direction determination may be implemented by approaches in the related art and will not be repeated here.
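  • As an illustrative aside (not part of the patent), one conventional related-art approach for a pair of microphones is time-difference-of-arrival estimation via the GCC-PHAT cross-correlation. The minimal Python sketch below shows how a single assumed two-microphone pair with known spacing could estimate an arrival angle; all names, parameters, and the pair geometry are illustrative assumptions:

      import numpy as np

      SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

      def gcc_phat(sig, ref, fs):
          # Delay of `sig` relative to `ref`, in seconds, using the
          # phase-transform (PHAT) weighted cross-correlation.
          n = len(sig) + len(ref)
          spec = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
          spec /= np.abs(spec) + 1e-15      # keep phase information only
          cc = np.fft.irfft(spec, n=n)
          max_shift = n // 2
          cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
          return (np.argmax(np.abs(cc)) - max_shift) / fs

      def arrival_angle(sig_a, sig_b, fs, spacing):
          # Angle of arrival in degrees from broadside for one microphone pair.
          tau = gcc_phat(sig_a, sig_b, fs)
          return np.degrees(np.arcsin(np.clip(tau * SPEED_OF_SOUND / spacing, -1.0, 1.0)))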
  • In step S3, an angle between any two of the directions is calculated.
  • In an embodiment, each microphone array may transmit its determined direction to a processor connected to all of the microphone arrays. Since the directions from the user to the microphone arrays are vectors, the processor may calculate the angle between any two of the directions.
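  • For illustration only: if each determined direction is represented as a vector, e.g. a 2-D vector in the cabin plane (an assumed representation, not specified by the patent), the processor can obtain the angle from the dot product. A minimal sketch:

      import numpy as np

      def angle_between(d1, d2):
          # Angle in degrees between two nonzero direction vectors.
          cos = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
          return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))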
  • In step S4, a type of the user is identified based at least on the angle.
  • In an embodiment, different types of users within the vehicle are typically located at different positions: for example, the driver is located on the front-row seat corresponding to the steering wheel, while passengers are located on the copilot seat and the seats at the rear row. Accordingly, the angles between the directions from users at different positions to the two microphone arrays are different.
  • FIGS. 2A to 2E are schematic diagrams of application scenarios of a user identification method according to an embodiment of the present disclosure. As shown in FIG. 2A to FIG. 2E, by taking a small-size automobile in which a driver seat is located at the left side of the automobile as an example, the automobile may be provided with two microphone arrays. For example, a first microphone array M1 is disposed at the left side of the driver seat, and is in line with the driver seat and a copilot seat, a second microphone array M2 is disposed straight ahead of the copilot seat, the driver D is located on the driver seat, and passengers P1, P2, P3, and P4 are located on the copilot seat and the rear row, respectively.
  • As shown in FIG. 2A, an angle α1 between directions from the driver D on the driver seat to the first microphone array M1 and to the second microphone array M2 is greater than 90 degrees.
  • As shown in FIG. 2B, an angle α2 between directions from the passenger P1 on the copilot seat to the first microphone array M1 and to the second microphone array M2 is equal to 90 degrees.
  • As shown in FIG. 2C, an angle α3 between directions from the passenger P2 on the left of the rear row to the first microphone array M1 and to the second microphone array M2 is less than 90 degrees.
  • As shown in FIG. 2D, an angle α4 between directions from the passenger P3 on the middle of the rear row to the first microphone array M1 and to the second microphone array M2 is less than 90 degrees.
  • As shown in FIG. 2E, an angle α5 between directions from the passenger P4 on the right of the rear row to the first microphone array M1 and to the second microphone array M2 is less than 90 degrees.
  • As seen from FIGS. 2A to 2E, the angles formed between the directions from the users located at different positions in the vehicle to the two microphone arrays are different, wherein the angle formed between the directions from the user located at the position of the driver to the respective two microphone arrays (e.g., greater than 90 degrees) is significantly distinguished from the angles formed between the directions from the users located at other positions to the two respective microphone arrays (e.g., less than or equal to 90 degrees). Thus, the type of the user may be determined according to the angle between the directions from the user to the respective two microphone arrays.
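  • To make the geometry of FIGS. 2A to 2E concrete, the following non-authoritative sketch places the two arrays and the seats at invented cabin coordinates and applies the 90-degree rule of step S4; running it reproduces the obtuse/right/acute pattern described above:

      import numpy as np

      M1 = np.array([0.0, 0.0])   # left of the driver seat, in line with the front seats
      M2 = np.array([1.5, 1.0])   # straight ahead of the copilot seat
      SEATS = {
          "driver D":   np.array([0.5, 0.0]),
          "copilot P1": np.array([1.5, 0.0]),
          "rear P2":    np.array([0.5, -1.0]),
          "rear P3":    np.array([1.0, -1.0]),
          "rear P4":    np.array([1.5, -1.0]),
      }

      def subtended_angle(user):
          # Angle at the user's position between the directions to M1 and M2.
          v1, v2 = M1 - user, M2 - user
          cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
          return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

      for name, pos in SEATS.items():
          a = subtended_angle(pos)
          print(f"{name}: {a:5.1f} deg -> {'driver' if a > 90.0 else 'passenger'}")
      # driver D: 135.0 deg; copilot P1: 90.0 deg; the rear seats: below 90 deg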
  • After the type of the user is determined, it may in turn be determined, according to the determined type, whether the user who issued a control instruction has the privilege to execute that instruction once the voice of the user is received and the control instruction is generated; the control instruction is executed only when the user has the privilege, so as to ensure the security of executing the action.
  • It should be noted that the embodiments of the present disclosure are not limited to identifying these two user types, i.e., “driver” and “passenger”. Where other types of users are present in the vehicle, such as a ticket seller or a coach in addition to the driver and the passengers, those types may also be identified by the embodiments of the present disclosure.
  • Optionally, the at least two microphone arrays consist of two microphone arrays.
  • In an embodiment, the above directions may be calculated by two microphone arrays, so that only two microphone arrays are provided, which helps to reduce an overall hardware cost.
  • It should be noted that the number of the microphone arrays may be adjusted as needed.
  • Optionally, a first one of the two microphone arrays is disposed on a side of the driver seat of the vehicle that is away from the copilot seat, and is in line with the driver seat and the copilot seat; and a second one of the two microphone arrays is disposed straight ahead of the copilot seat of the vehicle.
  • The type includes “driver” and “passenger”, and wherein the identifying the type of the user based at least on the angle includes:
  • determining that the type is “driver” if the angle is greater than 90 degrees; and
  • determining that the type is “passenger” if the angle is less than or equal to 90 degrees.
  • It should be noted that the position of the microphone array may be adjusted as needed, and the types of the user are not limited to the above-mentioned “driver” and “passenger”, and may be determined according to specific circumstances.
  • Optionally, the first one of the two microphone arrays is disposed on a side of the driver seat of the vehicle that is away from the copilot seat, and the second one of the two microphone arrays is disposed ahead of the copilot seat of the vehicle.
  • The type includes “driver” and “passenger”, and wherein the identifying the type of the user based at least on the angle includes:
  • determining that the type is “driver” if the angle is greater than a threshold angle; and
  • determining that the type is “passenger” if the angle is less than the threshold angle.
  • In an embodiment, when the first microphone array and the second microphone array are disposed, a threshold angle may be determined. For example, the midpoint of the line connecting a central position of the driver seat and a central position of the copilot seat may first be determined; the angle formed between the directions from this midpoint to the first microphone array and to the second microphone array is then calculated as the threshold angle. The threshold angle is less than the angle formed between the directions from the driver seat to the two microphone arrays, and greater than the angle formed between the directions from the copilot seat to the two microphone arrays.
  • In the embodiment described earlier, the type is determined to be “driver” or “passenger” according to whether the angle between the two directions is greater than, equal to, or less than 90 degrees. However, the user may not be at the central position of his seat when he utters the voice; for example, if the head of the passenger on the copilot seat deviates from the central position of the copilot seat toward the driver seat, the calculated angle between the two directions is also an obtuse angle, which may misidentify the passenger sitting on the copilot seat as the driver.
  • According to the present embodiment, the type of the user is instead determined by comparing the angle between the two directions with the threshold angle: if the angle is greater than the threshold angle, the type of the user is determined to be “driver”; and if the angle is less than the threshold angle, the type of the user is determined to be “passenger”. Even if the head of the user is slightly deviated from the central position of the corresponding seat when the user utters the voice, the type of the user may still be determined accurately. For example, if the head of the passenger on the copilot seat is slightly deviated toward the driver seat when he utters the voice, the angle between the two directions from the passenger to the first and second microphone arrays is still calculated to be less than the threshold angle, as long as his head does not cross the midpoint described above toward the driver seat. Thus, according to the present embodiment, the type of the user is correctly determined to be “passenger”, and the type of the user may be determined accurately.
  • Also, according to the present embodiment, the requirements for positioning the first microphone array and the second microphone array may be relaxed appropriately: it is no longer necessary to ensure that the first microphone array is in line with the driver seat and the copilot seat, or that the second microphone array is disposed straight ahead of the copilot seat. The type of the user is still determined accurately as long as the first microphone array is disposed on the side of the driver seat away from the copilot seat (it may deviate forward or backward) and the second microphone array is disposed ahead of the copilot seat (it may deviate to the left or right). Thus, the arrangement of the first and second microphone arrays may be simplified. The sketch below illustrates the threshold construction.
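  • The threshold-angle construction described above can be sketched as follows; this is a minimal illustration, assuming the seat and array positions are known in a top-down cabin coordinate frame, and all coordinate values are made up:

```python
import numpy as np

def angle_at(point: np.ndarray, m1: np.ndarray, m2: np.ndarray) -> float:
    """Angle (degrees) at `point` between the directions to arrays m1 and m2."""
    d1, d2 = m1 - point, m2 - point
    cos_t = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

driver_seat  = np.array([1.0, 0.0])    # hypothetical cabin coordinates
copilot_seat = np.array([-1.0, 0.0])
m1 = np.array([2.0, 0.3])              # beside the driver seat, deviated forward
m2 = np.array([-0.8, 1.5])             # ahead of the copilot seat, deviated right

# Threshold: the angle seen from the midpoint between the two seats.
midpoint = (driver_seat + copilot_seat) / 2.0
threshold = angle_at(midpoint, m1, m2)

# The construction guarantees the ordering the classifier relies on:
assert angle_at(copilot_seat, m1, m2) < threshold < angle_at(driver_seat, m1, m2)

def classify(user_pos: np.ndarray) -> str:
    return "driver" if angle_at(user_pos, m1, m2) > threshold else "passenger"
```

  With these sample positions the threshold comes out to roughly 110°, sitting between the roughly 77° seen from the copilot seat and the roughly 123° seen from the driver seat.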
  • FIG. 3 is a schematic flow chart of another user identification method according to an embodiment of the present disclosure. As shown in FIG. 3, on a basis of the embodiment as shown in FIG. 1, the user identification method further includes:
  • step S5 of generating a control instruction according to the voice;
  • step S6 of determining whether the user of the type has a privilege to execute the control instruction according to a correspondence between types and control instructions stored in advance;
  • step S7 of executing the control instruction if the user of the type has the privilege.
  • In an embodiment, the control instruction may include a control instruction for controlling the vehicle, such as controlling steering of the vehicle, gear switching, cruise control, navigation, driving recorder shooting, lights on/off, switching of a rear view camera, and the like; and may also include a control instruction for controlling auxiliary functions of the vehicle, such as adjusting a temperature of an air conditioner, song switching, seat angle adjustment, window adjustment, radio volume adjustment, etc.
  • In general, only the driver has the privilege to execute control instructions for controlling the vehicle, while the passenger may execute only control instructions for controlling the auxiliary functions of the vehicle. Therefore, a correspondence between the type “passenger” and the control instructions for the auxiliary functions, and a correspondence between the type “driver” and all of the control instructions, may be stored in advance. On this basis, when the user is determined to be a driver, the user has the privilege to execute any of the control instructions; when the user is determined to be a passenger, it is further determined whether the control instruction controls an auxiliary function of the vehicle, and if not, the user has no privilege to execute it. This prevents the passenger from controlling the vehicle and interfering with the driver, ensuring the safety of the driver driving the vehicle. A sketch of this privilege check follows.
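  • A minimal sketch of the privilege check in steps S5 to S7: the instruction names and the table layout are illustrative assumptions, since the patent only requires that a correspondence between types and control instructions be stored in advance.

```python
# Hypothetical instruction sets; the names are illustrative, not from the patent.
VEHICLE_CONTROL = {"steering", "gear_switch", "cruise_control", "navigation",
                   "driving_recorder", "lights", "rear_view_camera"}
AUXILIARY = {"air_conditioner", "song_switch", "seat_angle",
             "window", "radio_volume"}

# Correspondence between types and permitted instructions (stored in advance).
PRIVILEGES = {
    "driver": VEHICLE_CONTROL | AUXILIARY,  # the driver may execute everything
    "passenger": AUXILIARY,                 # passengers get auxiliary functions only
}

def try_execute(user_type: str, instruction: str) -> bool:
    """Execute the instruction only if the identified type holds the privilege."""
    if instruction in PRIVILEGES.get(user_type, set()):
        print(f"executing {instruction}")
        return True
    print(f"rejected: {user_type!r} may not execute {instruction!r}")
    return False

try_execute("passenger", "air_conditioner")  # allowed
try_execute("passenger", "steering")         # rejected; the driver is not disturbed
```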
  • FIG. 4 is a schematic flow chart of another user identification method according to an embodiment of the present disclosure. As shown in FIG. 4, on a basis of the embodiment as shown in FIG. 1, the user identification method further includes:
  • step S8 of determining an intensity of the voice before the type of the user is identified according to the angle; step S8 may be performed after step S3, as shown in FIG. 4,
  • or the execution order may be adjusted as needed, as long as step S8 is executed between steps S1 and S401.
  • The identifying the type of the user based on at least the angle includes:
  • step S401 of identifying the type of the user according to the angle and the intensity.
  • In an embodiment, the positions of the microphone arrays for receiving the voice of the user may be different from those in the embodiments as shown in FIGS. 2A-2E. FIG. 5 is a schematic diagram of an application scenario of another user identification method according to an embodiment of the present disclosure.
  • As shown in FIG. 5, the first microphone array M1 is disposed on the left side of the driver position, and the second microphone array M2 is disposed on the right side of the copilot position, whereby the angle between the directions from the driver at the driver position to the two microphone arrays is a flat angle, that is, 180°; however, the angle between the directions from the passenger at the copilot position to the two microphone arrays is also a flat angle. The driver and the passenger therefore cannot be distinguished based on the angle alone.
  • However, the driver at the driver position is closer to the first microphone array and farther from the second, while the passenger at the copilot position is closer to the second microphone array and farther from the first. The intensity of the voice received by a microphone array decreases as the distance from the voice source to the array increases. Therefore, when the angle is determined to be 180°, it may further be determined which array received the voice with the greater intensity: if the first microphone array received the greater intensity, the type of the user is determined to be “driver”; if the second microphone array received the greater intensity, the type is determined to be “passenger”. This ensures that the type of the user is determined accurately; a sketch of this fallback follows.
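  • A sketch of the combined angle/intensity decision for the FIG. 5 layout: the flat-angle tolerance, the fallback 90-degree rule, and the function names are assumptions made for illustration only.

```python
def identify(angle_deg: float, intensity_m1: float, intensity_m2: float,
             flat_tol_deg: float = 5.0) -> str:
    """Fall back to an intensity comparison when the angle is (nearly) flat."""
    if abs(angle_deg - 180.0) <= flat_tol_deg:
        # Louder at M1 (driver side) means the speaker sits nearer to M1.
        return "driver" if intensity_m1 > intensity_m2 else "passenger"
    # Otherwise the angle alone is discriminative (e.g., the 90-degree rule).
    return "driver" if angle_deg > 90.0 else "passenger"

print(identify(180.0, intensity_m1=0.8, intensity_m2=0.3))  # driver
print(identify(178.0, intensity_m1=0.2, intensity_m2=0.9))  # passenger
```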
  • It should be noted that the embodiment shown in FIG. 4 is not limited to the scenario shown in FIG. 5; it is also applicable to other manners of disposing the microphone arrays.
  • FIG. 6 is a schematic block diagram of a user identification apparatus according to an embodiment of the present disclosure. As shown in FIG. 6, the user identification apparatus is applicable to a vehicle, wherein the vehicle includes at least two microphone arrays, the respective microphone arrays are disposed at different positions of the vehicle, and the at least two microphone arrays are used for receiving the voice of the user within the vehicle. The user identification apparatus includes:
  • a direction determining module 1 configured to determine directions from the user to the microphone arrays, respectively, according to the voice;
  • an angle calculation module 2 configured to calculate an angle between any two of the directions;
  • a user identification module 3 configured to identify the type of the user based at least on the angle.
  • Optionally, the at least two microphone arrays include two microphone arrays.
  • Optionally, a first one of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and is in line with the driver seat and the copilot seat; and a second one of the two microphone arrays is disposed straight ahead of the copilot seat of the vehicle.
  • The type includes “driver” and “passenger”, wherein the user identification module determines that the type is “driver” if the angle is greater than 90 degrees; and determines that the type is “passenger” if the angle is less than or equal to 90 degrees.
  • Optionally, a first one of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and a second one of the two microphone arrays is disposed ahead of the copilot seat of the vehicle.
  • The type includes “driver” and “passenger”, wherein the user identification module determines that the type is “driver” if the angle is greater than a threshold angle; and determines that the type is “passenger” if the angle is less than the threshold angle.
  • FIG. 7 is a schematic block diagram of another user identification apparatus according to an embodiment of the present disclosure. As shown in FIG. 7, on a basis of the embodiment as shown in FIG. 6, the user identification apparatus further includes:
  • an instruction generation module 4 configured to generate a control instruction according to the voice;
  • a privilege determination module 5 configured to determine whether the user of the type has a privilege to execute the control instruction according to a correspondence between types and control instructions stored in advance; and
  • an instruction execution module 6 configured to execute the control instruction if the user of the type has the privilege.
  • FIG. 8 is a schematic block diagram of another user identification apparatus according to an embodiment of the present disclosure. As shown in FIG. 8, on a basis of the embodiment as shown in FIG. 6, the user identification apparatus further includes:
  • an intensity determination module 7 configured to determine an intensity of the voice;
  • wherein the user identification module 3 is configured to identify the type of the user according to the angle and the intensity.
  • An embodiment of the present disclosure further provides an electronic device, the electronic device being disposed on a vehicle, and including:
  • at least two microphone arrays, wherein the respective microphone arrays are disposed at different positions of the vehicle, respectively; and
  • a processor configured to perform the steps of the user identification method as described in any of the above embodiments.
  • FIG. 9 is an exemplary hardware arrangement diagram of a user identification apparatus according to an embodiment of the present disclosure. As shown in FIG. 9, the electronic device 900 may include a processor 910, a memory 920, an input device 930/output device 940, and the like. It should be noted that the embodiment as shown in FIG. 9 is for illustrative purposes only, and thus does not limit the present disclosure in any way. In fact, the electronic device 900 may include more, fewer, or different modules, and may be a separate device or a distributed device spread across multiple locations. For example, the electronic device 900 may include, but is not limited to, a satellite controller, an onboard computer, a personal computer (PC), a server, a server cluster, a computing cloud, a workstation, a terminal, a tablet, a laptop, a smart phone, a media player, a wearable device, and/or a home appliance (e.g., a television, a set top box, a DVD player), and the like.
  • The processor 910 may be a component responsible for the overall operation of the electronic device 900, which may be communicatively connected to various other modules/components to receive data and/or instructions to be processed from them, and to send processed data and/or instructions back to them. The processor 910 may be, for example, a general purpose processor, such as a central processing unit (CPU), a digital signal processor (DSP), an application processor (AP), and the like. In this case, it may perform one or more of the steps of the user identification method according to the embodiments of the present disclosure as previously described, in accordance with the instructions/programs/code stored in the memory 920. Moreover, the processor 910 may also be, for example, a special purpose processor, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or the like. In this case, it may perform one or more of the steps of the user identification method as previously described in accordance with its circuit design. Moreover, the processor 910 may also be any combination of hardware, software and/or firmware. Moreover, although only one processor 910 is shown in FIG. 9, in practice the processor 910 may include one or more processing units distributed at one or more locations.
  • The memory 920 may be configured to temporarily or persistently store computer executable instructions that, when executed by the processor 910, may cause the processor 910 to perform one or more of the steps of the methods described in the present disclosure. In addition, the memory 920 may also be configured to temporarily or persistently store data related to these steps, such as voice data, threshold data, intensity data, and the like. The memory 920 may include a volatile memory and/or a non-volatile memory. The volatile memory may include, for example, but not limited to, dynamic random access memory (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), cache, and the like. The non-volatile memory may include, for example, but not limited to, one-time programmable read only memory (OTPROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), mask ROM, flash ROM, flash memory (e.g., NAND flash, NOR flash, etc.), hard drive or solid state drive (SSD), compact flash (CF), secure digital (SD), micro SD, mini SD, extreme digital (xD), multimedia card (MMC), memory stick, etc. In addition, the memory 920 may also be a remote storage device, such as network attached storage (NAS) or the like. The memory 920 may also include distributed storage devices distributed at multiple locations, such as cloud storage.
  • The input device 930/output device 940 may be configured to receive input from the outside and/or provide output to the outside. Although the input device 930 and the output device 940 are shown as separate devices in the embodiment of FIG. 9, they may actually be an integrated input and output device. For example, the input device 930/output device 940 may include, but is not limited to, a keyboard, mouse, microphone, camera, display, touch screen display, printer, speaker, earphone, or any other device that may be used for input/output, and the like. In addition, the input device 930/output device 940 may also be an interface configured to be connected to such devices, for example a headphone interface, a microphone interface, a keyboard interface, a mouse interface, and the like. In this case, the electronic device 900 may be connected to an external input/output device through the interface and thereby implement an input/output function. In other words, the electronic device 900 may or may not include one or more of the microphone arrays mentioned previously.
  • In addition, the electronic device 900 may also include other modules not shown in FIG. 9, such as a communication module. The communication module may be configured to enable the electronic device 900 to communicate with other electronic devices and exchange various data. The communication module may be, for example, an Ethernet interface card, a USB module, a serial line interface card, a fiber interface card, a telephone line modem, an xDSL modem, a Wi-Fi module, a Bluetooth module, a 2G/3G/4G/5G communication module, and the like. The communication module may also be regarded as a part of the input device 930/output device 940 in the sense of data input/output.
  • In addition, the electronic device 900 may also include other modules including, for example, but not limited to: a power module, a GPS module, a sensor module (e.g., a proximity sensor, an illuminance sensor, an acceleration sensor, a fingerprint sensor, etc.), and the like.
  • However, it should be noted that the above-described modules are only examples of a part of modules that may be included in the electronic device 900, and the electronic device according to the embodiments of the present disclosure is not limited thereto. In other words, an electronic device according to other embodiments of the present disclosure may include more modules, fewer modules, or different modules.
  • In some embodiments, the electronic device 900 as illustrated in FIG. 9 may perform various steps of the various methods as described in connection with FIGS. 1, 3, and/or 4. In some embodiments, the memory 920 stores instructions that, when executed by the processor 910, may cause the processor 910 to perform the various steps of the various methods as described in connection with FIGS. 1, 3, and/or 4.
  • According to the above embodiments, the angles formed between the directions from users located at different positions in the vehicle to the two microphone arrays are different; in particular, the angle formed between the directions from a user at the driver position to the two microphone arrays is clearly distinguishable from the angles formed for users at other positions. Thus, the type of the user may be determined according to the angle between the directions from the user to the two microphone arrays.
  • After the voice of the user is received and a control instruction is generated, it may further be determined, according to the identified type, whether the user who issued the control instruction has the privilege to execute it; the control instruction is executed only when the user has the privilege, thereby ensuring the security of executing the action.
  • In the present disclosure, the terms “first” and “second” are used for descriptive purposes only but cannot be construed as indicating or implying relative importance. The term “multiple/plurality” refers to two or more, unless specifically defined otherwise.
  • Other embodiments of the present disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the present disclosure. The present disclosure is intended to cover any variations, uses, or adaptations thereof that follow its general principles and include such departures from the present disclosure as come within common general knowledge or customary technical means in the art. The specification and embodiments are to be considered as illustrative only, with the true scope and spirit of the present disclosure being defined by the appended claims.
  • It is to be understood that the present disclosure is not limited to the accurate structures which have been described and shown in the drawings, and may be modified and changed in any way within the scope of the present disclosure. The scope of the present disclosure is limited by the appended claims only.

Claims (17)

1. A user identification method applicable to a vehicle comprising at least two microphone arrays, the respective microphone arrays being disposed at different positions of the vehicle, respectively, the user identification method comprising:
receiving a voice of a user within the vehicle through the at least two microphone arrays;
determining directions from the user to the microphone arrays according to the voice;
calculating an angle between any two of the directions;
identifying a type of the user based at least on the angle; and
determining an intensity of the voice before identifying the type of the user based on at least the angle;
wherein identifying the type of the user based on at least the angle comprises:
identifying the type of the user based on the angle and the intensity.
2. The user identification method of claim 1, wherein the at least two microphone arrays consist of two microphone arrays.
3. The user identification method of claim 2, wherein a first microphone array of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and is in line with the driver seat and the copilot seat; and a second microphone array of the two microphone arrays is disposed straight ahead of the copilot seat of the vehicle.
4. The user identification method of claim 3, wherein the type comprises one of “driver” and “passenger”, and wherein identifying the type of the user based at least on the angle comprises:
determining that the type is “driver” if the angle is greater than 90 degrees; and
determining that the type is “passenger” if the angle is less than or equal to 90 degrees.
5. The user identification method of claim 2, wherein a first microphone array of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and a second microphone array of the two microphone arrays is disposed ahead of the copilot seat of the vehicle.
6. The user identification method of claim 5, wherein the type comprises one of “driver” and “passenger”, and wherein identifying the type of the user based at least on the angle comprises:
determining that the type is “driver” if the angle is greater than a threshold angle; and
determining that the type is “passenger” if the angle is less than the threshold angle.
7. The user identification method of claim 1, further comprising:
generating a control instruction according to the voice;
determining whether the user of the type has a privilege to execute the control instruction according to a correspondence between types and control instructions stored in advance; and
executing the control instruction if the user of the type has the privilege.
8. (canceled)
9. A user identification apparatus applicable to a vehicle, the user identification apparatus comprising:
a processor;
a memory storing instructions that, when executed by the processor, cause the processor to:
receive, from at least two microphone arrays comprised in the vehicle and disposed at different positions of the vehicle, respectively, a voice of a user within the vehicle detected by the at least two microphone arrays;
determine directions from the user to the microphone arrays according to the voice;
calculate an angle between any two of the directions;
identify a type of the user based at least on the angle; and
determine an intensity of the voice before identifying the type of the user based on at least the angle;
wherein identifying the type of the user based on at least the angle comprises:
identifying the type of the user based on the angle and the intensity.
10. The user identification apparatus of claim 9, wherein the at least two microphone arrays consist of two microphone arrays.
11. The user identification apparatus of claim 10, wherein a first microphone array of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and is in line with the driver seat and the copilot seat; and a second microphone array of the two microphone arrays is disposed straight ahead of the copilot seat of the vehicle.
12. The user identification apparatus of claim 11, wherein the type comprises one of “driver” and “passenger”, and the instructions, when executed by the processor, further cause the processor to:
determine that the type is “driver” if the angle is greater than 90 degrees; and
determine that the type is “passenger” if the angle is less than or equal to 90 degrees.
13. The user identification apparatus of claim 10, wherein a first microphone array of the two microphone arrays is disposed on a side of a driver seat of the vehicle that is away from a copilot seat, and a second microphone array of the two microphone arrays is disposed ahead of the copilot seat of the vehicle.
14. The user identification apparatus of claim 13, wherein the type comprises one of “driver” and “passenger”, and the instructions, when executed by the processor, further cause the processor to:
determine that the type is “driver” if the angle is greater than a threshold angle; and
determine that the type is “passenger” if the angle is less than the threshold angle.
15. The user identification apparatus of claim 9, wherein the instructions, when executed by the processor, further cause the processor to:
generate a control instruction according to the voice;
determine whether the user of the type has a privilege to execute the control instruction according to a correspondence between types and control instructions stored in advance; and
execute the control instruction if the user of the type has the privilege.
16. (canceled)
17. The user identification apparatus of claim 9, wherein the at least two microphone arrays are a part of the user identification apparatus.
US16/172,102 2018-03-28 2018-10-26 User identification method and user identification apparatus Active US10451710B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201810265531 2018-03-28
CN201810265531.1A CN108597508B (en) 2018-03-28 2018-03-28 User identification method, user identification device and electronic equipment
CN201810265531.1 2018-03-28

Publications (2)

Publication Number Publication Date
US20190302223A1 true US20190302223A1 (en) 2019-10-03
US10451710B1 US10451710B1 (en) 2019-10-22

Family

ID=63623825

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/172,102 Active US10451710B1 (en) 2018-03-28 2018-10-26 User identification method and user identification apparatus

Country Status (2)

Country Link
US (1) US10451710B1 (en)
CN (1) CN108597508B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959870A (en) * 2018-07-10 2018-12-07 北京奇艺世纪科技有限公司 A kind of user identification method and device
CN113870555A (en) * 2021-09-08 2021-12-31 南京静态交通产业技术研究院 Man-vehicle cooperative identification method based on mobile phone IMSI code and electronic license plate

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584871B (en) * 2018-12-04 2021-09-03 北京蓦然认知科技有限公司 User identity recognition method and device of voice command in vehicle
CN109781134A (en) * 2018-12-29 2019-05-21 百度在线网络技术(北京)有限公司 Navigation control method, device, engine end and storage medium
CN112017659A (en) * 2020-09-01 2020-12-01 北京百度网讯科技有限公司 Processing method, device and equipment for multi-sound zone voice signals and storage medium

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005029040A (en) * 2003-07-07 2005-02-03 Denso Corp Indicator angle adjuster and program for the same in vehicle
DE602005008005D1 (en) * 2005-02-23 2008-08-21 Harman Becker Automotive Sys Speech recognition system in a motor vehicle
JP3906230B2 (en) * 2005-03-11 2007-04-18 株式会社東芝 Acoustic signal processing apparatus, acoustic signal processing method, acoustic signal processing program, and computer-readable recording medium recording the acoustic signal processing program
JP4131978B2 (en) * 2006-02-24 2008-08-13 本田技研工業株式会社 Voice recognition device controller
US8214219B2 (en) * 2006-09-15 2012-07-03 Volkswagen Of America, Inc. Speech communications system for a vehicle and method of operating a speech communications system for a vehicle
US7747446B2 (en) * 2006-12-12 2010-06-29 Nuance Communications, Inc. Voice recognition interactive system with a confirmation capability
US20090055178A1 (en) * 2007-08-23 2009-02-26 Coon Bradley S System and method of controlling personalized settings in a vehicle
ATE554481T1 (en) * 2007-11-21 2012-05-15 Nuance Communications Inc TALKER LOCALIZATION
CN102854493B (en) * 2011-06-27 2014-07-16 无锡物联网产业研究院 Method for calibrating coordinate and angle values for positioning and tracking system for multiple sounding arrays
CN102819009B (en) * 2012-08-10 2014-10-01 香港生产力促进局 Driver sound localization system and method for automobile
CN103293543B (en) * 2013-06-03 2015-05-06 安徽富煌和利时科技股份有限公司 System utilizing GPS (global positioning system) positioning information for automatically prompting driving sections and prompt method
KR102033309B1 (en) * 2013-10-25 2019-10-17 현대모비스 주식회사 Apparatus and method for controlling beam forming microphones considering location of driver seat
CN103544959A (en) * 2013-10-25 2014-01-29 华南理工大学 Verbal system and method based on voice enhancement of wireless locating microphone array
US9431013B2 (en) * 2013-11-07 2016-08-30 Continental Automotive Systems, Inc. Co-talker nulling for automatic speech recognition systems
KR101491354B1 (en) * 2013-11-25 2015-02-06 현대자동차주식회사 Apparatus and Method for Recognize of Voice
US9900688B2 (en) * 2014-06-26 2018-02-20 Intel Corporation Beamforming audio with wearable device microphones
DE102015220400A1 (en) * 2014-12-11 2016-06-16 Hyundai Motor Company VOICE RECEIVING SYSTEM IN THE VEHICLE BY MEANS OF AUDIO BEAMFORMING AND METHOD OF CONTROLLING THE SAME
JP2016127300A (en) * 2014-12-26 2016-07-11 アイシン精機株式会社 Speech processing unit
CN105989707B (en) * 2015-02-16 2021-05-28 杭州快迪科技有限公司 Method for determining relative position of GPS equipment and target position
US20180249267A1 (en) * 2015-08-31 2018-08-30 Apple Inc. Passive microphone array localizer
CN105280183B (en) * 2015-09-10 2017-06-20 百度在线网络技术(北京)有限公司 voice interactive method and system
US20190037363A1 (en) * 2017-07-31 2019-01-31 GM Global Technology Operations LLC Vehicle based acoustic zoning system for smartphones
CN107682553B (en) * 2017-10-10 2020-06-23 Oppo广东移动通信有限公司 Call signal sending method and device, mobile terminal and storage medium

Also Published As

Publication number Publication date
CN108597508A (en) 2018-09-28
US10451710B1 (en) 2019-10-22
CN108597508B (en) 2021-01-22

Similar Documents

Publication Publication Date Title
US10451710B1 (en) User identification method and user identification apparatus
KR102479072B1 (en) Method for Outputting Contents via Checking Passenger Terminal and Distraction
US9501693B2 (en) Real-time multiclass driver action recognition using random forests
US11386678B2 (en) Driver authentication for vehicle-sharing fleet
US9613459B2 (en) System and method for in-vehicle interaction
US10462281B2 (en) Technologies for user notification suppression
KR20190106916A (en) Acoustic control system, apparatus and method
CN108986819B (en) System and method for automatic speech recognition error detection for vehicles
CN109584871B (en) User identity recognition method and device of voice command in vehicle
JP2021190986A (en) Ultrasonic radar array, and obstacle detection method and system
CN112650977B (en) Method for protecting neural network model
US11511703B2 (en) Driver personalization for vehicle-sharing fleet
KR102395298B1 (en) Apparatus and method for controlling communication of vehicle
CN116018292A (en) System and method for object detection in an autonomous vehicle
US20140168058A1 (en) Apparatus and method for recognizing instruction using voice and gesture
US20230116572A1 (en) Autonomous vehicle, system for remotely controlling the same, and method thereof
US20230192084A1 (en) Autonomous vehicle, control system for sharing information with autonomous vehicle, and method thereof
US20160104417A1 (en) Messaging system for vehicle
CA3181843A1 (en) Methods and systems for monitoring driving automation
US20220130365A1 (en) Vehicle and control method thereof
US20220038909A1 (en) Systems and methods for bluetooth authentication using communication fingerprinting
US20210200627A1 (en) Integrity check device for safety sensitive data and electronic device including the same
KR20190104936A (en) Call quality improvement system, apparatus and method
KR102561458B1 (en) Voice recognition based vehicle control method and system therefor
US11804233B2 (en) Linearization of non-linearly transformed signals

Legal Events

Date Code Title Description
AS Assignment

Owner name: BOE TECHNOLOGY GROUP CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, HONGYANG;LI, XIN;YANG, XIANGDONG;REEL/FRAME:047328/0645

Effective date: 20180808

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4