CN115827075A - Device control method, device and storage medium - Google Patents

Device control method, device and storage medium

Info

Publication number
CN115827075A
CN115827075A (application CN202211473388.8A; granted publication CN115827075B)
Authority
CN
China
Prior art keywords
ultrasonic
audio
user equipment
preset
response curve
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211473388.8A
Other languages
Chinese (zh)
Other versions
CN115827075B (en)
Inventor
黄仁渭
程思
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Pinecone Electronic Co Ltd
Xiaomi Automobile Technology Co Ltd
Original Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd, Xiaomi Automobile Technology Co Ltd filed Critical Beijing Xiaomi Pinecone Electronic Co Ltd
Priority to CN202211473388.8A priority Critical patent/CN115827075B/en
Publication of CN115827075A publication Critical patent/CN115827075A/en
Application granted granted Critical
Publication of CN115827075B publication Critical patent/CN115827075B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/70 - Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The present disclosure relates to a device control method, apparatus, device, and storage medium. The method comprises: performing wake-up recognition on collected audio data; generating a wake-up event for the user equipment when a preset wake-up audio is determined to exist in the audio data; in response to generation of the wake-up event, performing ultrasonic perception detection on the audio data to generate an ultrasonic detection result; and prohibiting the user equipment from waking up when the ultrasonic detection result indicates that an ultrasonic perception event exists on the user equipment. In this way, the user equipment performs ultrasonic perception detection on the received audio data and, when the audio data shows that an ultrasonic perception event exists, its wake-up is prohibited. Wake-up of user equipment in the in-vehicle environment is thereby suppressed, in-vehicle cooperative wake-up confusion is avoided, and the user's device wake-up experience is improved.

Description

Device control method, device and storage medium
Technical Field
The present disclosure relates to the field of automatic driving, and in particular, to a method, an apparatus, a device, and a storage medium for controlling a device.
Background
When IoT (Internet of Things) devices such as the vehicle-mounted terminal of a smart vehicle and mobile terminals participate in cooperative wake-up inside the vehicle, several smart devices may all wake up and respond to the user's wake-up speech, causing wake-up confusion in the vehicle application scenario. Normally, devices other than the vehicle-mounted terminal are suppressed from waking up, so that the vehicle-mounted terminal responds preferentially and the confusion is avoided. In the related art, however, the cooperative wake-up suppression used in the vehicle application scenario works poorly, so priority wake-up of the vehicle-mounted terminal cannot be guaranteed and devices still wake up in a disordered manner.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a device control method, apparatus, device, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a device control method, applied to a user equipment, the method including:
in response to collected audio data, performing wake-up recognition on the audio data;
generating a wake-up event of the user equipment in a case that a preset wake-up audio is determined to exist in the audio data;
in response to generation of the wake-up event, performing ultrasonic perception detection on the audio data to generate an ultrasonic detection result; and
prohibiting the user equipment from waking up in a case that an ultrasonic perception event is determined, from the ultrasonic detection result, to exist on the user equipment.
Optionally, the performing ultrasonic perception detection on the audio data to generate an ultrasonic detection result includes:
filtering the audio data according to a preset frequency range to generate an ultrasonic audio to be detected;
generating a frequency response curve to be detected according to the ultrasonic audio to be detected;
and comparing the frequency response curve to be detected with a preset frequency response curve to generate the ultrasonic detection result.
Optionally, the comparing the frequency response curve to be detected with a preset frequency response curve to generate the ultrasonic detection result includes:
generating a first ultrasonic detection result under the condition that the frequency response curve to be detected is matched with the preset frequency response curve, wherein the first ultrasonic detection result is used for indicating that the ultrasonic sensing event exists in the user equipment;
and generating a second ultrasonic detection result under the condition that the frequency response curve to be detected is not matched with the preset frequency response curve, wherein the second ultrasonic detection result is used for indicating that the ultrasonic perception event does not exist in the user equipment.
Optionally, the generating a to-be-detected frequency response curve according to the to-be-detected ultrasonic audio includes:
intercepting a plurality of ultrasonic sub-audios to be detected from the ultrasonic audios to be detected according to a preset interception rule;
superposing the multiple ultrasonic sub-audio frequencies to be detected to generate a target ultrasonic sub-audio frequency to be detected;
and generating the frequency response curve to be detected according to the target ultrasonic sub-audio to be detected.
Optionally, the comparing the frequency response curve to be detected with a preset frequency response curve to generate the ultrasonic detection result includes:
determining a first amplitude relation among the frequency audios in the frequency response curve to be detected, and determining a second amplitude relation among the frequency audios in the preset frequency response curve;
comparing the first amplitude relationship and the second amplitude relationship to generate the ultrasonic detection result.
Optionally, the method comprises:
and controlling the user equipment to perform awakening response under the condition that the ultrasonic sensing event does not exist in the user equipment according to the ultrasonic detection result.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus control method applied to an in-vehicle apparatus, including:
sending a preset ultrasonic audio to user equipment, wherein the preset ultrasonic audio is used for indicating that the user equipment is forbidden to be awakened;
and responding to the received preset awakening audio, and controlling the vehicle-mounted equipment to perform awakening response.
Optionally, the method comprises:
the preset ultrasonic audio is determined by the following formula:
(The formula is reproduced only as an image in the original publication: Figure BDA0003954051970000031.)
where k is the audio frequency, i is a discrete-data sampling point, N is the total number of discrete-data sampling points, F is the initial frequency, and B is the number of data bits.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus control device, applied to a user equipment, including:
the identification module is configured to respond to the collected audio data and carry out awakening identification on the audio data;
a first generation module configured to generate a wake event of the user equipment in case that it is determined that a preset wake audio exists in the audio data;
a second generation module configured to perform ultrasonic perception detection on the audio data in response to the generation of the wake event to generate an ultrasonic detection result;
an execution module configured to disable wake-up of the user equipment if it is determined from the ultrasound detection result that there is an ultrasound sensing event in the user equipment.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an apparatus control device applied to an in-vehicle apparatus, including:
a sending module configured to send a preset ultrasonic audio to a user equipment, wherein the preset ultrasonic audio is used for indicating that the user equipment is prohibited from being awakened;
and an execution module configured to, in response to receiving the preset wake-up audio, control the vehicle-mounted device to perform a wake-up response.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a user equipment, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to carry out the steps of the method of any one of the first aspect of the disclosure when executing the executable instructions.
According to a sixth aspect of the embodiments of the present disclosure, there is provided an in-vehicle apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to, when executing the executable instructions, perform the steps of the method of any one of the second aspects of the present disclosure.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of any one of the first aspects of the present disclosure, or which, when executed by a processor, implement the steps of the method of any one of the second aspects of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
Wake-up recognition is performed on collected audio data; a wake-up event of the user equipment is generated when a preset wake-up audio is determined to exist in the audio data; in response to generation of the wake-up event, ultrasonic perception detection is performed on the audio data to generate an ultrasonic detection result; and the user equipment is prohibited from waking up when the ultrasonic detection result indicates that an ultrasonic perception event exists on the user equipment. The user equipment thus performs ultrasonic perception detection on the received audio data and is prohibited from waking up when an ultrasonic perception event is determined to exist, so that wake-up of user equipment in the in-vehicle environment is suppressed, in-vehicle cooperative wake-up confusion is avoided, and the user's device wake-up experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart illustrating a device control method according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating a method of device control according to an exemplary embodiment.
Fig. 3 is an exemplary diagram illustrating a frequency response to be measured and a predetermined frequency response according to an exemplary embodiment.
FIG. 4 is a flow chart illustrating a method of device control according to an exemplary embodiment.
Fig. 5 is an exemplary diagram illustrating a waveform of a fundamental wave according to one exemplary embodiment.
FIG. 6 is a flow chart illustrating a method of device control according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating an appliance control device according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating an appliance control device according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating a user device in accordance with an example embodiment.
FIG. 10 is a block diagram illustrating an in-vehicle device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Fig. 1 is a flowchart illustrating a device control method according to an exemplary embodiment. The method is applied to a user equipment and, as shown in Fig. 1, includes the following steps.
In step S101, in response to the collected audio data, wake-up recognition is performed on the audio data.
It should be noted that this embodiment is applied to a user equipment in which a speech recognition module is loaded to perform speech recognition on received audio data. For example, by analyzing the collected audio data, the user equipment may wake itself up and perform a wake-up response when it determines that the audio data contains wake-up audio uttered by the user. For instance, the user equipment collects audio data within a preset range of its environment, and after determining that the audio data contains the preset wake-up audio uttered by the user, the screen of the user equipment may light up and a preset wake-up response audio may be played to prompt the user that the device has been woken up.
It should also be noted that the user equipment in this embodiment is placed in a vehicle on which a vehicle-mounted device is installed. The vehicle-mounted device has a wake-up response function and can be woken up by the preset wake-up audio uttered by a user. Several user equipments may be present in the vehicle; for example, when several passengers are riding, the user equipments may be the passengers' mobile phones. When the driver utters the preset wake-up audio to wake up the vehicle-mounted device, the vehicle-mounted device and the user equipments all receive it, and every user equipment may wake up and respond, causing wake-up confusion between the user equipments and the vehicle-mounted device. In this embodiment, therefore, the wake-up responses of the other user equipments in the vehicle are suppressed, raising the wake-up priority of the vehicle-mounted device so that it is the device woken up by the preset wake-up speech uttered in the vehicle.
For example, in this embodiment the user equipment collects audio from the environment in real time, generates corresponding audio data, performs wake-up recognition on the audio data, and determines whether the audio contains the preset wake-up audio uttered by the user. Speech recognition analysis may be performed on the audio data to determine whether a preset speech exists in it; the preset speech may be a wake-up phrase recorded into the user equipment in advance according to the user's habits. Alternatively, text recognition analysis may be performed on the audio data to determine its semantic information, which is then compared with preset semantic information to determine whether the two match.
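At the semantic-comparison level only, the recognition described above can be sketched as a trivial substring match; the wake phrase and matching rule below are placeholders, since the patent specifies neither.

```python
def wake_up_recognition(transcript: str,
                        preset_wake_word: str = "hi assistant") -> bool:
    """Hypothetical check: does the recognized semantic information
    contain the preset wake-up phrase? (Placeholder phrase and rule.)"""
    return preset_wake_word in transcript.strip().lower()
```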
In step S102, in case that it is determined that the preset wake-up audio exists in the audio data, a wake-up event of the user equipment is generated.
For example, in this embodiment the audio data is analyzed for wake-up through the above steps, and when the preset wake-up audio is determined to exist in the audio data, a wake-up event is generated in the user equipment. It is worth mentioning that after the wake-up event is generated, the user equipment does not immediately perform a wake-up response; waking the user equipment requires a secondary determination based on the wake-up event, and the user equipment is controlled to respond to the wake-up event only when a preset condition is met.
In step S103, in response to the generation of the wake-up event, an ultrasonic sensing detection is performed on the audio data to generate an ultrasonic detection result.
It should be noted that in this embodiment, when the vehicle meets a preset condition, a preset ultrasonic audio may be transmitted into a preset range through the vehicle-mounted device, where the preset range may be the passenger area inside the vehicle and the preset condition may be a vehicle driving state. For example, when the vehicle-mounted device detects that the vehicle is being driven, it transmits the preset ultrasonic audio into the interior space; or, when it detects that the vehicle has been started and the doors are closed, it transmits the preset ultrasonic audio into the riding space. The preset ultrasonic audio may be an audio signal of 20 kHz to 30 kHz and may have a transmission cycle: when the preset condition is met, the vehicle-mounted device transmits the preset ultrasonic audio into the preset range once per cycle. For example, with a transmission cycle of 1 min, the vehicle-mounted device transmits the preset ultrasonic audio into the preset range every minute.
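The transmission condition and cycle described above can be condensed into a small scheduling helper; the helper name and the exact precondition (vehicle running with doors closed) are one illustrative reading of the text, not claimed details.

```python
ULTRASONIC_PERIOD_S = 60.0  # transmission cycle from the text: once per minute

def should_emit(vehicle_running: bool, doors_closed: bool,
                last_emit_s: float, now_s: float) -> bool:
    """Emit the preset ultrasonic audio only when the vehicle-state
    precondition holds and a full transmission period has elapsed."""
    if not (vehicle_running and doors_closed):
        return False
    return now_s - last_emit_s >= ULTRASONIC_PERIOD_S
```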
In an example of this embodiment, after the wake-up event is generated in the user equipment, ultrasonic perception detection is performed on the audio data and an ultrasonic detection result is generated. The ultrasonic perception detection checks whether the audio data contains the preset ultrasonic audio transmitted by the vehicle-mounted device, and the ultrasonic detection result is generated accordingly. The result indicates whether an ultrasonic perception event exists on the user equipment; such an event is generated when the detection determines that the preset ultrasonic audio exists in the audio data. In other words, since the vehicle-mounted device transmits the preset ultrasonic audio into the preset range of the cabin, the user equipment determines, through ultrasonic perception detection on the audio data, whether it is located inside the cabin, and the wake-up event is then judged further according to the detection result.
In step S104, in case it is determined from the ultrasound detection result that there is an ultrasound sensing event in the user equipment, the wake-up of the user equipment is disabled.
For example, in this embodiment, when the preset ultrasonic audio exists in the audio data, the user equipment generates an ultrasonic perception event by analyzing the audio data. When the ultrasonic detection result shows that an ultrasonic perception event exists on the user equipment, the user equipment is currently within the preset range of the cabin; to avoid several devices within that range being woken by the same preset wake-up audio and causing device wake-up confusion, the user equipment is prohibited from waking up.
Optionally, after the step S104, the method includes:
and controlling the user equipment to perform awakening response under the condition that the ultrasonic sensing event does not exist in the user equipment according to the ultrasonic detection result.
For example, in this embodiment, when the ultrasonic detection result shows that no ultrasonic perception event exists on the user equipment, the user equipment is not within the preset range of the cabin, so its wake-up need not be suppressed and it is controlled to perform a normal wake-up response based on the wake-up event.
In the above scheme, wake-up recognition is performed on collected audio data; a wake-up event of the user equipment is generated when the preset wake-up audio is determined to exist in the audio data; in response to generation of the wake-up event, ultrasonic perception detection is performed on the audio data to generate an ultrasonic detection result; and the user equipment is prohibited from waking up when the result indicates that an ultrasonic perception event exists. Wake-up of user equipment in the in-vehicle environment is thereby suppressed, in-vehicle cooperative wake-up confusion is avoided, and the user's device wake-up experience is improved.
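The two-stage gating walked through in steps S101 to S104 can be condensed into one small decision function; the state labels are illustrative.

```python
def handle_audio(audio_has_wake_word: bool, ultrasound_event: bool) -> str:
    """Secondary gating of a wake-up event by the ultrasonic detection
    result (illustrative condensation of steps S101-S104)."""
    if not audio_has_wake_word:
        return "idle"                 # no wake-up event generated
    if ultrasound_event:
        return "wake-up inhibited"    # device is inside the cabin: suppress
    return "wake-up response"         # outside the cabin: respond normally
```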
Fig. 2 is a flowchart illustrating a device control method according to an exemplary embodiment. The method is applied to a user equipment and, as shown in Fig. 2, includes the following steps.
In step S201, in response to the collected audio data, wake-up recognition is performed on the audio data.
For example, the audio data is identified in the same manner as in step S101 in this embodiment, and reference may be made to step S101, which is not described again.
In step S202, in case it is determined that the preset wake-up audio exists in the audio data, a wake-up event of the user equipment is generated.
For example, the method for generating the wake-up event in this embodiment is the same as that in step S102, and reference may be made to step S102, which is not described again.
In step S203, the audio data is filtered according to a preset frequency range to generate an ultrasonic audio to be tested.
For example, in this embodiment the preset ultrasonic audio transmitted by the vehicle-mounted device occupies a preset frequency range. When the user equipment analyzes the collected audio data, in order to eliminate interference from other audio in the environment, the audio data is filtered according to that preset frequency range to obtain the ultrasonic audio to be detected, i.e. the portion of the audio data within the preset frequency range.
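A minimal sketch of this filtering step, assuming the 20 kHz to 30 kHz range given earlier in the description and a brick-wall FFT filter; the patent does not specify the filter design, so this is one possible realization.

```python
import numpy as np

def extract_ultrasonic_band(audio, fs=96_000, lo=20_000, hi=30_000):
    """Zero out spectral bins outside the preset frequency range and
    transform back (brick-wall filter; the filter design is assumed)."""
    spec = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0   # suppress out-of-band audio
    return np.fft.irfft(spec, n=len(audio))
```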
In step S204, a frequency response curve to be measured is generated according to the ultrasonic audio to be measured.
In an example, in this embodiment, a to-be-detected frequency response curve corresponding to the to-be-detected ultrasonic audio is generated according to a change relationship of audio intensity at each frequency of the to-be-detected ultrasonic audio, where a vertical coordinate of the to-be-detected frequency response curve is sound intensity of each frequency audio, and a horizontal coordinate of the to-be-detected frequency response curve is audio frequency.
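The frequency response curve described above (audio frequency on the horizontal axis, per-frequency sound intensity on the vertical axis) can be sketched as a magnitude spectrum; treating "sound intensity" as FFT magnitude is an assumption.

```python
import numpy as np

def frequency_response_curve(audio, fs=96_000):
    """Return (frequencies, per-frequency intensity) for the band-limited
    audio; intensity is taken as normalized FFT magnitude (assumption)."""
    spec = np.abs(np.fft.rfft(audio)) / len(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    return freqs, spec
```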
Optionally, the step S204 includes:
intercepting a plurality of ultrasonic sub-audios to be detected from the ultrasonic audios to be detected according to a preset interception rule;
superposing a plurality of to-be-detected ultrasonic sub-audios to generate a target to-be-detected ultrasonic sub-audio;
and generating a frequency response curve to be tested according to the target to-be-tested ultrasonic sub-audio.
It should be noted that the audio data collected by the user equipment is an audio signal within a preset time range. After the audio data is filtered as above to generate the ultrasonic audio to be detected, converting the whole of it into a frequency response curve takes a long time; and in general the preset ultrasonic audio transmitted by the vehicle-mounted device is a regular signal whose amplitude pattern over the frequencies is the same within a short period. Illustratively, the user equipment collects 1 s of audio data from the environment, filters it through the above steps to generate 1 s of ultrasonic audio to be detected, and intercepts the ranges 0-0.1 s, 0.1-0.2 s and 0.2-0.3 s from it according to a preset interception rule to generate several ultrasonic sub-audios to be detected. Alternatively, the ultrasonic audio to be detected may be intercepted according to the size of the received audio data, for example every 2 frames, to generate a preset number of ultrasonic sub-audios to be detected. This embodiment is not limited in this respect.
The multiple ultrasonic sub-audios to be detected are superposed and the superposition is averaged to generate the target ultrasonic sub-audio to be detected. For example, the received ultrasonic audio to be detected may be superposed and averaged every 2 frames to generate the target ultrasonic sub-audio to be detected, which may be determined by the following pseudo-code.
Signal_recv = (Signal_recv[i] + Signal_recv[i+1]) / 2
where Signal_recv is the resulting signal, Signal_recv[i] is the ultrasonic audio to be detected of the i-th frame, and Signal_recv[i+1] is the ultrasonic audio to be detected of the (i+1)-th frame.
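The frame-averaging rule above can be written as a short runnable routine; generalizing the two-frame average to any number of equal-length frames is an assumption.

```python
import numpy as np

def average_frames(sub_audios):
    """Superpose the intercepted sub-audio frames and average them,
    generalizing the two-frame rule to n equal-length frames."""
    frames = np.stack(sub_audios)   # shape: (n_frames, frame_len)
    return frames.mean(axis=0)      # target ultrasonic sub-audio to be detected
```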
In step S205, the frequency response curve to be measured is compared with a preset frequency response curve to generate an ultrasonic detection result.
It should be mentioned that in this embodiment a preset frequency response curve is generated in advance by spectrum analysis of the preset ultrasonic audio transmitted by the vehicle-mounted device, and this curve is loaded into the user equipment. After the frequency response curve to be detected is generated through the above steps, it is compared with the preset frequency response curve, and the ultrasonic detection result is generated according to their similarity.
Optionally, the step S205 includes:
and under the condition that the frequency response curve to be detected is matched with the preset frequency response curve, generating a first ultrasonic detection result, wherein the first ultrasonic detection result is used for indicating that an ultrasonic perception event exists in the user equipment.
And under the condition that the frequency response curve to be detected is not matched with the preset frequency response curve, generating a second ultrasonic detection result, wherein the second ultrasonic detection result is used for indicating that no ultrasonic perception event exists in the user equipment.
For example, in this embodiment the frequency response curve to be detected is compared with the preset frequency response curve. When the two match, a first ultrasonic detection result is generated, indicating that an ultrasonic perception event exists on the user equipment; when they do not match, a second ultrasonic detection result is generated, indicating that no ultrasonic perception event exists on the user equipment. It is worth mentioning that, by the propagation characteristics of sound in air, an audio signal attenuates gradually with distance: the farther the user equipment is from the loudspeaker of the vehicle-mounted device, the smaller the amplitude of the preset ultrasonic audio it receives.
The variation pattern between the frequency amplitudes, however, remains constant. Whether the curves match can therefore be determined by comparing the variation pattern of the frequency response curve to be detected with that of the preset frequency response curve: if the two patterns are similar, the curves are determined to match. Equivalently, matching can be decided from the amplitude variation at each frequency in each curve: if the amplitude variation over the frequencies in the curve to be detected is the same as that in the preset curve, the curves are determined to match.
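One plausible way to compare the two variation patterns while ignoring the distance-dependent amplitude scaling is normalized cross-correlation; the patent fixes neither the similarity metric nor a threshold, so both are assumptions here.

```python
import numpy as np

def curves_match(measured, reference, threshold=0.9):
    """Compare the *shape* of the measured frequency response curve with
    the preset one, ignoring overall amplitude (attenuation scales the
    curve but preserves its variation pattern). Metric is assumed."""
    m = (measured - measured.mean()) / (measured.std() + 1e-12)
    r = (reference - reference.mean()) / (reference.std() + 1e-12)
    similarity = float(np.mean(m * r))   # normalized cross-correlation
    return similarity >= threshold
```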
Optionally, in another embodiment, the step S204 includes:
A first amplitude relationship among the frequency audios in the frequency response curve to be detected is determined, and a second amplitude relationship among the frequency audios in the preset frequency response curve is determined.
The first amplitude relationship and the second amplitude relationship are compared to generate an ultrasonic test result.
In an example, in this embodiment, the frequency response curve to be detected of the ultrasonic audio to be detected and the preset frequency response curve of the preset ultrasonic audio are analyzed: a first amplitude relationship between the frequency audios in the frequency response curve to be detected is determined, and a second amplitude relationship between the frequency audios in the preset frequency response curve is determined, where an amplitude relationship is the variation relationship between the peaks and valleys of a frequency response curve. For example, fig. 3 is an exemplary diagram of a frequency response curve to be detected and a preset frequency response curve according to an exemplary embodiment. As shown in fig. 3, in the frequency response curve to be detected, within the frequency range of 38Hz to 73Hz, the amplitudes of the respective frequencies exhibit periodic fluctuation and the fluctuation ranges of the amplitudes corresponding to the respective frequencies are the same, so the first amplitude relationship may be determined from the ratios of the amplitudes corresponding to the respective peaks in the curve. In the preset ultrasonic audio, the second amplitude relationship is determined from the ratios of the amplitudes corresponding to the respective peaks between 38Hz and 73Hz. The first amplitude relationship is then compared with the second amplitude relationship to generate the ultrasonic detection result.
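The peak-ratio comparison described above can be sketched as follows (illustrative only; the simplified peak picker and the tolerance are assumptions, since the patent does not specify them):

```python
import numpy as np

def peak_amplitude_ratios(curve):
    """Return the ratios between successive peak amplitudes of a curve.

    A point is treated as a peak when it exceeds both of its neighbors
    (a simplified peak picker, assumed for illustration).
    """
    c = np.asarray(curve, dtype=float)
    peaks = [c[i] for i in range(1, len(c) - 1)
             if c[i] > c[i - 1] and c[i] > c[i + 1]]
    # Express the amplitude relationship as ratios of adjacent peaks.
    return [peaks[i + 1] / peaks[i] for i in range(len(peaks) - 1)]

def relation_matches(measured, reference, tol=0.1):
    """Compare the first and second amplitude relationships."""
    a = peak_amplitude_ratios(measured)
    b = peak_amplitude_ratios(reference)
    return len(a) == len(b) and all(abs(x - y) < tol for x, y in zip(a, b))
```

Because ratios of adjacent peaks are insensitive to uniform attenuation, a curve received farther from the speaker still yields the same amplitude relationship.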
In step S205, in case it is determined from the ultrasound detection result that there is an ultrasound sensing event in the user equipment, the wake-up of the user equipment is disabled.
For example, the manner of prohibiting the user equipment from being awakened in this embodiment is the same as that in step S104; reference may be made to step S104, which is not described again.
In this manner, the user equipment is prohibited from being awakened in the vehicle-mounted use scenario: the user equipment can quickly and efficiently identify ultrasonic events in the audio data and achieve wake-up suppression, bringing the user a better device wake-up experience and solving the problem of disordered wake-up in vehicle-mounted cooperative wake-up.
Fig. 4 is a flowchart illustrating a device control method, as shown in fig. 4, applied to an in-vehicle device, according to an exemplary embodiment, the method including the following steps.
In step S301, a preset ultrasonic audio is sent to the user equipment, where the preset ultrasonic audio is used to instruct the user equipment to prohibit itself from being awakened.
For example, in this embodiment, the vehicle-mounted device monitors the operating state of the vehicle and sends a preset ultrasonic audio to the other user devices within the vehicle-mounted space once the vehicle meets a preset condition. The preset condition may be that the vehicle is running: after the vehicle starts running, the vehicle-mounted device sends the preset ultrasonic audio into the vehicle-mounted space based on the start signal of the vehicle. After receiving the preset ultrasonic audio, the user equipment prohibits itself from being awakened.
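The monitoring-and-emission behavior can be sketched as a small state machine (illustrative; `send_ultrasonic_audio` is a hypothetical callback standing in for the speaker output path of the vehicle-mounted device):

```python
class VehicleMonitor:
    """Emits the preset ultrasonic audio once the vehicle starts running."""

    def __init__(self, send_ultrasonic_audio):
        self.send = send_ultrasonic_audio
        self.emitting = False

    def on_vehicle_state(self, running: bool) -> None:
        # Start emission on the vehicle's start signal; reset when the
        # vehicle stops, so a later start triggers emission again.
        if running and not self.emitting:
            self.send()
            self.emitting = True
        elif not running:
            self.emitting = False
```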
Optionally, before the step S301, the method includes:
the preset ultrasound audio is determined by the following formula:
signal[k][i] = … (the formula is rendered as an image in the original filing and is not reproduced in this text; its variables are defined below)
wherein k is the audio frequency, i is the discretization data sampling point, N is the total number of the discretization data sampling points, F is the initial frequency, and B is the data bit number.
Illustratively, in this embodiment, signal[k][i] is the calculation function of the preset ultrasonic audio, where k denotes the audios of different frequencies, i denotes the discretized data sampling points, N denotes the total number of discretized sampling points, F is the starting frequency of all the audios, and B is the number of data bits carried by each transmission, with 0 ≤ k ≤ 256 and 0 ≤ i ≤ N. In combination with the actual usage scenario, a sampling rate of 48 kHz is adopted and N = 1024, that is, there are 1024 sampling points per frame of data; F = 400, which corresponds to a starting frequency of 18750 Hz at the 48 kHz sampling rate; and B = 24, that is, 3 bytes of data are transmitted per frame. After the preset ultrasonic audio is generated by this pseudo code, the first 16 frequency tones are superposed and smoothed, and the average is taken to generate the waveform of the fundamental wave. Fig. 5 is an exemplary diagram of a fundamental waveform according to an exemplary embodiment; as shown in fig. 5, the fundamental waveform may be generated by odd-even superposition, smoothing, and averaging of the first 16 frequency tones in the manner described above.
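One plausible reading of these parameters can be sketched in Python. Since the formula image is not reproduced in the source text, the sinusoid form and the bin mapping below are assumptions chosen to be consistent with the stated numbers (bin F = 400 at a 48 kHz rate with N = 1024 gives 400 × 46.875 Hz = 18750 Hz):

```python
import math

SAMPLE_RATE = 48000
N = 1024   # sample points per frame
F = 400    # starting frequency bin: 400 * 48000 / 1024 = 18750 Hz

def signal(k, i):
    """Assumed form of the patent's signal[k][i]: tone k occupies
    frequency bin F + k, sampled at point i of an N-point frame."""
    return math.sin(2 * math.pi * (F + k) * i / N)

def fundamental_waveform(num_tones=16):
    """Superpose the first 16 frequency tones and average them, as the
    text describes for generating the fundamental waveform."""
    return [sum(signal(k, i) for k in range(num_tones)) / num_tones
            for i in range(N)]
```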
In step S302, in response to receiving a preset wake-up audio, the vehicle-mounted device is controlled to perform a wake-up response.
For example, in this embodiment, after the vehicle-mounted device receives a preset wake-up audio uttered by a user, the vehicle-mounted device performs a wake-up response to the preset wake-up audio. For example, during driving, the vehicle-mounted device collects user speech within the vehicle-mounted space in real time; after the user calls the preset name of the vehicle-mounted device, the vehicle-mounted device responds based on the received preset name and plays a preset response audio.
In this manner, the vehicle-mounted device can be awakened preferentially while other devices are suppressed from being awakened, which solves the problem of disordered wake-up responses in the vehicle-mounted environment and brings the user a better wake-up experience.
Fig. 6 is a flow chart illustrating a method of controlling a device, as shown in fig. 6, according to an exemplary embodiment, including the following steps.
In step S401, after a user gets on a vehicle with a user device such as a mobile phone, the vehicle-mounted device starts to send a preset ultrasonic audio in response to voice wakeup of the user in the vehicle;
in step S402, a MIC (Microphone) module of the user equipment acquires audio data in real time, and sends the acquired audio data to a primary wake-up module and an ultrasonic sensing module of an ADSP (Advanced Digital Signal Processor) module of the user equipment;
in step S403, the ultrasonic sensing module performs ultrasonic sensing detection on the audio data, and synchronizes the detection result to the primary wake-up module;
in step S404, the primary wake-up module analyzes the audio data, generates a primary wake-up event after determining that a wake-up voice exists in the audio data, and synchronizes the primary wake-up event and the ultrasonic sensing event to a secondary wake-up module of an AP (Application processor) module;
in step S405, the secondary wake-up module synchronizes the wake-up event and the ultrasound sensing event to the cooperative wake-up service module;
in step S406, the cooperative wake-up service module makes a decision based on the wake-up event and the ultrasonic sensing event: if an ultrasonic sensing event is detected, the wake-up of the mobile phone is suppressed; if no ultrasonic sensing event is detected, a wake-up response and reply are performed according to the wake-up event.
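The decision in step S406 amounts to a small piece of logic, sketched below (a minimal illustration of the cooperative wake-up service's decision; the function name and return labels are assumptions):

```python
def cooperative_wake_decision(wake_event: bool, ultrasonic_event: bool) -> str:
    """Decide how the user equipment reacts: suppress its wake-up when
    an ultrasonic sensing event is present (the in-vehicle device will
    answer instead), otherwise respond to the wake-up event."""
    if not wake_event:
        return "idle"        # no wake-up voice detected, nothing to do
    if ultrasonic_event:
        return "suppress"    # in-vehicle scenario: phone stays silent
    return "respond"         # normal scenario: phone performs wake response
```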
Fig. 7 is a block diagram illustrating an apparatus for controlling a device according to an exemplary embodiment, where, as shown in fig. 7, the apparatus 100 is applied to a user equipment, and the apparatus 100 includes: a recognition module 110, a first generation module 120, a second generation module 130, and an execution module 140.
The identification module 110 is configured to perform wake-up identification on the audio data in response to the collected audio data.
A first generating module 120 configured to generate a wake event of the user equipment in case it is determined that the preset wake audio is present in the audio data.
A second generating module 130 configured to perform an ultrasonic perception detection on the audio data in response to the generation of the wake event to generate an ultrasonic detection result.
An execution module 140 configured to disable wake-up of the user equipment in case it is determined from the ultrasound detection result that there is an ultrasound sensing event in the user equipment.
Optionally, the second generating module 130 includes:
the first generation submodule is configured to filter the audio data according to a preset frequency range so as to generate the ultrasonic audio to be detected;
the second generation submodule is configured to generate a frequency response curve to be detected according to the ultrasonic audio to be detected;
and the third generation submodule is configured to compare the frequency response curve to be detected with a preset frequency response curve so as to generate an ultrasonic detection result.
Optionally, a third generation submodule configured to:
under the condition that a frequency response curve to be detected is matched with a preset frequency response curve, generating a first ultrasonic detection result, wherein the first ultrasonic detection result is used for indicating that an ultrasonic perception event exists in user equipment;
and under the condition that the frequency response curve to be detected is not matched with the preset frequency response curve, generating a second ultrasonic detection result, wherein the second ultrasonic detection result is used for indicating that no ultrasonic perception event exists in the user equipment.
Optionally, a second generation submodule configured to:
intercepting a plurality of ultrasonic sub-audios to be detected from the ultrasonic audios to be detected according to a preset interception rule;
superposing a plurality of to-be-detected ultrasonic sub-audios to generate a target to-be-detected ultrasonic sub-audio;
and generating a frequency response curve to be tested according to the target to-be-tested ultrasonic sub-audio.
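The intercept-superpose-transform pipeline of these submodules can be sketched as follows (illustrative; the fixed-length aligned interception rule, the averaging as "superposition", and the use of an FFT magnitude spectrum as the frequency response curve are assumptions):

```python
import numpy as np

FRAME = 1024  # assumed frame length, matching the 1024-point frames above

def frequency_response(ultrasonic_audio, frame=FRAME):
    """Intercept fixed-length sub-audios, superpose (average) them to
    suppress noise, then take the magnitude spectrum of the target
    sub-audio as the frequency response curve to be detected."""
    audio = np.asarray(ultrasonic_audio, dtype=float)
    n_frames = len(audio) // frame
    # Intercept a plurality of sub-audios (assumed rule: aligned frames).
    frames = audio[:n_frames * frame].reshape(n_frames, frame)
    target = frames.mean(axis=0)           # target sub-audio to be detected
    return np.abs(np.fft.rfft(target))     # frequency response curve
```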
Optionally, a third generation submodule configured to:
determining a first amplitude relation among the frequency audios in the frequency response curve to be tested, and determining a second amplitude relation among the frequency audios in the preset frequency response curve;
the first amplitude relationship and the second amplitude relationship are compared to generate an ultrasonic test result.
Optionally, the apparatus 100 further comprises a control module configured to:
and controlling the user equipment to perform awakening response under the condition that the ultrasonic sensing event does not exist in the user equipment according to the ultrasonic detection result.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating an apparatus for controlling a device according to an exemplary embodiment, and as shown in fig. 8, the apparatus 200 is applied to an in-vehicle device, and the apparatus 200 includes: a sending module 210 and an executing module 220.
A sending module 210 configured to send a preset ultrasonic audio to the user equipment, where the preset ultrasonic audio is used to indicate that the user equipment is prohibited from being woken up;
and the execution module 220 is configured to control the vehicle-mounted device to perform a wake response in response to the received preset wake audio.
Optionally, the apparatus 200 further comprises a generating module configured to:
the preset ultrasound audio is determined by the following formula:
signal[k][i] = … (the formula is rendered as an image in the original filing and is not reproduced in this text; its variables are defined below)
wherein k is the audio frequency, i is the discretization data sampling point, N is the total number of the discretization data sampling points, F is the initial frequency, and B is the data bit number.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the device control method provided by the present disclosure.
Fig. 9 is a block diagram illustrating a user device in accordance with an example embodiment. For example, the user device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 9, user device 900 may include one or more of the following components: a first processing component 902, a first memory 904, a first power component 906, a first multimedia component 908, a first audio component 910, a first input/output interface 912, a first sensor component 914, and a first communication component 916.
The first processing component 902 generally controls overall operation of the user device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The first processing component 902 may include one or more first processors 920 to execute instructions to perform all or part of the steps of the device control method described above. Further, the first processing component 902 may include one or more modules that facilitate interaction between the first processing component 902 and other components. For example, the first processing component 902 may include a multimedia module to facilitate interaction between the first multimedia component 908 and the first processing component 902.
The first memory 904 is configured to store various types of data to support operations at the user equipment 900. Examples of such data include instructions for any application or method operating on user device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The first memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The first power component 906 provides power to the various components of the user device 900. The first power component 906 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the user device 900.
The first multimedia component 908 includes a screen that provides an output interface between the user device 900 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the first multimedia component 908 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the user device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The first audio component 910 is configured to output and/or input audio signals. For example, the first audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the user device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the first memory 904 or transmitted via the first communication component 916. In some embodiments, the first audio component 910 further includes a speaker for outputting audio signals.
The first input/output interface 912 provides an interface between the first processing component 902 and a peripheral interface module, which may be a keyboard, click wheel, button, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The first sensor component 914 includes one or more sensors for providing various aspects of state evaluation for the user device 900. For example, the first sensor component 914 may detect an open/closed state of the user device 900 and the relative positioning of components, such as the display and keypad of the user device 900; the first sensor component 914 may also detect a change in the position of the user device 900 or a component of the user device 900, the presence or absence of user contact with the user device 900, the orientation or acceleration/deceleration of the user device 900, and a change in the temperature of the user device 900. The first sensor assembly 914 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The first sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the first sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The first communication component 916 is configured to facilitate communications between the user device 900 and other devices in a wired or wireless manner. The user device 900 may access a wireless network based on a communication standard, such as WiFi, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the first communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the first communication component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the user device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described device control method.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the first memory 904 comprising instructions, executable by the first processor 920 of the user device 900 to perform the device control method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The apparatus may be a part of a stand-alone electronic device. For example, in an embodiment, the apparatus may be an Integrated Circuit (IC) or a chip, where the IC may be one IC or a set of multiple ICs; the chip may include, but is not limited to, the following categories: a GPU (Graphics Processing Unit), a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an SoC (System on Chip), and the like. The integrated circuit or chip may be configured to execute executable instructions (or code) to implement the device control method. The executable instructions may be stored in the integrated circuit or chip or may be retrieved from another device or apparatus; for example, the integrated circuit or chip includes a processor, a memory, and an interface for communicating with other devices. The executable instructions may be stored in the memory and, when executed by the processor, implement the device control method described above; alternatively, the integrated circuit or chip may receive executable instructions through the interface and transmit them to the processor for execution, so as to implement the device control method.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned device control method when executed by the programmable apparatus.
FIG. 10 is a block diagram illustrating an in-vehicle device according to an exemplary embodiment. For example, the in-vehicle device 1000 may be a mobile phone, a computer, an in-vehicle center console, a personal digital assistant, or the like.
Referring to fig. 10, the in-vehicle apparatus 1000 may include one or more of the following components: a second processing component 1002, a second memory 1004, a second power component 1006, a second multimedia component 1008, a second audio component 1010, a second input/output interface 1012, a second sensor component 1014, and a second communication component 1016.
The second processing component 1002 generally controls the overall operation of the in-vehicle apparatus 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The second processing component 1002 may include one or more second processors 1020 to execute instructions to perform all or a portion of the steps of the device control method described above. Further, the second processing component 1002 may include one or more modules that facilitate interaction between the second processing component 1002 and other components. For example, the second processing component 1002 can include a multimedia module to facilitate interaction between the second multimedia component 1008 and the second processing component 1002.
The second memory 1004 is configured to store various types of data to support operations at the in-vehicle device 1000. Examples of such data include instructions for any application or method operating on the in-vehicle device 1000, contact data, phonebook data, messages, pictures, videos, and so forth. The second memory 1004 may be implemented by any type or combination of volatile or non-volatile storage devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The second power supply component 1006 provides power to the various components of the in-vehicle device 1000. The second power supply component 1006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the in-vehicle device 1000.
The second multimedia component 1008 comprises a screen providing an output interface between said in-vehicle device 1000 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the second multimedia component 1008 comprises a front-facing camera and/or a rear-facing camera. When the in-vehicle apparatus 1000 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The second audio component 1010 is configured to output and/or input audio signals. For example, the second audio component 1010 includes a Microphone (MIC) configured to receive an external audio signal when the in-vehicle apparatus 1000 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the second memory 1004 or transmitted via the second communication component 1016. In some embodiments, the second audio component 1010 further comprises a speaker for outputting audio signals.
The second input/output interface 1012 provides an interface between the second processing component 1002 and a peripheral interface module, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The second sensor assembly 1014 includes one or more sensors for providing various aspects of status assessment for the in-vehicle device 1000. For example, the second sensor assembly 1014 may detect an open/closed state of the in-vehicle apparatus 1000, relative positioning of components such as a display and a keypad of the in-vehicle apparatus 1000, a change in position of the in-vehicle apparatus 1000 or a component of the in-vehicle apparatus 1000, presence or absence of user contact with the in-vehicle apparatus 1000, orientation or acceleration/deceleration of the in-vehicle apparatus 1000, and a change in temperature of the in-vehicle apparatus 1000. The second sensor assembly 1014 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The second sensor assembly 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the second sensor assembly 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The second communication component 1016 is configured to facilitate communication between the in-vehicle device 1000 and other devices in a wired or wireless manner. The in-vehicle device 1000 may access a wireless network based on a communication standard, such as WiFi, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the second communication component 1016 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the second communication component 1016 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the in-vehicle apparatus 1000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described device control method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the second memory 1004 including instructions, executable by the second processor 1020 of the in-vehicle device 1000 to perform the device control method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The apparatus may be a part of a stand-alone electronic device. For example, in an embodiment, the apparatus may be an Integrated Circuit (IC) or a chip, where the IC may be one IC or a collection of multiple ICs; the chip may include, but is not limited to, the following categories: a GPU (Graphics Processing Unit), a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an SoC (System on Chip), and the like. The integrated circuit or chip may be configured to execute executable instructions (or code) to implement the device control method. The executable instructions may be stored in the integrated circuit or chip or may be retrieved from another device or apparatus; for example, the integrated circuit or chip includes a processor, a memory, and an interface for communicating with other devices. The executable instructions may be stored in the memory and, when executed by the processor, implement the device control method described above; alternatively, the integrated circuit or chip may receive executable instructions through the interface and transmit them to the processor for execution, so as to implement the device control method.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned device control method when executed by the programmable apparatus.
The vehicle-mounted device 1000 may be mounted on various types of vehicles, such as cars, trucks, motorcycles, buses, boats, airplanes, helicopters, recreational vehicles, trains, and the like, and the embodiment of the present disclosure is not particularly limited.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A device control method, applied to a user equipment, comprising:
in response to collecting audio data, performing wake-up recognition on the audio data;
generating a wake-up event of the user equipment when it is determined that a preset wake-up audio exists in the audio data;
performing ultrasonic sensing detection on the audio data in response to the generation of the wake-up event, to generate an ultrasonic detection result;
and prohibiting the user equipment from waking up when it is determined, according to the ultrasonic detection result, that an ultrasonic sensing event exists in the user equipment.
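The gating logic of claim 1 can be sketched as follows. This is a minimal illustrative reading, not the patent's implementation; the function and parameter names are invented for the example.

```python
# Sketch of the wake-up gating flow in claim 1 (all names are illustrative):
# a wake-up event leads to a wake-up response only if no ultrasonic sensing
# event is found in the same audio data.

def handle_audio(audio_data, detect_wake_word, detect_ultrasonic_event):
    """Return 'wake' if the device should respond, 'suppressed' if the
    wake-up is prohibited, or None if no wake word was recognized."""
    if not detect_wake_word(audio_data):      # wake-up recognition
        return None
    # wake-up event generated; run ultrasonic sensing detection
    if detect_ultrasonic_event(audio_data):   # e.g. an in-vehicle suppression tone
        return "suppressed"                   # prohibit waking the user equipment
    return "wake"
```

The key design point is that the (cheap) wake-word check runs first, and the ultrasonic check only gates an already-triggered wake-up event.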
2. The control method according to claim 1, wherein the performing ultrasonic sensing detection on the audio data to generate an ultrasonic detection result comprises:
filtering the audio data according to a preset frequency range to generate an ultrasonic audio to be measured;
generating a frequency response curve to be measured according to the ultrasonic audio to be measured;
and comparing the frequency response curve to be measured with a preset frequency response curve to generate the ultrasonic detection result.
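One plausible way to realize "filter by a preset frequency range, then build a frequency response curve" is to compute spectral magnitudes only for bins inside an ultrasonic band. The sketch below is an assumption: the patent does not specify the band or the transform, and the 19-21 kHz range is invented for illustration.

```python
import cmath

def ultrasonic_band_spectrum(samples, sample_rate, band=(19000.0, 21000.0)):
    """Naive DFT magnitudes restricted to a preset ultrasonic frequency
    range -- a stand-in for 'filter the audio, then generate a frequency
    response curve to be measured'. Returns {frequency_hz: magnitude}."""
    n = len(samples)
    lo, hi = band
    curve = {}
    for k in range(n // 2):
        freq = k * sample_rate / n
        if lo <= freq <= hi:
            x = sum(samples[i] * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i in range(n))
            curve[freq] = abs(x) / n
    return curve
```

A real implementation would use an FFT and a proper band-pass filter; the point here is only that the "curve" is a set of (frequency, amplitude) pairs inside the preset range.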
3. The control method according to claim 2, wherein the comparing the frequency response curve to be measured with a preset frequency response curve to generate the ultrasonic detection result comprises:
generating a first ultrasonic detection result when the frequency response curve to be measured matches the preset frequency response curve, wherein the first ultrasonic detection result indicates that the ultrasonic sensing event exists in the user equipment;
and generating a second ultrasonic detection result when the frequency response curve to be measured does not match the preset frequency response curve, wherein the second ultrasonic detection result indicates that the ultrasonic sensing event does not exist in the user equipment.
4. The control method according to claim 2, wherein the generating a frequency response curve to be measured according to the ultrasonic audio to be measured comprises:
extracting a plurality of ultrasonic sub-audios from the ultrasonic audio to be measured according to a preset extraction rule;
superposing the plurality of ultrasonic sub-audios to generate a target ultrasonic sub-audio;
and generating the frequency response curve to be measured according to the target ultrasonic sub-audio.
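One plausible reading of the "superposing" in claim 4 is segment averaging: cut the ultrasonic audio into equal-length sub-segments and average them sample-by-sample, so a tone that repeats with the segment period adds coherently while uncorrelated noise averages toward zero. The segmenting rule below (fixed length, fixed hop) is an assumption, not the patent's preset rule.

```python
def superpose_segments(samples, segment_len, hop=None):
    """Extract equal-length sub-segments from `samples` and average them
    sample-by-sample into one 'target' segment. A periodic suppression
    tone survives the averaging; random noise is attenuated."""
    hop = hop or segment_len
    segments = [samples[i:i + segment_len]
                for i in range(0, len(samples) - segment_len + 1, hop)]
    return [sum(seg[j] for seg in segments) / len(segments)
            for j in range(segment_len)]
```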
5. The control method according to claim 2, wherein the comparing the frequency response curve to be measured with a preset frequency response curve to generate the ultrasonic detection result comprises:
determining a first amplitude relationship among the frequency components in the frequency response curve to be measured, and determining a second amplitude relationship among the frequency components in the preset frequency response curve;
and comparing the first amplitude relationship with the second amplitude relationship to generate the ultrasonic detection result.
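Comparing amplitude *relationships* rather than absolute amplitudes makes the match robust to overall volume: only the relative ordering of the frequency components matters. The rank-based interpretation below is an assumption; the patent does not define the relationship concretely.

```python
def amplitude_relation(curve):
    """Order the frequency bins of a response curve {freq: amplitude}
    from strongest to weakest. The ordering, not the absolute level,
    serves as the 'amplitude relationship'."""
    return sorted(curve, key=curve.get, reverse=True)

def curves_match(measured, preset):
    """Report an ultrasonic sensing event when both curves rank their
    frequency components identically (a volume-invariant comparison)."""
    return amplitude_relation(measured) == amplitude_relation(preset)
```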
6. The control method according to claim 1, wherein the method further comprises:
controlling the user equipment to perform a wake-up response when it is determined, according to the ultrasonic detection result, that no ultrasonic sensing event exists in the user equipment.
7. A device control method, applied to a vehicle-mounted device, comprising:
sending a preset ultrasonic audio to a user equipment, wherein the preset ultrasonic audio is used to indicate that the user equipment is prohibited from waking up;
and controlling the vehicle-mounted device to perform a wake-up response in response to receiving a preset wake-up audio.
8. The control method according to claim 7, wherein the method further comprises:
determining the preset ultrasonic audio by the following formula:
[formula published as image FDA0003954051960000021 in the original document]
wherein k is the audio frequency, i is the index of a discrete data sampling point, N is the total number of discrete data sampling points, F is the starting frequency, and B is the number of data bits.
9. A device control apparatus, applied to a user equipment, comprising:
a recognition module configured to perform wake-up recognition on audio data in response to collecting the audio data;
a first generation module configured to generate a wake-up event of the user equipment when it is determined that a preset wake-up audio exists in the audio data;
a second generation module configured to perform ultrasonic sensing detection on the audio data in response to the generation of the wake-up event, to generate an ultrasonic detection result;
and an execution module configured to prohibit the user equipment from waking up when it is determined, according to the ultrasonic detection result, that an ultrasonic sensing event exists in the user equipment.
10. A device control apparatus, applied to a vehicle-mounted device, comprising:
a sending module configured to send a preset ultrasonic audio to a user equipment, wherein the preset ultrasonic audio is used to indicate that the user equipment is prohibited from waking up;
and an execution module configured to control the vehicle-mounted device to perform a wake-up response in response to receiving a preset wake-up audio.
11. A user equipment, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to carry out the steps of the method of any one of claims 1 to 6 when executing the executable instructions.
12. A vehicle-mounted device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to carry out the steps of the method of any one of claims 7-8 when executing the executable instructions.
13. A computer readable storage medium having stored thereon computer program instructions, wherein the program instructions, when executed by a processor, implement the steps of the method of any of claims 1-6, or wherein the program instructions, when executed by a processor, implement the steps of the method of any of claims 7-8.
CN202211473388.8A 2022-11-21 2022-11-21 Equipment control method, device, equipment and storage medium Active CN115827075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211473388.8A CN115827075B (en) 2022-11-21 2022-11-21 Equipment control method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115827075A true CN115827075A (en) 2023-03-21
CN115827075B CN115827075B (en) 2024-02-23

Family

ID=85530550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211473388.8A Active CN115827075B (en) 2022-11-21 2022-11-21 Equipment control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115827075B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109391528A (en) * 2018-08-31 2019-02-26 百度在线网络技术(北京)有限公司 Awakening method, device, equipment and the storage medium of speech-sound intelligent equipment
CN112002320A (en) * 2020-08-10 2020-11-27 北京小米移动软件有限公司 Voice wake-up method and device, electronic equipment and storage medium
CN112331214A (en) * 2020-08-13 2021-02-05 北京京东尚科信息技术有限公司 Equipment awakening method and device
CN112489650A (en) * 2020-11-26 2021-03-12 北京小米松果电子有限公司 Wake-up control method and device, storage medium and terminal
US20210225374A1 (en) * 2020-12-23 2021-07-22 Intel Corporation Method and system of environment-sensitive wake-on-voice initiation using ultrasound



Similar Documents

Publication Publication Date Title
KR102006562B1 (en) METHOD AND DEVICE FOR MESSAGE PROCESSING
CN109204231B (en) Vehicle unlocking method and device
CN110341627B (en) Method and device for controlling behavior in vehicle
CN110588307B (en) Control method and device for automobile window and skylight and storage medium
CN107246881B (en) Navigation reminding method, device and terminal
US20210274432A1 (en) Method and device for reporting resource sensing result, user equipment, and storage medium
CN112185388B (en) Speech recognition method, device, equipment and computer readable storage medium
CN111580773B (en) Information processing method, device and storage medium
CN111243554A (en) Screen brightness adjusting method, screen brightness adjusting device and storage medium
CN109815679B (en) Authority management method and mobile terminal
RU2643528C2 (en) Method and device for calling
CN111862972A (en) Voice interaction service method, device, equipment and storage medium
CN115827075B (en) Equipment control method, device, equipment and storage medium
CN111292493A (en) Vibration reminding method, device, electronic equipment and system
CN115810252A (en) Fatigue driving early warning method, device, equipment, system and storage medium
CN111968680A (en) Voice processing method, device and storage medium
CN113313857A (en) Vehicle door control method, device and medium
CN111629045A (en) Device wake-up method, system, apparatus, device and storage medium
CN115881125B (en) Vehicle-mounted multitone region voice interaction method and device, electronic equipment and storage medium
US20240080893A1 (en) Access method and apparatus for unlicensed channel, device, and storage medium
CN114420156A (en) Audio processing method, device and storage medium
CN111798863B (en) Method and device for eliminating echo, electronic equipment and readable storage medium
CN112363917B (en) Application program debugging exception processing method and device, electronic equipment and medium
CN114553324A (en) Method and device for eliminating resonant frequency interference, mobile terminal and storage medium
CN110928745B (en) Deep sleep state abnormality detection method and device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant