CN115733918A - Flight mode switching method and device, electronic equipment and storage medium - Google Patents

Flight mode switching method and device, electronic equipment and storage medium

Info

Publication number
CN115733918A
CN115733918A (application CN202111021216.2A)
Authority
CN
China
Prior art keywords
information
sound
sound information
responding
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111021216.2A
Other languages
Chinese (zh)
Inventor
杨晓星
刘涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202111021216.2A priority Critical patent/CN115733918A/en
Publication of CN115733918A publication Critical patent/CN115733918A/en
Pending legal-status Critical Current

Abstract

The disclosure relates to a flight mode switching method and apparatus, an electronic device, and a storage medium. The method includes: sending first sound information to a server; receiving a recognition result based on the first sound information sent by the server; acquiring environment information in response to the recognition result containing preset semantic information; and controlling the electronic device to switch to a flight mode in response to the environment information satisfying a first preset condition, where the first preset condition is used for characterizing that the aircraft is in a take-off state. The method employs at least two recognition processes, such as recognition of the first sound information and of the environment information, and automatically identifies, by coordinating the local device with a cloud server, the scenario in which the flight mode needs to be enabled, switching to the flight mode automatically. This improves the user experience while ensuring the safety of the aircraft during operation, and keeps the power consumption of the electronic device as low as possible.

Description

Flight mode switching method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of electronic devices, and in particular, to a method and an apparatus for switching a flight mode, an electronic device, and a storage medium.
Background
When traveling by airplane, an electronic device carried by the user, such as a mobile phone, needs to be switched to a flight mode or powered off to ensure the safety of the aircraft during operation.
In the related art, because users rely heavily on their electronic devices, they often forget to enable the flight mode or power off the device before the aircraft takes off or during the take-off phase. This can compromise the operational safety of the aircraft and, in turn, the safety of its passengers.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method and an apparatus for switching a flight mode, an electronic device, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, a method for switching a flight mode is provided, which is applied to an electronic device, and the method includes:
sending first sound information to a server;
receiving a recognition result based on the first sound information sent by the server;
acquiring environment information in response to the recognition result containing preset semantic information;
controlling the electronic device to switch to a flight mode in response to the environment information satisfying a first preset condition; the first preset condition is used for characterizing that the aircraft is in a take-off state.
In some embodiments, the sending the first sound information to the server includes:
in a recognition state, acquiring initial sound information collected by a sound pressure detection module in response to a collected primary sound pressure signal reaching a first threshold;
acquiring the first sound information in response to the initial sound information satisfying a second preset condition; the second preset condition is used for characterizing that the user is in an environment where a flight is imminent;
and sending the first sound information to the server.
In some embodiments, the method further comprises:
determining characteristic information contained in the initial sound information;
and determining that the initial sound information satisfies the second preset condition in response to the characteristic information characterizing that the initial sound information is a voice message and does not match the voice features of a preset user.
In some embodiments, the method further comprises:
acquiring position information of the electronic device;
and entering the recognition state in response to the position information satisfying a position condition.
In some embodiments, the acquiring environment information in response to the recognition result containing preset semantic information includes:
detecting a secondary sound pressure signal in response to the recognition result containing the preset semantic information;
and acquiring the environment information in response to the secondary sound pressure signal reaching a second threshold.
In some embodiments, the environment information includes: movement information and second sound information;
the acquiring the environment information includes:
acquiring the movement information;
and acquiring the second sound information in response to the movement information reaching a corresponding parameter threshold.
In some embodiments, the method further comprises:
determining that the environment information satisfies the first preset condition in response to the second sound information matching preset engine sound information.
In some embodiments, the controlling the electronic device to switch to a flight mode includes:
determining a current state of the electronic device;
switching to the flight mode in response to the current state being a standby state;
and controlling the electronic device to switch to the flight mode after receiving a preset operation instruction, in response to the current state being a working state.
According to a second aspect of the embodiments of the present disclosure, a switching device for flight modes is provided, which is applied to an electronic device, and includes:
the sending module is used for sending the first sound information to the server;
the receiving module is used for receiving a recognition result based on the first sound information sent by the server;
the acquisition module is used for acquiring environment information in response to the recognition result containing preset semantic information;
the control module is used for controlling the electronic device to switch to a flight mode in response to the environment information satisfying a first preset condition; the first preset condition is used for characterizing that the aircraft is in a take-off state.
In some embodiments, the sending module is configured to:
in the recognition state, acquire initial sound information in response to a collected primary sound pressure signal reaching a first threshold;
acquire the first sound information in response to the initial sound information satisfying a second preset condition; the second preset condition is used for characterizing that the user is in an environment where a flight is imminent;
and send the first sound information to the server.
In some embodiments, the apparatus further includes a determination module configured to:
determine characteristic information contained in the initial sound information;
and determine that the initial sound information satisfies the second preset condition in response to the characteristic information characterizing that the initial sound information is a voice message and does not match the voice features of a preset user.
In some embodiments, the obtaining module is further configured to:
detect a secondary sound pressure signal in response to the recognition result containing preset semantic information;
and acquire the environment information in response to the secondary sound pressure signal reaching a second threshold.
In some embodiments, the environment information includes: movement information and second sound information;
the acquisition module is further configured to:
acquire the movement information;
and acquire the second sound information in response to the movement information reaching a corresponding parameter threshold.
In some embodiments, the apparatus further includes a determination module, the determination module further configured to:
determine that the environment information satisfies the first preset condition in response to the second sound information matching preset engine sound information.
According to a third aspect of an embodiment of the present disclosure, there is provided an electronic device, including:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of switching flight modes as defined in any one of the above.
According to a fourth aspect of embodiments of the present disclosure, a non-transitory computer-readable storage medium is presented, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of switching flight modes as described in any one of the above.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: the method employs at least two recognition processes, such as recognition of the first sound information and of the environment information, and automatically identifies, by coordinating the local device with a cloud server, the scenario in which the flight mode needs to be enabled, switching to the flight mode automatically. This improves the accuracy of automatically switching to the flight mode and the user experience, ensures the safety of the aircraft during operation, and keeps the power consumption of the electronic device as low as possible.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flow chart of a method shown according to a first exemplary embodiment.
Fig. 2 is a flow chart of a method shown according to a second exemplary embodiment.
Fig. 3 is a flow chart of a method shown according to a third exemplary embodiment.
Fig. 4 is a flow chart illustrating a method according to a fourth exemplary embodiment.
Fig. 5 is a flow chart of a method shown according to a fifth exemplary embodiment.
FIG. 6 is a schematic diagram illustrating modules in an electronic device according to an example embodiment.
Fig. 7 is a block diagram illustrating an apparatus according to an example embodiment.
FIG. 8 is a block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
To address the problems in the related art, an embodiment of the present disclosure provides a method for switching a flight mode, applied to an electronic device, the method including: sending first sound information to a server; receiving a recognition result based on the first sound information sent by the server; acquiring environment information in response to the recognition result containing preset semantic information; and controlling the electronic device to switch to a flight mode in response to the environment information satisfying a first preset condition, where the first preset condition is used for characterizing that the aircraft is in a take-off state. The method employs at least two recognition processes, such as recognition of the first sound information and of the environment information, and automatically identifies, by coordinating the local device with a cloud server, the scenario in which the flight mode needs to be enabled, switching to the flight mode automatically. This improves the accuracy of automatically switching to the flight mode and the user experience, ensures the safety of the aircraft during operation, and keeps the power consumption of the electronic device as low as possible.
In an exemplary embodiment, the method for switching a flight mode according to the present embodiment is applied to an electronic device. The electronic device may be, for example, a terminal device such as a mobile phone, a tablet computer, a notebook computer, or a smart wearable device.
As shown in fig. 1, the method of the present embodiment may include the following steps:
S110, sending the first sound information to a server.
S120, receiving a recognition result based on the first sound information sent by the server.
S130, acquiring environment information in response to the recognition result containing preset semantic information.
S140, controlling the electronic device to switch to the flight mode in response to the environment information satisfying a first preset condition.
In step S110, the first sound information may be sound information collected by the electronic device, for example a crew announcement or the safety briefing broadcast in the aircraft cabin. The server, or cloud server, is communicatively connected to the electronic device and can receive the first sound information it sends. The first sound information collected by the electronic device may satisfy a certain duration requirement, such as 60 seconds, so that the server has sufficient time and content for recognition.
In step S120, the server may analyze the first sound information after receiving it, for example by performing semantic recognition, sound type recognition, sound information matching, and the like.
The server may generate a recognition result based on the analysis of the first sound information and send it to the electronic device. The recognition result contains at least semantic recognition information, i.e., the meaning characterized by the first sound information. When recognizing the first sound information, the server may extract keywords from it and determine the semantic information based on those keywords. Semantic recognition on the server may be performed with semantic recognition algorithms such as artificial intelligence (AI) models or neural network models; the more sufficient the training samples, the more accurate the recognition. Because this step involves semantic recognition of the first sound information and the computationally demanding recognition is performed by the server, efficiency is improved and the low power consumption of the electronic device is preserved.
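As a rough illustration of the keyword-based matching described above, and not the disclosure's actual server implementation (which may be an AI or neural network model), the following Python sketch checks a recognized transcript against a small set of preset keywords; the keyword list, function name, and result format are assumptions.

# Hypothetical sketch of server-side keyword matching; the concrete
# semantic-recognition model is left unspecified by the disclosure.
PRESET_KEYWORDS = {"take off", "take-off", "power off", "flight mode", "airplane mode"}

def recognize_semantics(transcript: str) -> dict:
    """Return a recognition result listing the preset keywords found in the transcript."""
    text = transcript.lower()
    matched = sorted(k for k in PRESET_KEYWORDS if k in text)
    return {"contains_preset_semantics": bool(matched), "keywords": matched}

# Example: a cabin announcement asking passengers to enable the flight mode.
result = recognize_semantics(
    "Please switch your mobile phones to flight mode before take-off."
)
print(result)  # {'contains_preset_semantics': True, 'keywords': ['flight mode', 'take-off']}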
In step S130, after receiving the recognition result sent by the server, the electronic device may analyze it and obtain the semantic recognition information contained therein.
The processor of the electronic device determines whether the semantic recognition information is preset semantic information. The preset semantic information may, for example, be semantics containing keywords such as "take-off", "power off", or "flight mode" whose meaning instructs the user to power off the device or enable the flight mode.
When the processor determines that the recognition result contains the preset semantic information, it may obtain the environment information collected by an acquisition component or a sensing component in the electronic device. The environment information may be, for example, speed information, sound information, or altitude information.
When the recognition result is determined not to contain the preset semantic information, the electronic device may continue monitoring, for example by repeating step S110.
In step S140, the first preset condition is used to characterize that the aircraft is in a take-off state. Based on the acquired environment information, further recognition and judgment can be performed.
In one example, the processor of the electronic device may control the recognition or judgment of the environment information to further confirm whether the aircraft is taking off, improving the accuracy of automatically recognizing the scenario and switching to the flight mode.
When the environment information satisfies the first preset condition, for example when the speed information collected by the electronic device is greater than a speed threshold and/or the sound information is greater than a sound threshold, the aircraft may be in a take-off state, and the processor may control the switch to the flight mode.
In this example, the recognition and judgment of the environment information are performed locally on the electronic device. This recognition does not involve a computationally demanding process such as semantic recognition, so it is efficient locally (the interaction with the server is omitted) and does not cause excessive power consumption on the electronic device.
In another example, the electronic device may send the collected environment information meeting the duration requirement (e.g., 60 seconds) to the server, and the server performs recognition analysis on part or all of the environment information according to the received environment information to obtain a corresponding environment recognition result, and sends the environment recognition result to the electronic device. And when the electronic equipment determines that the environment information meets the first preset condition according to the environment recognition result, switching to a flight mode.
In this example, the process of identifying and determining the environment information may be performed on the server side. The low power consumption state of the electronic equipment is kept to the maximum extent.
When the environment information does not satisfy the first preset condition, it cannot yet be determined that the aircraft is about to take off. The electronic device may continue monitoring, for example by repeating step S110.
The method of this embodiment involves at least two recognition processes, such as the recognition of the first sound information by the server and the recognition of the environment information by the processor or the server. By coordinating the local processor with the cloud server, the scenario in which the flight mode needs to be enabled is accurately recognized and the device switches to the flight mode automatically, improving the user experience while keeping the power consumption of the electronic device low.
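The two-stage flow of this embodiment can be summarized in the Python sketch below; the helper methods on the device and server objects, the 60-second recording length, and the speed and sound thresholds are all illustrative assumptions rather than the patented implementation.

# Minimal sketch of steps S110-S140 under assumed helper interfaces.
SPEED_THRESHOLD_MPS = 60.0   # assumed ground-speed threshold during the take-off roll
SOUND_THRESHOLD_DB = 85.0    # assumed cabin loudness threshold once engines spool up

def flight_mode_flow(device, server) -> bool:
    first_sound = device.record_sound(seconds=60)             # S110: collect first sound information
    result = server.recognize(first_sound)                    # S120: server-side semantic recognition
    if not result.get("contains_preset_semantics"):
        return False                                          # keep monitoring (repeat S110)
    env = device.collect_environment()                        # S130: speed, sound, altitude, ...
    takeoff_likely = (env["speed_mps"] > SPEED_THRESHOLD_MPS
                      or env["sound_db"] > SOUND_THRESHOLD_DB)
    if takeoff_likely:                                        # S140: first preset condition satisfied
        device.switch_to_flight_mode()
        return True
    return False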
In an exemplary embodiment, as shown in fig. 2, step S110 in this embodiment may include the following steps:
S1101, in the recognition state, acquiring initial sound information collected by a sound pressure detection module in response to the primary sound pressure signal reaching a first threshold.
S1102, acquiring the first sound information in response to the initial sound information satisfying a second preset condition.
S1103, sending the first sound information to the server.
In this embodiment, the electronic device performs additional recognition and judgment before sending the first sound information to the server; that is, local recognition is performed on the electronic device itself before the server produces the recognition result.
In step S1101, the recognition state may refer to a flight-mode recognition service state. In the recognition state, the sound pressure detection module in the electronic device is activated.
The first threshold represents a sound pressure threshold for human voice and is used to filter out other noise interference. Once started, the sound pressure detection module collects the sound pressure (an acoustic quantity, measured in pascals) in the environment in real time; the collected sound pressure may be produced by environmental noise, such as operating equipment or colliding objects, or by human voice. When the primary sound pressure signal sensed by the sound pressure detection module reaches the first threshold, the signal is likely a human voice close to the electronic device, and the sound pressure detection module is triggered to report the subsequently collected sound information. When the primary sound pressure signal does not reach the first threshold, the collected signal may be environmental noise or a voice far from the electronic device, and collection of the first sound information is not triggered.
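Because the sound pressure is measured in pascals, the first threshold can equivalently be expressed as a sound pressure level. The sketch below uses the standard 20 µPa reference pressure; the concrete threshold value of 0.02 Pa (about 60 dB SPL, roughly the level of nearby conversational speech) is an assumption for illustration.

import math

P_REF = 20e-6  # standard reference sound pressure, 20 micropascals

def sound_pressure_level_db(pressure_pa: float) -> float:
    """Convert an RMS sound pressure in pascals to dB SPL."""
    return 20.0 * math.log10(pressure_pa / P_REF)

FIRST_THRESHOLD_PA = 0.02  # assumed first threshold: voice close to the device

def primary_signal_triggers(pressure_pa: float) -> bool:
    """True when the primary sound pressure signal should trigger reporting."""
    return pressure_pa >= FIRST_THRESHOLD_PA

print(round(sound_pressure_level_db(FIRST_THRESHOLD_PA), 1))  # 60.0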
As shown in fig. 6, the application processor (AP) first wakes up the ADSP (a digital signal processing chip) and receives, through the ADSP, the initial sound information collected by the sound pressure detection module. The collected initial sound information may include nearby human voice, such as an announcement in the airport environment or a crew announcement in the aircraft. In this step, the initial sound information collected by the sound pressure detection module may be encoded by a codec and then transmitted to the ADSP, which forwards it to the AP. The AP thus obtains the initial sound information.
In step S1102, the processor may recognize the acquired initial sound information, for example to determine whether it is human voice (i.e., a voice message) or merely environmental noise.
The second preset condition may be used to characterize that the user is in an environment where a flight is imminent. For example, when the initial sound information matches pre-stored announcement voice characteristics, the user is in such an environment. When the second preset condition is satisfied, the processor may acquire the first sound information by controlling a recording module (such as a microphone) of the electronic device.
When the initial sound information does not satisfy the second preset condition, the collected sound may be the user's own voice, environmental noise, or broadcast content unrelated to flight safety; in that case it cannot be determined that the user is in an environment where a flight is imminent, and the electronic device may repeat step S1101.
In step S1103, the processor sends the first sound information acquired from the recording module to the server, so that the server can recognize the semantic information in the first sound information.
In an exemplary embodiment, before step S1102 is performed, the method for determining whether the initial sound information satisfies the second preset condition may include the following steps:
S1101-1, determining characteristic information contained in the initial sound information.
S1101-2, determining that the initial sound information satisfies the second preset condition in response to the characteristic information characterizing that the sound is a voice message and does not match the voice features of a preset user.
In step S1101-1, the processor may determine the characteristic information in the initial sound information by controlling the ADSP. The characteristic information may include, for example, a sound type and a voiceprint feature. The sound type may include speech, environmental noise without speech, scene noise, and the like; the voiceprint feature is the acoustic spectral information of the sound.
In step S1101-2, if the sound type characterized by the characteristic information is speech, i.e., a voice message, and the voiceprint feature of the voice message differs from that of the preset user, i.e., the characteristic information is not the voice feature of the preset user, the initial sound information acquired by the electronic device at this time may be an announcement by a crew member in the aircraft. In other words, the user may be in a flight environment, and the second preset condition is determined to be satisfied. The preset user may refer to the owner of the electronic device.
This embodiment is a recognition judgment performed locally on the electronic device. It is executed after step S1101, and step S1102 is executed after step S1101-2 of this embodiment.
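A minimal sketch of the local check in steps S1101-1 and S1101-2 is given below, assuming a hypothetical feature representation and a placeholder voiceprint comparison; the disclosure does not specify the classification or voiceprint algorithms.

from dataclasses import dataclass

@dataclass
class SoundFeatures:
    sound_type: str            # e.g. "speech", "ambient_noise", "scene_noise"
    voiceprint: list[float]    # spectral embedding of the sound

def matches_preset_user(voiceprint, preset_voiceprint, threshold=0.8) -> bool:
    # Placeholder similarity check; a real system would use a trained voiceprint model.
    score = sum(a * b for a, b in zip(voiceprint, preset_voiceprint))
    return score >= threshold

def satisfies_second_condition(features: SoundFeatures, preset_voiceprint) -> bool:
    # Second preset condition: the sound is speech AND is not the preset (owner) user.
    return (features.sound_type == "speech"
            and not matches_preset_user(features.voiceprint, preset_voiceprint))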
In an exemplary embodiment, as shown in fig. 3, the following steps may be further included before step S1101:
S1001, acquiring position information of the electronic device.
S1002, entering the recognition state in response to the position information satisfying a position condition.
In step S1001, a positioning sensor or a positioning application in the electronic device may determine the position information of the electronic device in real time, and the processor may acquire the position information it collects.
In step S1002, the position condition is set to an airport location. When the position information acquired by the processor matches the airport location set by the position condition, the user may be at an airport, and the recognition state, i.e., the flight-mode recognition service state, can be entered.
When the position information does not match an airport location, the user is not at an airport, the recognition state is not entered, and step S1001 may be repeated.
This embodiment is a recognition judgment performed locally on the electronic device. After step S1002, step S1101 may be performed.
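One plausible way to realize the position condition is a simple geofence around known airport coordinates, as sketched below; the haversine formula is standard, while the airport list and radius are placeholders, and a real implementation might query a map service instead.

import math

AIRPORTS_LAT_LON = [(40.0799, 116.6031)]  # placeholder: approximate airport coordinates
RADIUS_KM = 5.0                           # assumed geofence radius

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two latitude/longitude points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def position_condition_met(lat: float, lon: float) -> bool:
    """True when the device position falls within the geofence of any listed airport."""
    return any(haversine_km(lat, lon, a_lat, a_lon) <= RADIUS_KM
               for a_lat, a_lon in AIRPORTS_LAT_LON)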
In an exemplary embodiment, as shown in fig. 4, step S130 in this embodiment may include the following steps:
S1301, detecting a secondary sound pressure signal in response to the recognition result containing preset semantic information.
S1302, acquiring the environment information in response to the secondary sound pressure signal reaching a second threshold.
In step S1301, the preset semantic information may be semantics instructing the user to power off the device or enable the flight mode. When the processor determines that the recognition result sent by the server contains the preset semantic information, the user is in an aircraft environment and a crew member is announcing or advising passengers to power off their devices or enable the flight mode.
In this scenario, the method of the present disclosure further verifies the recognition in combination with the user's situation rather than switching immediately. The processor controls the sound pressure detection module to detect the secondary sound pressure signal.
In step S1302, the second threshold is greater than the first threshold and is used to characterize a reference sound pressure value of an aircraft engine. When the secondary sound pressure signal reaches the second threshold, the electronic device has detected the sound of the aircraft engine, so the environment information collected by the sensing components can be acquired. When the secondary sound pressure signal does not reach the second threshold, the detected sound is not engine sound and the aircraft may not have taken off; the electronic device may repeat step S1301.
In this embodiment, the environment information may include: movement information and second sound information.
The method for acquiring the environment information and judging whether the first preset condition is met according to the environment information may include the following steps:
S1302-1, acquiring the movement information. In this step, the movement information may include speed information, acceleration information, displacement information, tilt-angle information, and the like of the electronic device. Sensors integrated in the electronic device, such as an acceleration sensor and a gyroscope, can collect the movement information of the electronic device in real time and can derive the tilt angle of the electronic device relative to the horizontal direction. The processor acquires the movement information collected by the sensors.
S1302-2, acquiring second sound information in response to the movement information reaching a corresponding parameter threshold. In this step, the parameter threshold corresponds to the quantity represented by the movement information: when the movement information is speed information, the parameter threshold is a speed threshold; when the movement information is tilt-angle information, the parameter threshold is a tilt-angle threshold.
When the movement information reaches the corresponding parameter threshold, the aircraft may be in a take-off state or take-off phase. The processor then performs a further verification and controls the acquisition of the second sound information; given the current aircraft state, the second sound information can capture the sound of the aircraft engines. When the movement information does not reach the corresponding parameter threshold, the aircraft may not yet be taking off; the electronic device may continue to monitor the movement information without acquiring the second sound information for the time being.
S1302-3, determining that the environment information satisfies the first preset condition in response to the second sound information matching preset engine sound information; for example, the second sound information is compared with a preset engine sound sample, and a match is declared when the matching degree reaches a threshold. In this step, the processor may locally determine whether the second sound information matches the preset engine sound information, on the basis that the movement information has reached the corresponding parameter threshold. If they match, the collected second sound information is engine sound and the environment information satisfies the first preset condition, i.e., the aircraft is in a take-off state. If they do not match, a take-off state cannot be confirmed; the second sound information may continue to be collected, recognized, and matched.
After step S1302-3, step S140 may be performed.
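The cascade of steps S1302-1 to S1302-3 might look like the sketch below; the acceleration threshold, the recording length, the matching-degree measure, and the device helpers are assumptions, since the disclosure only requires that a matching degree reach some threshold.

# Illustrative cascade: the second sound is captured and compared against a preset
# engine-sound template only after the movement information exceeds its threshold.
ACCEL_THRESHOLD_MPS2 = 2.0   # assumed sustained acceleration during the take-off roll
MATCH_THRESHOLD = 0.7        # assumed minimum matching degree

def match_degree(sound_spectrum, engine_template) -> float:
    # Placeholder normalized correlation between two equal-length spectra.
    num = sum(a * b for a, b in zip(sound_spectrum, engine_template))
    den = (sum(a * a for a in sound_spectrum) * sum(b * b for b in engine_template)) ** 0.5
    return num / den if den else 0.0

def first_preset_condition_met(device, engine_template) -> bool:
    movement = device.read_movement()                            # S1302-1: accelerometer / gyroscope
    if abs(movement["accel_mps2"]) < ACCEL_THRESHOLD_MPS2:
        return False                                             # S1302-2 not triggered yet
    second_sound_spectrum = device.record_sound_spectrum(seconds=10)   # S1302-2
    return match_degree(second_sound_spectrum, engine_template) >= MATCH_THRESHOLD   # S1302-3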
In an exemplary embodiment, as shown in fig. 5, step S140 in this embodiment may include the following steps:
S1401, determining the current state of the electronic device.
S1402, switching to the flight mode in response to the current state being a standby state.
S1403, controlling the electronic device to switch to the flight mode after receiving a preset operation instruction, in response to the current state being a working state.
In step S1401, the processor may further determine whether the electronic device is in a working state, for example whether the user currently needs to operate the phone. The processor may determine the current state by monitoring the foreground application, or by displaying a prompt such as "Enable flight mode" on the interface and combining it with the user's touch input; for example, if the user chooses to dismiss the prompt, the current state may be a working state.
In step S1402, if the flight mode has not been enabled while the device is in the standby state, i.e., the user is not currently using the phone, the user has likely forgotten to enable it. The processor may therefore control the switch to the flight mode. In this scenario, the device switches to the flight mode automatically and in time, ensuring flight safety.
In step S1403, the user may be handling an urgent matter in the working state, so the processor switches to the flight mode at an appropriate moment in combination with a preset operation instruction from the user, for example the user pressing the power key. In this step, the device switches to the flight mode automatically while preserving the user experience as much as possible, ensuring flight safety.
After switching to the flight mode, the device exits the recognition state, i.e., the flight-mode recognition service state, and the sound pressure detection module is turned off.
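The branch in steps S1401-S1403 might be organized as in the sketch below. Note, as an aside, that on stock Android a third-party app cannot toggle the airplane mode directly, so switch_to_flight_mode here stands in for whatever privileged system interface the device vendor exposes; all helper names are assumptions.

def handle_takeoff_detected(device) -> None:
    state = device.current_state()            # S1401: e.g. foreground app check or on-screen prompt
    if state == "standby":
        device.switch_to_flight_mode()        # S1402: user likely forgot; switch directly
    elif state == "working":
        # S1403: wait for a preset operation instruction (e.g. a power-key press)
        # so that an ongoing task is not interrupted abruptly.
        if device.wait_for_preset_instruction(timeout_s=30):
            device.switch_to_flight_mode()
    device.exit_recognition_state()           # leave the recognition state; stop the sound pressure detection module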
Combining the above embodiments, the method of the present disclosure fully considers the user experience through multiple recognitions and judgments, and switches to the flight mode automatically only when multiple verifications confirm that the aircraft is in a take-off state.
In addition, in these embodiments, the flight mode is switched automatically through cooperative recognition by the electronic device and the cloud server. Sound information is collected locally on the electronic device, and only simple sound matching or type judgment is performed locally, while the complex semantic recognition is performed on the cloud server; this improves recognition efficiency while minimizing the power consumption of the electronic device.
In an exemplary embodiment, the present disclosure also provides a flight mode switching apparatus, applied to an electronic device. As shown in fig. 7, the apparatus of the present embodiment includes: a sending module 110, a receiving module 120, an obtaining module 130, and a control module 140, and is used to implement the method shown in fig. 1. The sending module 110 is configured to send the first sound information to the server. The receiving module 120 is configured to receive the recognition result based on the first sound information sent by the server. The obtaining module 130 is configured to acquire the environment information in response to the recognition result containing preset semantic information. The control module 140 is configured to control the electronic device to switch to the flight mode in response to the environment information satisfying a first preset condition; the first preset condition is used for characterizing that the aircraft is in a take-off state.
In an exemplary embodiment, still referring to fig. 7, the apparatus of the present embodiment includes: a sending module 110, a receiving module 120, an obtaining module 130, and a control module 140, and is used to implement the method shown in fig. 2. The sending module 110 is configured to: in the recognition state, acquire initial sound information in response to a collected primary sound pressure signal reaching a first threshold; acquire the first sound information in response to the initial sound information satisfying a second preset condition, the second preset condition being used for characterizing that the user is in an environment where a flight is imminent; and send the first sound information to the server.
The apparatus in this embodiment further includes a determination module configured to: determine characteristic information contained in the initial sound information; and determine that the initial sound information satisfies the second preset condition in response to the characteristic information characterizing that the sound is a voice message and does not match the voice features of a preset user.
In one exemplary embodiment, still referring to fig. 7, the apparatus of the present embodiment comprises: a sending module 110, a receiving module 120, an obtaining module 130, and a control module 140. The apparatus of the present embodiment is used to implement the method as shown in fig. 3. Wherein, the obtaining module 130 is further configured to: acquiring position information of the electronic equipment; and entering an identification state in response to the position information meeting the position condition.
In one exemplary embodiment, still referring to fig. 7, the apparatus of the present embodiment comprises: a sending module 110, a receiving module 120, an obtaining module 130, and a control module 140. The apparatus of the present embodiment is used to implement the method as shown in fig. 4. Wherein, the obtaining module 130 is further configured to: detecting a secondary sound pressure signal in response to the recognition result containing preset semantic information; in response to the secondary sound pressure signal reaching a second threshold, environmental information is acquired.
In this embodiment, the environment information includes: movement information and second sound information. The obtaining module 130 is further configured to: acquire the movement information collected by a sensor; and acquire the second sound information in response to the movement information reaching a corresponding parameter threshold.
In this embodiment, the apparatus further includes: a determination module, the determination module further configured to: in response to the second sound information matching the preset engine sound information, it is determined that the environmental information satisfies a first preset condition.
In one exemplary embodiment, still referring to fig. 7, the apparatus of the present embodiment includes: a sending module 110, a receiving module 120, an obtaining module 130, and a control module 140, and is used to implement the method shown in fig. 5. The control module 140 is configured to: determine the current state of the electronic device; switch to the flight mode in response to the current state being a standby state; and control the electronic device to switch to the flight mode after receiving a preset operation instruction, in response to the current state being a working state.
Fig. 8 is a block diagram of an electronic device. The present disclosure also provides an electronic device; for example, the device 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions for any application or method operating on device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 506 provides power to the various components of device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 500 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 514 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor assembly 514 may detect the open/closed state of the device 500 and the relative positioning of components such as its display and keypad; it may also detect a change in the position of the device 500 or of one of its components, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in the temperature of the device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the device 500 and other devices. The device 500 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
A non-transitory computer readable storage medium, such as the memory 504 including instructions executable by the processor 520 of the device 500 to perform the method, is provided in another exemplary embodiment of the disclosure. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. The instructions in the storage medium, when executed by a processor of the electronic device, enable the electronic device to perform the above-described method.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (16)

1. A method for switching a flight mode, applied to an electronic device, the method comprising:
sending first sound information to a server;
receiving a recognition result based on the first sound information sent by the server;
acquiring environment information in response to the recognition result containing preset semantic information; and
controlling the electronic device to switch to a flight mode in response to the environment information satisfying a first preset condition; the first preset condition being used for characterizing that an aircraft is in a take-off state.
2. The switching method according to claim 1, wherein the sending the first sound information to the server comprises:
in a recognition state, acquiring initial sound information in response to a collected primary sound pressure signal reaching a first threshold;
acquiring the first sound information in response to the initial sound information satisfying a second preset condition; the second preset condition being used for characterizing that a user is in an environment where a flight is imminent;
and sending the first sound information to the server.
3. The switching method according to claim 2, further comprising:
determining characteristic information contained in the initial sound information; and
determining that the initial sound information satisfies the second preset condition in response to the characteristic information characterizing that the initial sound information is a voice message and does not match voice features of a preset user.
4. The switching method according to claim 2, further comprising:
acquiring position information of the electronic device; and
entering the recognition state in response to the position information satisfying a position condition.
5. The switching method according to claim 1, wherein the acquiring environment information in response to the recognition result containing the preset semantic information comprises:
detecting a secondary sound pressure signal in response to the recognition result containing the preset semantic information; and
acquiring the environment information in response to the secondary sound pressure signal reaching a second threshold.
6. The switching method according to claim 5, wherein the environment information comprises: movement information and second sound information; and
the acquiring the environment information comprises:
acquiring the movement information; and
acquiring the second sound information in response to the movement information reaching a corresponding parameter threshold.
7. The switching method according to claim 6, further comprising:
determining that the environment information satisfies the first preset condition in response to the second sound information matching preset engine sound information.
8. The switching method according to any one of claims 1 to 7, wherein the controlling the electronic device to switch to the flight mode comprises:
determining a current state of the electronic device;
switching to the flight mode in response to the current state being a standby state; and
controlling the electronic device to switch to the flight mode after receiving a preset operation instruction, in response to the current state being a working state.
9. A flight mode switching apparatus, applied to an electronic device, comprising:
a sending module configured to send first sound information to a server;
a receiving module configured to receive a recognition result based on the first sound information sent by the server;
an acquiring module configured to acquire environment information in response to the recognition result containing preset semantic information; and
a control module configured to control the electronic device to switch to a flight mode in response to the environment information satisfying a first preset condition; the first preset condition being used for characterizing that an aircraft is in a take-off state.
10. The switching apparatus according to claim 9, wherein the sending module is configured to:
in a recognition state, acquire initial sound information in response to a collected primary sound pressure signal reaching a first threshold;
acquire the first sound information in response to the initial sound information satisfying a second preset condition; the second preset condition being used for characterizing that a user is in an environment where a flight is imminent; and
send the first sound information to the server.
11. The switching apparatus according to claim 10, further comprising a determination module configured to:
determine characteristic information contained in the initial sound information; and
determine that the initial sound information satisfies the second preset condition in response to the characteristic information characterizing that the initial sound information is a voice message and does not match voice features of a preset user.
12. The switching apparatus according to claim 9, wherein the acquiring module is further configured to:
detect a secondary sound pressure signal in response to the recognition result containing preset semantic information; and
acquire the environment information in response to the secondary sound pressure signal reaching a second threshold.
13. The switching apparatus according to claim 12, wherein the environment information comprises: movement information and second sound information; and
the acquiring module is further configured to:
acquire the movement information; and
acquire the second sound information in response to the movement information reaching a corresponding parameter threshold.
14. The switching apparatus according to claim 13, further comprising a determination module, the determination module further configured to:
determine that the environment information satisfies the first preset condition in response to the second sound information matching preset engine sound information.
15. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of switching flight modes of any one of claims 1 to 8.
16. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of switching flight modes of any one of claims 1 to 8.
CN202111021216.2A 2021-09-01 2021-09-01 Flight mode switching method and device, electronic equipment and storage medium Pending CN115733918A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111021216.2A CN115733918A (en) 2021-09-01 2021-09-01 Flight mode switching method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111021216.2A CN115733918A (en) 2021-09-01 2021-09-01 Flight mode switching method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115733918A true CN115733918A (en) 2023-03-03

Family

ID=85292129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111021216.2A Pending CN115733918A (en) 2021-09-01 2021-09-01 Flight mode switching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115733918A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104902081A (en) * 2015-04-30 2015-09-09 广东欧珀移动通信有限公司 Control method of flight mode and mobile terminal
CN105208216A (en) * 2015-10-22 2015-12-30 小米科技有限责任公司 Control method and device for terminal, and terminal
CN109889668A (en) * 2019-03-04 2019-06-14 Oppo广东移动通信有限公司 Communication pattern configuration method, device, electronic equipment and storage medium
CN110428835A (en) * 2019-08-22 2019-11-08 深圳市优必选科技股份有限公司 A kind of adjusting method of speech ciphering equipment, device, storage medium and speech ciphering equipment
CN111031172A (en) * 2019-11-25 2020-04-17 维沃移动通信有限公司 Method for controlling flight mode and electronic equipment
CN113129876A (en) * 2019-12-30 2021-07-16 Oppo广东移动通信有限公司 Network searching method and device, electronic equipment and storage medium
CN113176870A (en) * 2021-06-29 2021-07-27 深圳小米通讯技术有限公司 Volume adjustment method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220312268A1 (en) * 2021-03-29 2022-09-29 Sony Group Corporation Wireless communication control based on shared data
US11856456B2 (en) * 2021-03-29 2023-12-26 Sony Group Corporation Wireless communication control based on shared data
CN117215769A (en) * 2023-08-08 2023-12-12 北京小米机器人技术有限公司 Robot task awakening method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107919123B (en) Multi-voice assistant control method, device and computer readable storage medium
WO2017219506A1 (en) Method and device for acquiring movement trajectory
US10610152B2 (en) Sleep state detection method, apparatus and system
EP3136710A1 (en) Method and apparatus for controlling photography of unmanned aerial vehicle
EP3933570A1 (en) Method and apparatus for controlling a voice assistant, and computer-readable storage medium
CN110730115B (en) Voice control method and device, terminal and storage medium
CN106527682B (en) Method and device for switching environment pictures
WO2013163098A1 (en) Systems and methods for controlling output of content based on human recognition data detection
CN111063354B (en) Man-machine interaction method and device
US10122916B2 (en) Object monitoring method and device
CN105868709B (en) Method and device for closing fingerprint identification function
CN107666536B (en) Method and device for searching terminal
CN115733918A (en) Flight mode switching method and device, electronic equipment and storage medium
CN109145679A (en) A kind of method, apparatus and system issuing warning information
CN107734303B (en) Video identification method and device
CN111951787A (en) Voice output method, device, storage medium and electronic equipment
CN111127846A (en) Door-knocking reminding method, door-knocking reminding device and electronic equipment
CN108986803B (en) Scene control method and device, electronic equipment and readable storage medium
CN107948876B (en) Method, device and medium for controlling sound box equipment
CN106774902B (en) Application locking method and device
CN113225431B (en) Display screen control method and device and storage medium
CN113903146A (en) SOS method, electronic device and computer readable storage medium
CN107783635B (en) Method and device for controlling display of public display equipment and terminal
CN112489650A (en) Wake-up control method and device, storage medium and terminal
CN110928589A (en) Information processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination