CN112954107A - Contextual model adjusting method and device - Google Patents


Info

Publication number
CN112954107A
CN112954107A (application number CN202110105236.1A)
Authority
CN
China
Prior art keywords
wave signal
sound wave
determining
electronic equipment
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110105236.1A
Other languages
Chinese (zh)
Inventor
肖亚飞 (Xiao Yafei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Software Technology Co Ltd
Original Assignee
Vivo Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Software Technology Co Ltd filed Critical Vivo Software Technology Co Ltd
Priority to CN202110105236.1A priority Critical patent/CN112954107A/en
Publication of CN112954107A publication Critical patent/CN112954107A/en
Pending legal-status Critical Current

Landscapes

  • Telephone Function (AREA)

Abstract

The application discloses a contextual model adjusting method and device, and belongs to the field of communication. The method comprises the following steps: the electronic equipment controls a loudspeaker to emit a first sound wave signal and controls a receiver to receive the first sound wave signal; determining a propagation rate of the first sound wave signal; determining, according to the propagation rate, the transmission medium in which the electronic equipment is located; and adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium. The embodiment of the application solves the problem in the prior art that existing contextual models are not suitable for special use scenes.

Description

Contextual model adjusting method and device
Technical Field
The application belongs to the field of communication, and particularly relates to a contextual model adjusting method and device.
Background
With the rapid development of mobile communication technology, mobile and non-mobile electronic devices have become indispensable tools in many aspects of people's lives. The functions of the various application programs (APPs) on electronic equipment keep improving: they are no longer limited to communication, but also provide users with a variety of intelligent services, bringing great convenience to users' work and life.
Because electronic equipment is used so frequently, it is used in a growing variety of scenes, including uncommon ones such as underwater or high-altitude use scenes. However, in the prior art, electronic equipment usually provides only specific contextual models set for outdoor, indoor or meeting scenes and the like, and these existing contextual models are not suitable for special use scenes such as underwater use.
Disclosure of Invention
An embodiment of the present application provides a method and an apparatus for adjusting a contextual model, which solve the problem that the existing contextual model cannot be applied to a special usage scenario in the prior art.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a method for adjusting a contextual model, where the method includes:
the electronic equipment controls a loudspeaker to send out a first sound wave signal and controls a receiver to receive the first sound wave signal;
determining a propagation rate of the first acoustic signal;
determining a transmission medium in which the electronic equipment is positioned according to the propagation rate;
and adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium.
In a second aspect, an embodiment of the present application further provides a contextual model adjusting apparatus, where the contextual model adjusting apparatus includes:
the control module is used for controlling the loudspeaker to send out a first sound wave signal and controlling the receiver to receive the first sound wave signal by the electronic equipment;
a velocity determination module for determining a propagation velocity of the first acoustic signal;
the medium determining module is used for determining a transmission medium where the electronic equipment is located according to the propagation rate;
and the adjusting module is used for adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium.
In a third aspect, an embodiment of the present application further provides an electronic device, where the electronic device includes a memory, a processor, and a program or an instruction stored on the memory and executable on the processor, and the processor executes the program or the instruction to implement the steps in the contextual model adjusting method described above.
In a fourth aspect, the present application further provides a readable storage medium, on which a program or instructions are stored, where the program or instructions, when executed by a processor, implement the steps in the contextual model adjusting method described above.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, the electronic equipment controls the loudspeaker to send out a first sound wave signal and controls the receiver to receive the first sound wave signal; determining a propagation rate of the first acoustic signal; determining a transmission medium in which the electronic equipment is positioned according to the propagation rate; and adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium, so that the contextual model is matched with the transmission medium, and the convenience of a user for operating the electronic equipment is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart illustrating a method for adjusting a profile according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a first example provided by an embodiment of the present application;
FIG. 3 shows one of the schematic diagrams of a second example provided by an embodiment of the present application;
fig. 4 shows a second schematic diagram of a second example provided by an embodiment of the present application;
fig. 5 is a flowchart illustrating a third example provided by an embodiment of the present application;
fig. 6 is a block diagram of a profile adjustment apparatus according to an embodiment of the present application;
fig. 7 shows a block diagram of an electronic device provided by an embodiment of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In various embodiments of the present application, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of connected objects, a character "/" generally means that a preceding and succeeding related objects are in an "or" relationship.
The following describes the contextual model adjusting method provided by the embodiments of the present application in detail through specific embodiments and application scenarios, with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present application provides a contextual model adjusting method, which may optionally be applied to electronic devices including various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of Mobile Stations (MSs), terminal devices, and the like.
The method comprises the following steps:
step 101, an electronic device controls a loudspeaker to emit a first sound wave signal, and controls a receiver to receive the first sound wave signal.
The first sound wave signal may be a sound wave signal with a frequency within a preset range, for example, a sound wave above 1800 Hz (hertz). The preset range may be determined according to the frequency range that the speaker of the electronic device can reproduce; for example, the speaker of an electronic device typically covers roughly 50 Hz to 13 kHz.
As a first example, referring to fig. 2, the electronic device controls the speaker S2 to emit a first sound wave signal and controls the earpiece S1 to receive the first sound wave signal.
Step 102, determining a propagation rate of the first acoustic signal.
After the earpiece receives the first sound wave signal, the electronic equipment determines the propagation rate of the first sound wave signal between the speaker and the earpiece. The propagation rate may be determined from the distance between the speaker and the earpiece of the electronic device and the propagation time. The distance is the straight-line distance between the speaker S2 and the earpiece S1 in fig. 2, and the propagation time may be recorded by the electronic device: for example, the current time T1 is recorded when the speaker emits the first sound wave signal, the current time T2 is recorded when the earpiece receives it, and the propagation time is the time difference between T2 and T1.
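As an illustration only (not taken from the patent), the following Python sketch computes the propagation rate from the recorded times T1 and T2 and the speaker-to-earpiece distance; the 0.14 m distance is a hypothetical example value.

```python
# Minimal sketch of step 102 (illustrative, not the patent's implementation).
# The speaker-to-earpiece distance below is an assumed example value.
SPEAKER_TO_EARPIECE_M = 0.14  # assumed straight-line distance on the device, in meters

def propagation_rate(t1_s: float, t2_s: float,
                     path_length_m: float = SPEAKER_TO_EARPIECE_M) -> float:
    """Return the propagation rate in m/s from emission time T1 and reception time T2 (seconds)."""
    dt = t2_s - t1_s
    if dt <= 0:
        raise ValueError("reception time must be later than emission time")
    return path_length_m / dt

# 0.14 m travelled in about 0.41 ms corresponds roughly to the speed of sound in air.
print(round(propagation_rate(0.0, 0.00041)))  # -> 341
```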
Step 103, determining, according to the propagation rate, the transmission medium in which the electronic equipment is located.
The transmission medium may be, for example, air, water, or rarefied air at high altitude. Optionally, the correspondence between propagation rates and transmission media may be predetermined; after the propagation rate is obtained, the transmission medium corresponding to that propagation rate is determined according to the correspondence.
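For illustration, a predetermined correspondence of this kind could be implemented as a small lookup table; the reference speeds (roughly 343 m/s in air, 1480 m/s in water) and the 15% tolerance below are assumptions, not values from the patent.

```python
# Illustrative sketch of step 103: map a measured propagation rate to a medium.
# Reference speeds and tolerance are assumed example values.
REFERENCE_SPEEDS_M_S = {
    "air": 343.0,     # approximate speed of sound in air at room temperature
    "water": 1480.0,  # approximate speed of sound in fresh water
}

def transmission_medium(rate_m_s: float, tolerance: float = 0.15) -> str:
    """Return the medium whose reference speed is within the relative tolerance, else 'unknown'."""
    for medium, reference in REFERENCE_SPEEDS_M_S.items():
        if abs(rate_m_s - reference) / reference <= tolerance:
            return medium
    return "unknown"

print(transmission_medium(341.0))   # -> 'air'
print(transmission_medium(1500.0))  # -> 'water'
```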
Step 104, adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium.
After the transmission medium is determined, the contextual model of the electronic equipment is adjusted to a target contextual model, that is, the contextual model corresponding to the transmission medium. For example, when the transmission medium is water, the user may be swimming while carrying the electronic device; in the target contextual model, the user may preset the needed APPs (functions), and the APPs that are not needed underwater are hidden by default, so that the number of APPs on the display desktop is reduced and the user can operate the APPs underwater more conveniently. As a second example, referring to fig. 3, scenario 1 is the target contextual model of the electronic device in air, where APP-A to APP-K are shown on the desktop; referring to fig. 4, scenario 2 is the target contextual model in water, in which the display desktop shows only APP-A, APP-C, APP-G and APP-I.
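A possible way to realize the desktop filtering in this example is sketched below in Python; the profile names and the user-preset underwater app list are hypothetical and only mirror the APP-A to APP-K example above.

```python
# Illustrative sketch of step 104: switch profile by medium and filter desktop icons.
# Profile names and the underwater app whitelist are assumed example values.
MEDIUM_TO_PROFILE = {"air": "normal", "water": "underwater"}
UNDERWATER_APPS = {"APP-A", "APP-C", "APP-G", "APP-I"}  # user-preset underwater apps

def desktop_icons(all_apps: list[str], profile: str) -> list[str]:
    """Return the application icons shown on the desktop under the given profile."""
    if profile == "underwater":
        return [app for app in all_apps if app in UNDERWATER_APPS]
    return list(all_apps)

apps = [f"APP-{c}" for c in "ABCDEFGHIJK"]
print(desktop_icons(apps, MEDIUM_TO_PROFILE["water"]))
# -> ['APP-A', 'APP-C', 'APP-G', 'APP-I']
```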
In the embodiment of the application, the electronic equipment controls the loudspeaker to send out a first sound wave signal and controls the receiver to receive the first sound wave signal; determining a propagation rate of the first acoustic signal; determining a transmission medium in which the electronic equipment is positioned according to the propagation rate; and adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium, so that the contextual model is matched with the transmission medium, and the convenience of a user for operating the electronic equipment is improved. The embodiment of the application solves the problem that the existing contextual model cannot be suitable for a special use scene in the prior art.
In an alternative embodiment, the electronic device controlling a speaker to emit a first sound wave signal and controlling an earpiece to receive the first sound wave signal comprises:
the electronic equipment controls a loudspeaker to emit a first sound wave signal, and determines a first frequency of the first sound wave signal;
controlling a receiver to receive a sound wave signal and acquiring a second frequency of the sound wave signal;
and if the error between the first frequency and the second frequency is within a preset error threshold range, determining that the sound wave signal received by the receiver is the first sound wave signal.
The electronic equipment controls the loudspeaker to emit a first sound wave signal and detects the first frequency of the first sound wave signal; it then controls the receiver to receive a sound wave signal and detects the second frequency of the received sound wave signal, which may or may not be the first sound wave signal. If the error between the first frequency and the second frequency is within a preset error threshold range, the sound wave signal received by the receiver is determined to be the first sound wave signal.
Optionally, the preset error threshold range may be from 0 to N%, where N is a natural number; that is, if the error between the first frequency and the second frequency is not greater than N%, the error is within the preset error threshold range.
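As a sketch only, the frequency comparison described in this embodiment could look like the following; the 5% default for N is an assumed example, since the patent leaves N open.

```python
# Illustrative sketch of the frequency check: accept the received signal as the
# first sound wave signal only if its frequency is within N% of the emitted frequency.
def is_first_signal(first_freq_hz: float, received_freq_hz: float,
                    n_percent: float = 5.0) -> bool:
    """Return True if the relative error between the two frequencies is at most N%."""
    error = abs(received_freq_hz - first_freq_hz) / first_freq_hz
    return error <= n_percent / 100.0

print(is_first_signal(2000.0, 2040.0))  # -> True  (2% error)
print(is_first_signal(2000.0, 2300.0))  # -> False (15% error)
```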
In an optional embodiment, after acquiring the second frequency of the acoustic wave signal, the method further comprises:
and if the error between the first frequency and the second frequency is not within the preset error threshold range, controlling the receiver to continuously receive the sound wave signal. If the error between the second frequency and the first frequency of the sound wave signal received by the receiver exceeds a preset error threshold range, for example, is greater than N%, the sound wave signal received by the receiver is not the first sound wave signal, and at this time, the receiver is controlled to continue receiving the sound wave signal to obtain the first sound wave signal.
In an alternative embodiment, the determining the propagation rate of the first acoustic signal comprises:
determining a propagation time and a propagation path length of the first acoustic signal;
wherein the propagation time is a difference between an emission time of the first sound wave signal emitted by the speaker and a receiving time of the first sound wave signal received by the receiver; for example, after the speaker emits the first sound wave signal, the current time T1 is recorded, and the current time T2 is recorded when the receiver receives the first sound wave signal, and the propagation time is determined according to the time difference between T2 and T1.
The propagation path length is the physical distance between the speaker and the earpiece on the electronic device; as in fig. 2, it is the straight-line distance between the speaker S2 and the earpiece S1.
determining a propagation rate of the first sound wave signal from the propagation path length and the propagation time; that is, the propagation path length divided by the propagation time is the propagation rate of the first sound wave signal.
In an optional embodiment, after adjusting the contextual model of the electronic device to a target contextual model corresponding to the transmission medium, the method further includes:
if the target contextual model is an underwater mode, controlling the electronic equipment to perform at least one of a first operation and a second operation. The first operation comprises controlling the display desktop to display preset underwater application icons. Optionally, the preset underwater application icons may be set by the user; as shown in fig. 4, icons of non-underwater applications are not displayed, which reduces the number of application icons on the display desktop and allows the user to find the target application quickly.
The second operation comprises expanding the response range of touch operations on the touch screen of the electronic equipment. Touch operations are more difficult to perform in water than in air, so the touch error is larger; for example, when there is water on the touch screen, the operation sensitivity drops greatly. Expanding the response range of touch operations on the touch screen therefore makes it easier for the user to operate the electronic equipment.
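One conceivable form of such an expanded response range, shown purely as an assumption-laden sketch rather than the patent's implementation, is to enlarge the accepted distance between a touch point and an icon centre while in underwater mode; the radii and the 1.5x factor below are hypothetical.

```python
import math

# Illustrative sketch of an expanded touch response range: in underwater mode
# the accepted hit radius around an icon centre is enlarged by an assumed factor.
def touch_hits_icon(touch_xy: tuple[float, float], icon_center_xy: tuple[float, float],
                    base_radius_px: float = 50.0, underwater: bool = False,
                    expansion: float = 1.5) -> bool:
    """Return True if the touch falls inside the (possibly expanded) response range."""
    radius = base_radius_px * (expansion if underwater else 1.0)
    return math.dist(touch_xy, icon_center_xy) <= radius

touch, icon = (150.0, 250.0), (100.0, 300.0)           # about 70.7 px apart
print(touch_hits_icon(touch, icon, underwater=False))  # -> False (outside 50 px)
print(touch_hits_icon(touch, icon, underwater=True))   # -> True  (inside 75 px)
```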
As a third example, referring to fig. 5, fig. 5 shows an application process of applying the contextual model adjusting method provided in the embodiment of the present application, where, taking a transmission medium as water as an example, the method mainly includes the following steps:
step 501, a loudspeaker of the electronic device sends a first sound wave signal, and the sending time T1 is recorded.
In step 502, the receiver of the electronic device receives the sound wave signal and records time T2.
Step 503, determining whether the received acoustic wave signal is a first acoustic wave signal:
if yes, go to step 504; otherwise, return to step 502.
Step 504, calculating the propagation rate of the first sound wave signal.
Step 505, determining the transmission medium according to the propagation rate.
In step 506, if the transmission medium is water, the contextual model of the electronic device is adjusted to an underwater mode.
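Putting the example together, a compact end-to-end sketch (with the same assumed constants as above, and with the step-503 frequency check omitted for brevity) could read as follows.

```python
# Illustrative end-to-end sketch of steps 501-506 for a water medium.
# Path length, reference speed and tolerance are assumed example values.
def adjust_contextual_model(t1_s: float, t2_s: float, path_length_m: float = 0.14) -> str:
    rate = path_length_m / (t2_s - t1_s)        # step 504: propagation rate
    if abs(rate - 1480.0) / 1480.0 <= 0.15:     # step 505: rate matches water
        return "underwater"                     # step 506: switch to underwater mode
    return "normal"

print(adjust_contextual_model(0.0, 0.000095))   # ~1474 m/s -> 'underwater'
print(adjust_contextual_model(0.0, 0.00041))    # ~341 m/s  -> 'normal'
```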
In the embodiment of the application, the electronic equipment controls the loudspeaker to send out a first sound wave signal and controls the receiver to receive the first sound wave signal; determining a propagation rate of the first acoustic signal; determining a transmission medium in which the electronic equipment is positioned according to the propagation rate; and adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium, so that the contextual model is matched with the transmission medium, and the convenience of a user for operating the electronic equipment is improved.
The contextual model adjusting method provided by the embodiment of the present application is described above, and the contextual model adjusting device provided by the embodiment of the present application will be described below with reference to the accompanying drawings.
It should be noted that, in the contextual model adjusting method provided in the embodiment of the present application, the execution body may be a contextual model adjusting device, or a control module in the contextual model adjusting device used for executing the contextual model adjusting method. In the embodiment of the present application, a contextual model adjusting device executing the contextual model adjusting method is taken as an example to describe the method provided in the embodiment of the present application.
Referring to fig. 6, an embodiment of the present application further provides a contextual model adjusting apparatus 600, including:
the control module 601 is configured to control the speaker to emit a first sound wave signal and control the receiver to receive the first sound wave signal.
The first sound wave signal may be a sound wave signal with a frequency within a preset range, for example, a sound wave above 1800 Hz (hertz). The preset range may be determined according to the frequency range that the speaker of the electronic device can reproduce; for example, the speaker of an electronic device typically covers roughly 50 Hz to 13 kHz.
As a first example, referring to fig. 2, the electronic device controls the speaker S2 to emit a first sound wave signal and controls the earpiece S1 to receive the first sound wave signal.
A velocity determination module 602 configured to determine a propagation velocity of the first acoustic signal.
After the earpiece receives the first sound wave signal, the electronic equipment determines the propagation rate of the first sound wave signal between the speaker and the earpiece. The propagation rate may be determined from the distance between the speaker and the earpiece of the electronic device and the propagation time. The distance is the straight-line distance between the speaker S2 and the earpiece S1 in fig. 2, and the propagation time may be recorded by the electronic device: for example, the current time T1 is recorded when the speaker emits the first sound wave signal, the current time T2 is recorded when the earpiece receives it, and the propagation time is the time difference between T2 and T1.
A medium determining module 603, configured to determine a transmission medium in which the electronic device is located according to the propagation rate.
The transmission medium may be, for example, air, water, or rarefied air at high altitude. Optionally, the correspondence between propagation rates and transmission media may be predetermined; after the propagation rate is obtained, the transmission medium corresponding to that propagation rate is determined according to the correspondence.
An adjusting module 604, configured to adjust the contextual model of the electronic device to a target contextual model corresponding to the transmission medium.
After the transmission medium is determined, the contextual model of the electronic equipment is adjusted to a target contextual model, that is, the contextual model corresponding to the transmission medium. For example, when the transmission medium is water, the user may be swimming while carrying the electronic device; in the target contextual model, the user may preset the needed APPs (functions), and the APPs that are not needed underwater are hidden by default, so that the number of APPs on the display desktop is reduced and the user can operate the APPs underwater more conveniently. As a second example, referring to fig. 3, scenario 1 is the target contextual model of the electronic device in air, where APP-A to APP-K are shown on the desktop; referring to fig. 4, scenario 2 is the target contextual model in water, in which the display desktop shows only APP-A, APP-C, APP-G and APP-I.
In an alternative embodiment, the control module 601 includes:
the frequency determination submodule is used for controlling the loudspeaker to emit a first sound wave signal by the electronic equipment and determining a first frequency of the first sound wave signal;
the control submodule is used for controlling the receiver to receive the sound wave signal and acquiring a second frequency of the sound wave signal;
and the signal determining submodule is used for determining that the sound wave signal received by the receiver is the first sound wave signal if the error between the first frequency and the second frequency is within a preset error threshold range.
In an optional embodiment, the apparatus 600 further comprises:
and the receiving module is used for controlling the receiver to continuously receive the sound wave signal if the error between the first frequency and the second frequency is not within the preset error threshold range.
In an optional embodiment, the rate determining module 602 includes:
a first determination submodule for determining a propagation time and a propagation path length of the first acoustic wave signal;
wherein the propagation time is a difference between an emission time of the first sound wave signal emitted by the speaker and a receiving time of the first sound wave signal received by the receiver; the propagation path length is a physical distance of the speaker and the earpiece on the electronic device;
and the second determining submodule is used for determining the propagation speed of the first sound wave signal according to the propagation path length and the propagation time.
In an alternative embodiment, the apparatus 600 comprises:
the execution module is used for controlling the electronic equipment to execute at least one of a first operation and a second operation if the target contextual model is an underwater mode; the first operation comprises controlling a display desktop to display a preset underwater application icon; the second operation comprises expanding the response range of the touch operation in the touch screen of the electronic equipment.
In the embodiment of the application, the control module 601 controls the speaker to emit a first sound wave signal, and controls the receiver to receive the first sound wave signal; a velocity determination module 602 determines a propagation velocity of the first acoustic signal; the medium determining module 603 determines a transmission medium in which the electronic device is located according to the propagation rate; the adjusting module 604 adjusts the contextual model of the electronic device to a target contextual model corresponding to the transmission medium, so that the contextual model is matched with the transmission medium, and convenience of a user in operating the electronic device is improved.
The profile adjusting apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The profile adjustment apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The contextual model adjusting device provided in the embodiment of the present application can implement each process implemented by the contextual model adjusting device in the method embodiments of fig. 1 to fig. 5, and is not described herein again to avoid repetition.
Optionally, an electronic device is further provided in this embodiment of the present application, and includes a processor 710, a memory 709, and a program or an instruction stored in the memory 709 and capable of running on the processor 710, where the program or the instruction is executed by the processor 710 to implement each process of the foregoing method for adjusting a contextual model, and can achieve the same technical effect, and details are not described here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a hardware structure diagram of an electronic device 700 for implementing various embodiments of the present application;
the electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, such that the functions of managing charging, discharging, and power consumption may be performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 710 is configured to control the speaker to emit a first sound wave signal and control the receiver to receive the first sound wave signal by the electronic device;
determining a propagation rate of the first acoustic signal;
determining a transmission medium in which the electronic equipment is positioned according to the propagation rate;
and adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium.
Optionally, the processor 710 is configured to control the speaker to emit a first sound wave signal, and determine a first frequency of the first sound wave signal;
controlling a receiver to receive a sound wave signal and acquiring a second frequency of the sound wave signal;
and if the error between the first frequency and the second frequency is within a preset error threshold range, determining that the sound wave signal received by the receiver is the first sound wave signal.
Optionally, the processor 710 is configured to control the receiver to continue receiving the sound wave signal if the error between the first frequency and the second frequency is not within the preset error threshold range.
Optionally, a processor 710 for determining a propagation time and a propagation path length of the first acoustic signal;
wherein the propagation time is a difference between an emission time of the first sound wave signal emitted by the speaker and a receiving time of the first sound wave signal received by the receiver; the propagation path length is a physical distance of the speaker and the earpiece on the electronic device;
determining a propagation rate of the first acoustic signal based on the propagation path length and the propagation time.
Optionally, the processor 710 is configured to control the electronic device to perform at least one of a first operation and a second operation if the target contextual model is an underwater mode; the first operation comprises controlling a display desktop to display a preset underwater application icon; the second operation comprises expanding the response range of the touch operation in the touch screen of the electronic equipment.
In the embodiment of the application, the electronic equipment controls the loudspeaker to send out a first sound wave signal and controls the receiver to receive the first sound wave signal; determining a propagation rate of the first acoustic signal; determining a transmission medium in which the electronic equipment is positioned according to the propagation rate; and adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium, so that the contextual model is matched with the transmission medium, and the convenience of a user for operating the electronic equipment is improved.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data, including but not limited to applications and operating systems. Processor 710 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned method for adjusting a contextual model, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the foregoing method for adjusting a contextual model, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A contextual model adjusting method, the method comprising:
the electronic equipment controls a loudspeaker to send out a first sound wave signal and controls a receiver to receive the first sound wave signal;
determining a propagation rate of the first acoustic signal;
determining a transmission medium in which the electronic equipment is positioned according to the propagation rate;
and adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium.
2. The contextual model adjusting method according to claim 1, wherein the electronic equipment controlling a speaker to emit a first sound wave signal and controlling a receiver to receive the first sound wave signal comprises:
the electronic equipment controls a loudspeaker to emit a first sound wave signal, and determines a first frequency of the first sound wave signal;
controlling a receiver to receive a sound wave signal and acquiring a second frequency of the sound wave signal;
and if the error between the first frequency and the second frequency is within a preset error threshold range, determining that the sound wave signal received by the receiver is the first sound wave signal.
3. The contextual model adjusting method according to claim 2, wherein after the acquiring of the second frequency of the sound wave signal, the method further comprises:
and if the error between the first frequency and the second frequency is not within the preset error threshold range, controlling the receiver to continuously receive the sound wave signal.
4. The contextual model adjusting method according to claim 1, wherein the determining a propagation rate of the first sound wave signal comprises:
determining a propagation time and a propagation path length of the first acoustic signal;
wherein the propagation time is a difference between an emission time of the first sound wave signal emitted by the speaker and a receiving time of the first sound wave signal received by the receiver; the propagation path length is a physical distance of the speaker and the earpiece on the electronic device;
determining a propagation rate of the first acoustic signal based on the propagation path length and the propagation time.
5. The contextual model adjusting method according to claim 1, wherein after the adjusting of the contextual model of the electronic equipment to the target contextual model corresponding to the transmission medium, the method comprises:
if the target contextual model is an underwater mode, controlling the electronic equipment to execute at least one of a first operation and a second operation; the first operation comprises controlling a display desktop to display a preset underwater application icon; the second operation comprises expanding the response range of the touch operation in the touch screen of the electronic equipment.
6. A contextual model adjusting apparatus, the apparatus comprising:
the control module is used for controlling the loudspeaker to send out a first sound wave signal and controlling the receiver to receive the first sound wave signal by the electronic equipment;
a velocity determination module for determining a propagation velocity of the first acoustic signal;
the medium determining module is used for determining a transmission medium where the electronic equipment is located according to the propagation rate;
and the adjusting module is used for adjusting the contextual model of the electronic equipment to a target contextual model corresponding to the transmission medium.
7. The contextual model adjusting apparatus according to claim 6, wherein the control module comprises:
the frequency determination submodule is used for controlling the loudspeaker to emit a first sound wave signal by the electronic equipment and determining a first frequency of the first sound wave signal;
the control submodule is used for controlling the receiver to receive the sound wave signal and acquiring a second frequency of the sound wave signal;
and the signal determining submodule is used for determining that the sound wave signal received by the receiver is the first sound wave signal if the error between the first frequency and the second frequency is within a preset error threshold range.
8. The contextual model adjusting apparatus according to claim 7, further comprising:
and the receiving module is used for controlling the receiver to continuously receive the sound wave signal if the error between the first frequency and the second frequency is not within the preset error threshold range.
9. The contextual model adjusting apparatus according to claim 6, wherein the rate determination module comprises:
a first determination submodule for determining a propagation time and a propagation path length of the first acoustic wave signal;
wherein the propagation time is a difference between an emission time of the first sound wave signal emitted by the speaker and a receiving time of the first sound wave signal received by the receiver; the propagation path length is a physical distance of the speaker and the earpiece on the electronic device;
and the second determining submodule is used for determining the propagation speed of the first sound wave signal according to the propagation path length and the propagation time.
10. The contextual model adjusting apparatus according to claim 6, wherein the apparatus comprises:
the execution module is used for controlling the electronic equipment to execute at least one of a first operation and a second operation if the target contextual model is an underwater mode; the first operation comprises controlling a display desktop to display a preset underwater application icon; the second operation comprises expanding the response range of the touch operation in the touch screen of the electronic equipment.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110105236.1A CN112954107A (en) 2021-01-26 2021-01-26 Contextual model adjusting method and device

Publications (1)

Publication Number Publication Date
CN112954107A (en) 2021-06-11

Family

ID=76237167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110105236.1A Pending CN112954107A (en) 2021-01-26 2021-01-26 Contextual model adjusting method and device

Country Status (1)

Country Link
CN (1) CN112954107A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160294442A1 (en) * 2015-03-31 2016-10-06 Sony Corporation Device environment determination
CN106357935A (en) * 2016-11-29 2017-01-25 维沃移动通信有限公司 Mobile terminal mode switching method and mobile terminal
CN107181854A (en) * 2017-02-27 2017-09-19 惠州Tcl移动通信有限公司 Mobile terminal accurately judges whether into water and the method for opening marine mode automatically
CN109345773A (en) * 2018-10-31 2019-02-15 广东小天才科技有限公司 A kind of overboard detection method and intelligent wearable device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023202527A1 (en) * 2022-04-22 2023-10-26 维沃移动通信有限公司 Method and apparatus for controlling power supply circuit, and electronic device and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210611)