WO2023065839A1 - Touch Feedback Method and Electronic Device - Google Patents

Touch Feedback Method and Electronic Device

Info

Publication number
WO2023065839A1
WO2023065839A1 · PCT/CN2022/116339 · CN2022116339W
Authority
WO
WIPO (PCT)
Prior art keywords: electronic device, sound, mark, feedback, sound feedback
Application number: PCT/CN2022/116339
Other languages: English (en), French (fr)
Inventors: 张超 (Zhang Chao), 金伟 (Jin Wei)
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2023065839A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 - Interaction techniques using icons
    • G06F 3/0484 - Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486 - Drag-and-drop

Definitions

  • the present application relates to the field of terminal technology, and in particular to a touch feedback method and an electronic device.
  • the display screens of most electronic devices are touch screens, through which users interact with the electronic devices.
  • when the user performs a touch operation, such as a click operation (for example, clicking a letter key in a keyboard input interface) or a sliding operation (for example, sliding to delete a contact), the electronic device may output touch feedback, such as sound feedback.
  • the sound feedback of the user's touch operation on the electronic device is often fixed.
  • taking the sliding operation as an example, no matter how the user performs it (fast or slow sliding speed, high or low touch pressure), the sound feedback is the same.
  • This approach is inflexible: fixed sound feedback cannot convey the difference between touch operations to the user, and the experience is poor.
  • the purpose of the present application is to provide a touch feedback method and an electronic device for improving the touch experience.
  • a touch feedback method is provided, which is applied to a first electronic device.
  • the first electronic device is, for example, a mobile phone, a tablet computer, or the like.
  • the first electronic device displays a first interface in response to the first operation, and the first interface includes the first identification of the first electronic device, the second identification of the second electronic device, and the third identification of the third electronic device.
  • the second electronic device and the third electronic device are surrounding devices scanned by the first electronic device. In response to a first drag operation, the first electronic device outputs a first sound feedback and establishes a connection with the second electronic device; the first drag operation drags the second mark so that the second electronic device establishes a connection with the first electronic device. In response to a second drag operation, the first electronic device outputs a second sound feedback and establishes a connection with the third electronic device; the second drag operation drags the third mark so that the third electronic device establishes a connection with the first electronic device. The second sound feedback is different from the first sound feedback.
  • the user may open the first interface on the first electronic device and connect a device to the first electronic device by dragging that device's identifier in the first interface.
  • sound feedback is generated when an identifier is dragged, and the feedback differs between identifiers, which avoids the monotony of a single fixed sound and further improves the interactive experience.
  • the second sound feedback is different from the first sound feedback, including: at least one of the sound type, loudness, pitch, channel, and duration differs between the second sound feedback and the first sound feedback.
  • the sound feedback includes various attributes such as sound type, loudness, pitch, channel, and duration.
  • the two sound feedbacks are different, and at least one of the attributes of the two sound feedbacks may be different, which is not limited in this embodiment of the present application.
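To make the attribute comparison above concrete, here is a minimal Python sketch; the class and field names are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SoundFeedback:
    """Hypothetical container for the attributes named in the text."""
    sound_type: str      # e.g. "ding", "dong", or a music segment
    loudness_db: float   # loudness (volume)
    pitch_hz: float      # pitch (frequency)
    channel: str         # e.g. "left", "right", "stereo"
    duration_ms: int     # playback duration

def feedbacks_differ(a: SoundFeedback, b: SoundFeedback) -> bool:
    """Two feedbacks are 'different' if at least one attribute differs."""
    return a != b

first = SoundFeedback("ding", 60.0, 440.0, "stereo", 300)
second = SoundFeedback("ding", 60.0, 440.0, "stereo", 500)  # only duration differs
assert feedbacks_differ(first, second)
assert not feedbacks_differ(first, SoundFeedback("ding", 60.0, 440.0, "stereo", 300))
```

A frozen dataclass gives value-based equality for free, so "at least one attribute differs" reduces to a single inequality check.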
  • the second sound feedback is different from the first sound feedback, including: the second sound feedback is different from the first sound feedback when at least one of the following conditions is met; the conditions include:
  • the second electronic device is different from the third electronic device in at least one of device type, volume, weight, and material;
  • the second electronic device and the third electronic device are at different distances from the first electronic device;
  • the second electronic device and the third electronic device are located in different directions from the first electronic device;
  • the second mark and the third mark are at different distances from the first mark;
  • the second mark and the third mark are located in different directions from the first mark
  • the drag speed of the first drag operation is different from that of the second drag operation;
  • the touch pressure on the touch screen differs between the first drag operation and the second drag operation.
  • the generated sound feedback is related to at least one of the following: the type, volume, weight, and material of the device corresponding to the identifier; the distance and direction between that device and the first electronic device; the distance and direction between that identifier and the identifier of the first electronic device; the drag speed; and the drag pressure.
  • the acoustic feedback is more flexible and variable.
  • the second sound feedback is different from the first sound feedback, including: when at least one of the following conditions is met, the loudness and/or pitch of the first sound feedback is greater than that of the second sound feedback; the conditions include:
  • the distance from the second marker to the first marker is greater than the distance from the third marker to the first marker
  • the distance from the second electronic device to the first electronic device is greater than the distance from the third electronic device to the first electronic device
  • the dragging speed of the first dragging operation is greater than the dragging speed of the second dragging operation
  • the touch pressure of the first drag operation is greater than the touch pressure of the second drag operation
  • the volume of the second electronic device is larger than the volume of the third electronic device
  • the weight of the second electronic device is greater than the weight of the third electronic device
  • the material hardness of the second electronic device is greater than the material hardness of the third electronic device.
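The monotone conditions above (a greater distance, drag speed, or touch pressure implies a greater loudness and/or pitch) can be sketched as a simple mapping; the linear form, the coefficients, and the function name are arbitrary assumptions for illustration, not taken from the patent.

```python
def feedback_loudness_db(distance: float, drag_speed: float,
                         touch_pressure: float, base_db: float = 50.0) -> float:
    """Illustrative monotone mapping: a larger marker distance, a faster
    drag, or a harder press each raises the loudness of the feedback."""
    return base_db + 2.0 * distance + 1.5 * drag_speed + 3.0 * touch_pressure

# Each condition listed above makes the first feedback louder than the second:
assert feedback_loudness_db(10.0, 5.0, 2.0) > feedback_loudness_db(4.0, 5.0, 2.0)  # farther
assert feedback_loudness_db(4.0, 9.0, 2.0) > feedback_loudness_db(4.0, 5.0, 2.0)   # faster
assert feedback_loudness_db(4.0, 5.0, 6.0) > feedback_loudness_db(4.0, 5.0, 2.0)   # harder
```

Any monotonically increasing function of these inputs satisfies the stated conditions; a linear combination is simply the easiest to read. The device-attribute conditions (volume, weight, hardness) could be folded in the same way.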
  • the second sound feedback is different from the first sound feedback, including: when at least one of the following conditions is met, the duration of the first sound feedback is longer than the duration of the second sound feedback; the conditions include:
  • the distance from the second marker to the first marker is greater than the distance from the third marker to the first marker
  • the distance from the second electronic device to the first electronic device is greater than the distance from the third electronic device to the first electronic device.
  • the second sound feedback is different from the first sound feedback, including:
  • the sound source direction indicated by the first sound feedback is a first direction when the following conditions are met; the conditions include: the second mark is located in the first direction of the first mark, and/or the second electronic device is located in the first direction of the first electronic device;
  • the sound source direction indicated by the second sound feedback is a second direction when the following conditions are met; the conditions include: the third mark is located in the second direction of the first mark, and/or the third electronic device is located in the second direction of the first electronic device.
  • in this way, the generated sound feedback can indicate the direction of the second mark relative to the first mark (the mark corresponding to the first electronic device). For example, if the second mark is located at the left rear of the first mark, the user can perceive through the sound feedback that the sound comes from the left rear.
  • This kind of sound feedback provides a better interactive experience.
  • alternatively, the generated sound feedback can indicate the actual direction of the second electronic device (the device corresponding to the second mark) relative to the first electronic device. For example, if the second electronic device is located at the left rear of the first electronic device, the user can perceive through the sound feedback that the sound comes from the left rear.
  • This sound feedback method conforms to the directional relationship between the devices in the real environment, and the interaction experience is better.
  • the sound source direction indicated by the first sound feedback is a first direction, including: the first sound feedback includes first left channel information and first right channel information, and the phase difference between them is a first phase difference, which is used to determine the sound source direction as the first direction. The sound source direction indicated by the second sound feedback is a second direction, including: the second sound feedback includes second left channel information and second right channel information, and the phase difference between them is a second phase difference, which is used to determine the sound source direction as the second direction. Therefore, in this embodiment of the present application, the sound feedback generated while dragging an identifier can convey direction information, which helps improve the touch experience.
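The left/right phase difference described above can be derived from a source direction using the standard far-field interaural-time-difference approximation. The patent only states that a phase difference encodes direction, so the formula, the constants, and the function name below are illustrative assumptions.

```python
import math

SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air
EAR_SPACING_M = 0.18         # assumed effective left/right ear (channel) spacing

def phase_difference_rad(azimuth_deg: float, freq_hz: float) -> float:
    """Left/right channel phase difference implied by a source azimuth.

    Far-field approximation: the interaural time difference is
    (d / c) * sin(azimuth), and the phase difference at frequency f is
    2 * pi * f * ITD.  Positive azimuth (source to the right) yields a
    positive phase difference.
    """
    itd = EAR_SPACING_M / SPEED_OF_SOUND_M_S * math.sin(math.radians(azimuth_deg))
    return 2.0 * math.pi * freq_hz * itd

# A source straight ahead produces no phase offset between the channels:
assert abs(phase_difference_rad(0.0, 440.0)) < 1e-12
# Sources on opposite sides produce opposite-signed phase differences:
assert phase_difference_rad(45.0, 440.0) > 0 > phase_difference_rad(-45.0, 440.0)
```

Playing the same waveform on both channels with this relative phase offset makes the sound appear to originate from the given azimuth, which is exactly the cue the first and second phase differences provide.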
  • the method further includes: a first distance between the second mark and the first mark in the first interface is positively correlated with a second distance between the second electronic device and the first electronic device; and/or,
  • the direction in which the second mark is located relative to the first mark is consistent with the direction in which the second electronic device is located relative to the first electronic device; and/or,
  • a third distance from the third mark to the first mark in the first interface is positively correlated with a fourth distance from the third electronic device to the first electronic device;
  • the direction in which the third mark is located relative to the first mark is consistent with the direction in which the third electronic device is located relative to the first electronic device.
  • the user can open the first interface on the first electronic device, and the first interface includes identifiers corresponding to the devices around the first electronic device, for example, the second mark corresponding to the second electronic device and the third mark corresponding to the third electronic device.
  • the position distribution of the marks in the first interface is related to the position distribution of the devices in the real environment. For example, if the second electronic device is at the left rear of the first electronic device in the real environment, then in the first interface the second mark is at the left rear of the first mark. Therefore, through the first interface, the user can know which devices are around the first electronic device as well as the location distribution of those devices, and the user experience is better.
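The position mapping described above (distance positively correlated, direction consistent) could be sketched as follows; the scaling constants, the screen-coordinate convention, and the function name are assumptions for illustration.

```python
import math

def marker_position(bearing_deg: float, distance_m: float,
                    scale_px_per_m: float = 50.0,
                    max_radius_px: float = 400.0):
    """Screen offset of a device's marker from the centred phone marker.

    The real bearing is preserved and the real distance is scaled down by
    a constant factor (and clamped so the marker stays on screen).  Screen
    x grows to the right, y grows downward, bearing 0 (north) maps to up.
    """
    r = min(distance_m * scale_px_per_m, max_radius_px)
    theta = math.radians(bearing_deg)
    return (r * math.sin(theta), -r * math.cos(theta))

# A device at the front left (45 degrees to the left of north), 2 m away,
# is drawn to the upper left of the phone marker:
tv = marker_position(-45.0, 2.0)
assert tv[0] < 0 and tv[1] < 0
# A device twice as far away is drawn proportionally farther out:
near_r = math.hypot(*marker_position(-45.0, 2.0))
far_r = math.hypot(*marker_position(-45.0, 4.0))
assert abs(far_r - 2 * near_r) < 1e-9
```

Clamping to `max_radius_px` keeps distant devices visible at the edge of the interface while preserving the positive distance correlation for everything closer.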
  • the first drag operation is used to drag the second marker to move to the position where the first marker is located.
  • the first drag operation is used to drag the second mark to move toward the position of the first mark without touching the first mark, or to drag the second mark to move toward the position of the first mark and contact the first mark.
  • the second drag operation is used to drag the third marker to move to the position where the first marker is located.
  • the second drag operation is used to drag the third mark to move toward the position of the first mark without touching the first mark, or to drag the third mark to move toward the position of the first mark and contact the first mark.
  • the first drag operation can also be implemented in other ways, as long as it connects the second electronic device to the first electronic device; similarly, the second drag operation can be implemented in other ways, as long as it connects the third electronic device to the first electronic device. This is not limited in this embodiment of the present application.
  • the method further includes: when the first drag operation is used to drag the second mark to move to the position of the first mark and touch the first mark, the first sound feedback includes a third sound feedback and a fourth sound feedback; wherein the third sound feedback is the sound feedback output while the second mark is moving, before it contacts the first mark, and the fourth sound feedback is the sound feedback output when the second mark contacts the first mark;
  • the second sound feedback includes a fifth sound feedback and a sixth sound feedback; wherein the fifth sound feedback is the sound feedback output while the third mark is moving, before it contacts the first mark, and the sixth sound feedback is the sound feedback output when the third mark contacts the first mark.
  • since the first drag operation drags the second mark to the position of the first mark and into contact with it, two kinds of sound feedback are produced during the drag: the third sound feedback while the second mark is moving, and the fourth sound feedback when the second mark contacts the first mark. For example, the sound feedback while the second mark moves toward the first mark is a "ding", and the sound feedback when the second mark touches the first mark is a "bang" collision sound. In this way, the sound feedback is richer and the user experience is better.
  • the third audio feedback is a different type of sound from the fourth audio feedback
  • the fifth audio feedback is a different type of sound from the sixth audio feedback.
  • the sound type is such as “ding”, “dong”, “kuang”, etc., and may also be a music segment.
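The two-stage feedback described above (one sound type while the mark is dragged, a different sound type on contact) can be sketched as follows; the sound names and the function name are placeholders, not taken from the patent.

```python
def drag_feedback(touches_target: bool):
    """Ordered list of sounds produced by one drag: a moving sound while
    the dragged mark approaches the target mark, and a different,
    collision-style sound only if the mark actually touches the target."""
    stages = ["ding"]            # e.g. third/fifth feedback, while moving
    if touches_target:
        stages.append("bang")    # e.g. fourth/sixth feedback, on contact
    return stages

assert drag_feedback(touches_target=True) == ["ding", "bang"]
assert drag_feedback(touches_target=False) == ["ding"]
```

The two stages being distinct sound types is what lets the user hear the moment of contact rather than just the motion.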
  • when the first drag operation is used to drag the second mark toward the position of the first mark, while the first sound feedback is being output, the loudness and/or pitch of the first sound feedback decreases as at least one of the following decreases: the distance from the second mark to the first mark, the drag speed of the first drag operation, or the touch pressure of the first drag operation;
  • when the second drag operation is used to drag the third mark toward the position of the first mark, while the second sound feedback is being output, the loudness and/or pitch of the second sound feedback decreases as at least one of the following decreases: the distance from the third mark to the first mark, the drag speed of the second drag operation, or the touch pressure of the second drag operation.
  • since the first drag operation drags the second mark toward the position of the first mark, the second mark gradually approaches the first mark. During this process, the loudness and/or pitch of the sound feedback changes dynamically, for example decreasing as the distance from the second mark to the first mark shortens, as the drag speed of the first drag operation decreases, or as the touch pressure of the first drag operation decreases. In this way, the pitch and loudness of the sound feedback change with the distance between the marks, and the experience is better.
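The dynamic attenuation described above could, for instance, be modelled as a linear ramp in the remaining distance; the linear form, the decibel range, and the function name are assumptions for illustration.

```python
def dynamic_loudness_db(start_distance: float, current_distance: float,
                        max_db: float = 70.0, min_db: float = 30.0) -> float:
    """Loudness fades as the dragged mark approaches the target mark:
    loudest when the drag starts, quietest when the marks meet."""
    frac = max(0.0, min(1.0, current_distance / start_distance))
    return min_db + (max_db - min_db) * frac

assert dynamic_loudness_db(100.0, 100.0) == 70.0   # drag just started: loudest
assert dynamic_loudness_db(100.0, 0.0) == 30.0     # marks touching: quietest
# Loudness decreases monotonically as the distance shortens:
assert dynamic_loudness_db(100.0, 40.0) < dynamic_loudness_db(100.0, 80.0)
```

The same ramp could drive pitch instead of (or in addition to) loudness, and the drag speed or touch pressure could be mixed into `frac` to cover the other conditions listed above.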
  • the second sound feedback is the same type of sound as the first sound feedback.
  • the sound type is such as “ding”, “dong”, “kuang”, etc., and may also be a music segment, etc., which is not limited in this embodiment of the present application.
  • an electronic device including:
  • a processor, a memory, and one or more programs;
  • the one or more programs are stored in the memory, and the one or more programs include instructions which, when executed by the processor, cause the electronic device to perform the method steps described in the first aspect above.
  • a computer-readable storage medium is provided, which stores a computer program; when the computer program runs on a computer, the computer executes the method provided in the first aspect above.
  • a computer program product including a computer program, which, when the computer program is run on a computer, causes the computer to execute the method provided in the first aspect above.
  • a graphical user interface on an electronic device is provided; the electronic device has a display screen, a memory, and a processor, and the processor is configured to execute one or more computer programs stored in the memory,
  • the graphical user interface includes a graphical user interface displayed when the electronic device executes the method provided in the first aspect above.
  • the embodiment of the present application further provides a chip system, the chip system is coupled with the memory in the electronic device, and is used to call the computer program stored in the memory and execute the technical solution of the first aspect of the embodiment of the present application.
  • “Coupling” in the embodiments of the application means that two components are directly or indirectly combined with each other.
  • FIGS. 1 to 3 are schematic diagrams of application scenarios provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a software structure of an electronic device provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a touch feedback method provided by an embodiment of the present application.
  • FIG. 7 is another schematic flowchart of a touch feedback method provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device may have a touch screen.
  • the electronic device may be a portable electronic device such as a mobile phone, a tablet computer, or a notebook computer; a wearable device such as a watch or a bracelet; a smart home device such as a TV or a refrigerator; or a vehicle-mounted device such as an in-vehicle display.
  • the embodiment of the present application does not limit the specific type of the electronic device.
  • the electronic device (the electronic device to which the touch feedback method provided in the embodiment of the present application is applicable) may be an electronic device in a communication system.
  • the communication system includes multiple devices, where different devices can establish connections, so as to realize data transmission between different devices.
  • the communication system may be called a hyperterminal, a hyperterminal group, a hyperterminal system, multi-device collaboration, multi-device interconnection, and the like.
  • the communication system is called a "hyper terminal" as an example.
  • when the user performs a touch operation on an electronic device in the hyper terminal to establish a connection with another device, the electronic device can output sound feedback, which helps improve the interactive experience.
  • FIG. 1 to FIG. 3 are schematic diagrams of an application scenario provided by an embodiment of the present application.
  • This application scenario takes the HyperTerminal scenario as an example.
  • the user's environment includes N electronic devices.
  • N is an integer greater than or equal to 2.
  • the device types of the N electronic devices may be the same or different.
  • all of the N electronic devices are mobile phones or tablet computers; or, among the N electronic devices, there are mobile phones, tablet computers, all-in-one machines, TV sets, and so on.
  • Applications (application, app for short) in the N electronic devices may be the same or different.
  • the applications include: instant messaging applications, video applications, audio applications, image capture applications, and the like.
  • instant messaging applications may include, for example, Changlian (MeeTime), WhatsApp, photo sharing applications such as Instagram, Kakao, and the like.
  • the image capturing application may include, for example, a camera application (system camera or third-party camera application).
  • Video applications and audio applications, for example those from Google, may also be included.
  • Connections can be established between N electronic devices.
  • the mobile phone can establish a connection with other devices.
  • a main interface 101 is displayed on the mobile phone, and the main interface 101 includes icons of various applications.
  • the control center interface 102 as shown in (b) in FIG. 2 is displayed.
  • the control center interface 102 includes a hyper terminal window 103 .
  • the identifications of the surrounding devices searched by the mobile phone are displayed in the hyper terminal window 103 .
  • the mobile phone displays a first interface 110 as shown in (c) of FIG. 2 .
  • the first interface 110 displays the identifiers of each device scanned by the mobile phone, such as a television identifier 106 , a laptop computer identifier 108 , a speaker identifier 109 and a mobile phone identifier 107 .
  • each logo may be displayed as a bubble or in another form, which is not limited in this application. To enhance the experience, the mobile phone logo 107 is located in the middle of the first interface 110, and the other logos are distributed around it, representing that the devices around the mobile phone include a TV set, a laptop computer, and a speaker. In this way, the first interface 110 shows the user which devices are around the mobile phone, and the experience is better.
  • if the user wants to connect the mobile phone to a certain device, he or she can drag the logo corresponding to that device to the mobile phone logo 107 on the first interface 110 to connect the device to the mobile phone.
  • the mobile phone responds to the first drag operation of dragging the TV ID 106 to the mobile phone ID 107 to establish a connection between the mobile phone and the TV.
  • the first dragging operation may move the TV logo 106 close to the mobile phone logo 107 without touching it, or move the TV logo 106 close to the mobile phone logo 107 and touch it; this is not limited in this embodiment. Establishing a device connection by dragging a logo in this way is easy to operate and provides a better experience.
  • the mobile phone can then display the interface shown in (d) of FIG. 2, in which the TV logo 106 and the mobile phone logo 107 are displayed next to each other (or attached, overlapping, snapped together, etc.), indicating that the mobile phone is connected to the TV. That is, the user drags the TV logo 106 to the position of the mobile phone logo 107; if the TV logo 106 ends up next to the mobile phone logo 107, the TV has connected to the mobile phone successfully, which gives the user a clear cue. If the connection between the mobile phone and the TV fails, the TV logo 106 can be restored to its position in (c) of FIG. 2, and the user can drag the TV logo 106 to the mobile phone logo 107 again to retry. In this way, the user can intuitively perceive the connection result (success or failure) of the two devices.
  • the mobile phone establishes a connection between the mobile phone and the notebook computer in response to the second drag operation of dragging the notebook computer identification 108 to the mobile phone identification 107 .
  • the second dragging operation may move the laptop logo 108 close to the mobile phone logo 107 without touching it, or move the laptop logo 108 close to the mobile phone logo 107 and touch it; this is not limited in this embodiment.
  • the mobile phone displays the interface shown in (d) of FIG. 2, in which the notebook computer logo 108 and the mobile phone logo 107 are displayed next to each other (or attached, overlapping, snapped together, etc.), indicating that the mobile phone and the laptop are connected.
  • if the connection fails, the laptop logo 108 returns to its position in (c) of FIG. 2.
  • the user can quickly and efficiently realize the connection between the mobile phone and other devices on the first interface 110, and can intuitively perceive the connection result.
  • after the mobile phone is connected with another device, it can perform data transmission with that device.
  • the display interface of the video playback application (such as the interface of a movie or TV series) on the mobile phone can be displayed through the TV, so that the user can watch the movie or TV series on the large-screen device.
  • the document (word) interface on the mobile phone can be displayed through the notebook computer, so that the user can edit the document on the notebook computer for office work, and the experience is better.
  • the above embodiment is described using the example of establishing a connection between a mobile phone and other devices (such as a TV set and a notebook computer); this method can also be used to establish connections with still other devices.
  • the display positions of the logos are different from the actual positions of the devices in the real environment (that is, the real environment shown in FIG. 1).
  • the TV logo 106 is in the front right of the mobile phone logo 107 , but in the real environment shown in FIG. 1 , the TV is in the front left of the mobile phone. Therefore, in some embodiments, after the mobile phone scans the surrounding devices, it only needs to display the identifiers corresponding to the scanned devices on the first interface 110, without considering the actual location of each device. This way is easy to implement, and the user can know which devices are around the mobile phone through the first interface 110 .
  • the display position of each mark on the first interface 110 may be related to the actual position of each device in the real environment. For example, including at least one of the following methods 1 or 2:
  • the distance between the two marks on the first interface 110 is positively correlated with the actual distance between the devices corresponding to the two marks. That is, the farther the actual distance between the two devices is, the farther the distance between the identifiers corresponding to the two devices is. For example, if the actual distance between the two devices is L, then the distance between the two identifiers is L/n, where n can be a positive integer. In other words, the actual distance is reduced by a certain ratio.
  • FIG. 3 it is another schematic diagram of the first interface 110 .
  • the TV logo 106 and the mobile phone logo 107 are relatively close (for example, 0.02 m apart), while the notebook computer logo and the mobile phone logo 107 are farther apart (for example, 0.04 m).
  • the mobile phone needs to determine the real distance between the surrounding devices and the mobile phone. There are many specific determination methods, such as laser ranging, etc., which will not be described in this article.
  • the direction of a logo relative to the mobile phone logo 107 on the first interface 110 is consistent with the real direction of the corresponding device relative to the mobile phone. For example, if a device is located in a first direction of the mobile phone, the logo corresponding to that device is located in the first direction of the mobile phone logo 107, or within a first direction range that includes the first direction. For example, in FIG. 1, the TV is located at the front left of the mobile phone (for example, 45 degrees to the left of north), so the TV logo 106 in the first interface 110 is located at the front left of the mobile phone logo 107 (for example, 45 degrees to the left of north), as shown in FIG. 3. For another example, in FIG. 1, the notebook computer is located at the front right of the mobile phone (for example, 45 degrees to the right of north), so the notebook computer logo 108 in the first interface 110 is located at the front right of the mobile phone logo 107 (for example, 45 degrees to the right of north), as shown in FIG. 3.
  • the mobile phone needs to determine the direction of the surrounding devices relative to the mobile phone (for example, 45 degrees to the right of north or 45 degrees to the left of north).
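The two mappings above, reducing the real distance by a ratio n (way 1) and preserving the direction (way 2), can be combined in a small sketch. This is an illustrative sketch only; the function name, the screen coordinate convention (y grows downward), and the value of n are assumptions, not part of the embodiment:

```python
import math

def mark_offset(real_distance, bearing_deg, n=100):
    """Map a device's real position (distance from the phone plus a bearing,
    0 = straight ahead, negative = to the left) to the on-screen offset of
    its mark from the mobile phone mark 107.

    Way 1: the screen distance is the real distance reduced by a ratio n.
    Way 2: the bearing (direction) is preserved.
    """
    screen_distance = real_distance / n      # way 1: scale down by n
    rad = math.radians(bearing_deg)
    dx = screen_distance * math.sin(rad)     # way 2: keep the bearing
    dy = -screen_distance * math.cos(rad)    # screen y grows downward
    return dx, dy

# TV at 2 units of distance, 45 degrees to the left of straight ahead
dx, dy = mark_offset(2.0, -45.0, n=100)
```

With these assumptions, the TV mark lands up and to the left of the phone mark, at 1/100 of the real distance.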
  • the first interface 110 may be as shown in (c) in FIG. 2 or as shown in FIG. 3 .
  • the following mainly takes the first interface 110 shown in (c) of FIG. 2 as an example for illustration.
  • the mobile phone can output corresponding touch feedback, such as sound feedback.
  • the mobile phone outputs a first sound feedback in response to the first drag operation of dragging the TV mark 106 to the mobile phone mark 107.
  • the mobile phone outputs a second sound feedback in response to the second drag operation of dragging the notebook computer mark 108 to the mobile phone mark 107.
  • the first sound feedback and the second sound feedback may be the same.
  • the first sound feedback is the same as the second sound feedback, including at least one of: the first sound feedback and the second sound feedback have the same sound type, the same loudness, the same pitch, the same channel, and the same duration.
  • the same sound type can be understood as the same kind of sound, for example, both are "ting dong (tinkle)", or both are "ding", or the same song segment, the same accompaniment segment, and so on.
  • the loudness, pitch, and channel of the sound are mentioned above. To facilitate understanding, these three parameters are briefly introduced.
  • Pitch represents the tone of the sound. The pitch mainly depends on the frequency of the sound wave and is expressed in hertz (Hz).
  • Loudness, also known as volume (gain), indicates the strength of the sound energy. The loudness mainly depends on the amplitude of the sound wave: the larger the amplitude, the greater the loudness; the smaller the amplitude, the lower the loudness. The unit of loudness is usually the decibel (dB). It should be understood that pitch and loudness are two different properties of sound: a high-pitched sound (such as a soprano) is not necessarily loud, and a low-pitched sound (such as a bass) is not necessarily quiet.
  • People perceive stereo because there is a phase difference between the sound wave signals collected by the two ears. Therefore, two sound emitting units can be installed on the electronic device, and the sound wave signals emitted by the two sound emitting units have a phase difference.
  • The two sound wave signals with a phase difference emitted by the electronic device are transmitted to the human ears, and the brain can perceive the stereo effect based on the phase difference.
  • The two sound emitting units are called two channels, such as the left and right channels, where the sound wave signal sent by the left channel and the sound wave signal sent by the right channel have a phase difference, and the phase difference can be used to determine the direction of the sound source.
  • The sound output by such an electronic device can indicate the direction of the sound source; that is, after the user hears the output sound, the brain can recognize the direction of the sound source. Therefore, the channel of the first sound feedback being the same as that of the second sound feedback can be understood as the two feedbacks indicating the same sound source direction, for example, both directly in front, so that the user feels through the first sound feedback that the sound comes from the front, and through the second sound feedback also feels that the sound comes from the front.
  • the duration of the first sound feedback and the second sound feedback is the same; for example, both are 1 s, 2 s, or 3 s, and the specific duration is not limited. That is to say, when different marks in the first interface 110 are dragged to the mobile phone mark 107, there may be no difference in the output sound feedback.
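The attribute-by-attribute notion of "same" used above can be made concrete with a small sketch. The class and field names below are illustrative assumptions, not the embodiment's data structures:

```python
from dataclasses import dataclass

@dataclass
class SoundFeedback:
    sound_type: str    # e.g. "ding"
    loudness_db: float
    pitch_hz: float
    channel: str       # indicated sound-source direction, e.g. "front"
    duration_s: float

ATTRS = ["sound_type", "loudness_db", "pitch_hz", "channel", "duration_s"]

def shared_attributes(a: SoundFeedback, b: SoundFeedback) -> list:
    """Return the attributes on which two sound feedbacks are the same."""
    return [name for name in ATTRS if getattr(a, name) == getattr(b, name)]

# Two feedbacks that agree on all five attributes are "the same"
first = SoundFeedback("ding", 60.0, 440.0, "front", 1.0)
second = SoundFeedback("ding", 60.0, 440.0, "front", 1.0)
```

Two feedbacks that agree on some but not all attributes correspond to the partially-different cases described below.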
  • the first sound feedback and the second sound feedback may be different.
  • The first sound feedback being different from the second sound feedback may include at least one of: different sound types, different loudness, different pitches, different channels, and different durations.
  • the sound type of the output sound feedback is related to the device type of the device corresponding to the mark. For example, if the device corresponding to the dragged mark is of type A (such as a TV set), the sound type of the output sound feedback is type 1, such as "ding". If the device corresponding to the dragged mark is of type B (such as a tablet computer), the sound type of the output sound feedback is type 2, such as "boom". Device types include TV set, mobile phone, speaker, watch, and the like.
  • Sound types include “ding,” “rub-a-dub,” “thud,” “splash,” musical snippets, and more.
  • The TV and the notebook computer belong to different device types, so the first sound feedback when the TV mark 106 is dragged and the second sound feedback when the notebook computer mark 108 is dragged have different sound types.
  • For example, the first sound feedback is "ding" and the second sound feedback is "dong". In this way, different marks produce clearly distinguishable types of sound feedback when dragged.
  • One possible way is to store, in the mobile phone, the correspondence between the sound type of the sound feedback and the device type; for example, device type A corresponds to sound type 1, device type B corresponds to sound type 2, and so on. Based on the correspondence, the mobile phone can determine which sound type the dragged mark corresponds to.
  • the corresponding relationship can be stored in the electronic device by default or set by the user, which is not limited in this application.
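The stored correspondence can be sketched as a simple lookup table. The table contents and the default value below are illustrative assumptions; as noted above, the real correspondence may be preset in the device or set by the user:

```python
# Hypothetical correspondence between device type and sound type.
SOUND_TYPE_BY_DEVICE_TYPE = {
    "tv": "ding",       # device type A -> sound type 1
    "tablet": "boom",   # device type B -> sound type 2
    "speaker": "dong",
}

def sound_type_for(device_type: str, default: str = "ding") -> str:
    """Determine which sound type the dragged mark's device corresponds to,
    falling back to a default for device types not in the table."""
    return SOUND_TYPE_BY_DEVICE_TYPE.get(device_type, default)
```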
  • the loudness of the output sound feedback is related to at least one of the following:
  • materials include: glass, porcelain, metal, pottery, plastic and so on.
  • the hardness of glass or porcelain is higher than that of pottery or plastic.
  • the pitch of the output sound feedback is related to at least one of the following:
  • the smaller the volume of the dragged object, or the lighter or softer its material, the lower the pitch of the sound produced. In this way, when the user drags a mark, the output sound feedback can give the user the same feeling as dragging a real object, so the user experience is better.
  • the channel of the output sound feedback is related to at least one of the following:
  • the direction of the dragged mark relative to the mobile phone mark 107 can be understood as the direction of the vector from the mobile phone mark 107 to the dragged mark, referred to as the direction of the dragged mark for convenience of description.
  • the TV mark 106 is located to the right front of the mobile phone mark 107, so the sound source direction indicated by the first sound feedback corresponding to the TV mark 106 is the right front. Thus, after hearing the first sound feedback, the user can feel that the sound comes from the right front, the same direction as the TV mark 106, so the user experience is better.
  • the notebook computer mark 108 is located to the left rear of the mobile phone mark 107, so the sound source direction indicated by the second sound feedback corresponding to the notebook computer mark 108 is the left rear. In this way, after the user hears the second sound feedback, the brain can feel that the sound comes from the left rear, the same direction as the notebook computer mark 108.
  • the TV is located in the left front of the mobile phone, so the sound source direction indicated by the first sound feedback corresponding to the TV logo 106 is the left front.
  • the laptop is located in the right front of the mobile phone, so the direction of the sound source indicated by the second sound feedback corresponding to the laptop logo 108 is the right front. In this way, the user can perceive the real direction of the device through the sound feedback, and the experience is better.
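One common way to make a two-channel output indicate a sound-source direction is constant-power panning between the left and right channels. The sketch below is an assumption about how the channel of the sound feedback could follow the mark's direction; the embodiment describes the effect (a phase difference between channels), not this particular formula:

```python
import math

def stereo_gains(bearing_deg: float):
    """Constant-power pan: map the direction of the dragged mark relative to
    the mobile phone mark (0 = front, -90 = left, +90 = right) to
    (left_gain, right_gain) so the sound appears to come from that side."""
    b = max(-90.0, min(90.0, bearing_deg))   # clamp to [-90, 90]
    theta = math.radians((b + 90.0) / 2.0)   # pan angle in [0, 90] degrees
    return math.cos(theta), math.sin(theta)
```

A mark at the right front gets a louder right channel, a mark at the left front gets a louder left channel, and a mark straight ahead gives equal gains on both channels.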
  • the duration of the output sound feedback is related to at least one of the following:
  • At least one of sound type, loudness, pitch, channel, and duration of the first sound feedback and the second sound feedback may be different.
  • the sound types of the first sound feedback and the second sound feedback are the same, both being “ding”, but different in loudness, pitch, channel, and duration.
  • the sound type of the first sound feedback and the second sound feedback are the same, both are “ding", and the duration is the same, but the loudness, pitch, and channel are different.
  • the output sound feedback may change dynamically during the dragging process of a marker.
  • the loudness and/or pitch of the first sound feedback (such as "ding") decreases as the distance between the TV mark 106 and the mobile phone mark 107 shortens, or as the dragging speed of the first drag operation decreases, or as the touch pressure of the first drag operation decreases.
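A dynamic loudness of this kind can be sketched as a linear interpolation driven by the remaining distance (the same shape would work for drag speed or touch pressure). The 20-60 dB range and the function name are assumptions for illustration:

```python
def drag_loudness_db(distance: float, start_distance: float,
                     min_db: float = 20.0, max_db: float = 60.0) -> float:
    """Loudness of the drag sound feedback decreases as the dragged mark
    approaches the mobile phone mark: loudest at the starting distance,
    quietest when the two marks meet."""
    if start_distance <= 0:
        return min_db
    frac = max(0.0, min(1.0, distance / start_distance))
    return min_db + frac * (max_db - min_db)
```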
  • the output sound feedback may include more than one type of sound.
  • The first drag operation may bring the TV mark 106 closer to the mobile phone mark 107 until it touches or collides with the mobile phone mark 107. Therefore, in some embodiments, in response to the first drag operation, the first sound feedback output by the mobile phone includes two types of sounds, called the third sound feedback and the fourth sound feedback, where the third sound feedback is generated from when the TV mark 106 starts to move until it contacts the mobile phone mark 107, and the fourth sound feedback is generated when the TV mark 106 contacts the mobile phone mark 107.
  • For example, a "ding" sound is produced from when the TV mark 106 starts to move until it touches the mobile phone mark 107, and a "clank" sound is produced when the TV mark 106 contacts the mobile phone mark 107.
  • In this way, a mark has one type of sound output while moving toward the mobile phone mark 107 and another type of sound output (such as a collision sound) when it collides with the mobile phone mark 107, so the interaction experience is better.
  • The above takes as an example that the first sound feedback generated when the TV mark 106 is dragged includes two types of sounds. It can be understood that the second sound feedback generated when the notebook computer mark 108 is dragged may also include two types of sounds, which is not repeated here.
  • the mobile phone can detect the coordinate position of the dragged logo on the display screen in real time, and when the distance between the coordinate position and the coordinate position of the mobile phone logo 107 is greater than the preset distance, a third sound feedback is output , otherwise, output the fourth sound feedback.
  • the specific value of the preset distance is not limited in this application.
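The switch between the third and fourth sound feedback based on the preset distance can be sketched as follows; the coordinate representation, the threshold value, and the sound names are illustrative assumptions:

```python
def pick_drag_sound(mark_pos, phone_mark_pos, preset_distance=0.5):
    """Output the third sound feedback ("ding") while the dragged mark is
    farther than the preset distance from the mobile phone mark 107, and
    the fourth sound feedback ("clank") once they come into contact."""
    dx = mark_pos[0] - phone_mark_pos[0]
    dy = mark_pos[1] - phone_mark_pos[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return "ding" if distance > preset_distance else "clank"
```

The phone would evaluate this on every coordinate update of the dragged mark reported by the touch sensor.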
  • Fig. 4 shows a schematic structural diagram of an electronic device.
  • the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, Antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, A display screen 194, and a subscriber identification module (subscriber identification module, SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, memory, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU) wait.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic equipment. The controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device, and can also be used to transmit data between the electronic device and peripheral devices.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the wireless communication function of the electronic device can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in an electronic device can be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied to electronic devices.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • the wireless communication module 160 can provide wireless local area networks (wireless local area networks, WLAN) (such as wireless fidelity (Wi-Fi) network), bluetooth (bluetooth, BT), global navigation satellite system, etc. (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC , FM, and/or IR techniques, etc.
  • the GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a Beidou navigation satellite system (beidou navigation satellite system, BDS), a quasi-zenith satellite system (quasi -zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
  • the display screen 194 is used to display the display interface of the application and the like.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active matrix organic light emitting diode or an active matrix organic light emitting diode (active-matrix organic light emitting diode, AMOLED), flexible light-emitting diode (flex light-emitting diode, FLED), Miniled, MicroLed, Micro-oLed, quantum dot light emitting diodes (quantum dot light emitting diodes, QLED), etc.
  • the electronic device may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the storage program area can store the operating system and software codes of at least one application program (such as iQiyi application, WeChat application, etc.).
  • the data storage area can store data (such as images, videos, etc.) generated during the use of the electronic device.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to realize expanding the storage capacity of the electronic device.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, save pictures, videos and other files in the external memory card.
  • the electronic device can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 180A may be disposed on display screen 194 .
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device.
  • In some embodiments, the angular velocity of the electronic device about three axes (i.e., the x, y, and z axes) can be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device may detect opening and closing of the flip holster using the magnetic sensor 180D.
  • When the electronic device is a flip phone, the electronic device can detect the opening and closing of the flip according to the magnetic sensor 180D.
  • the acceleration sensor 180E can detect the acceleration of the electronic device in various directions (generally three axes). When the electronic device is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the distance sensor 180F is used to measure the distance.
  • Electronic devices can measure distance via infrared or laser light. In some embodiments, when shooting a scene, the electronic device can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • Electronic devices emit infrared light outwards through light-emitting diodes.
  • Electronic devices use photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object in the vicinity of the electronic device.
  • When insufficient reflected light is detected, the electronic device may determine that there is no object in the vicinity of the electronic device.
  • the electronic device can use the proximity light sensor 180G to detect that the user holds the electronic device close to the ear to make a call, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode, automatic unlock and lock screen in pocket mode.
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the electronic device can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints. Electronic devices can use the collected fingerprint features to unlock fingerprints, access application locks, take pictures with fingerprints, answer incoming calls with fingerprints, etc.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device uses the temperature detected by the temperature sensor 180J to implement a temperature treatment strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device may reduce the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • the electronic device when the temperature is lower than another threshold, the electronic device heats the battery 142 to avoid abnormal shutdown of the electronic device caused by low temperature.
  • the electronic device boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • The touch sensor 180K is also known as a "touch panel".
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device, which is different from the position of the display screen 194 .
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal.
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device can receive key input and generate key signal input related to user settings and function control of the electronic device.
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback. For example, touch operations applied to different applications (such as taking pictures, playing audio, etc.) may correspond to different vibration feedback effects.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card. The SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to realize contact and separation with the electronic device.
  • The components shown in FIG. 4 do not constitute a specific limitation on the electronic device.
  • the electronic device in the embodiment of the present invention may include more or fewer components than those shown in FIG. 4.
  • the combination/connection relationship between the components in FIG. 4 can also be adjusted and modified.
  • Fig. 5 shows a software structural block diagram of an electronic device provided by an embodiment of the present application.
  • the software structure of the electronic device may be a layered architecture, for example, the software may be divided into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces. Assuming that the electronic device is an Android system, it may include an application program layer (referred to as the application layer), an application program framework layer (referred to as the framework layer) (framework, FWK), a hardware layer and so on.
  • the application program layer may include a series of application program packages.
  • it can include Huawei Smart Life, cameras, instant messaging applications, and more.
  • the framework layer includes monitoring module, sound processing module and parameter model.
  • the monitoring module can be used to monitor the current interaction scene, for example, the current interaction scene is a hyper terminal scene, a content loading scene, a keyboard input scene, and the like.
  • the monitoring module can also be used to monitor interactive operations and determine sound attribute parameters based on interactive operations. The detailed functions of the monitoring module will be introduced later.
  • the sound processing module can be used to process the sound to be output, such as loudness, increase or decrease of pitch, etc., which will be described in detail later.
  • the parameter model can be used to store various mapping relationships (will be introduced later).
  • the hardware layer includes device discovery, which is used to discover surrounding devices; and device connection, which is used to establish connections with surrounding devices.
  • The hardware layer may also include other components such as sensors, which collect the user's touch operations on the touch screen, and a player for playing sound feedback.
  • FIG. 6 is a schematic flowchart of a touch feedback method provided by an embodiment of the present application. This method can be applied to electronic devices in the scene shown in FIG. 1 , such as mobile phones.
  • the hardware structure of the electronic device is shown in FIG. 4
  • the software structure is shown in FIG. 5 .
  • the process includes:
  • the electronic device determines a current interaction scene.
  • There are many kinds of interaction scenes, such as the super terminal scene (as described above), the content loading scene, the keyboard input scene, and so on. For example, when the electronic device detects that the first interface 110 in (c) of FIG. 2 is opened, it determines that the current interaction scene is the super terminal scene.
  • When the electronic device determines that content is being loaded in the current display interface (for example, a web page is being loaded), it determines that the current interaction scene is the content loading scene.
  • When the electronic device detects that the user clicks a key on a keyboard (a physical keyboard or a soft keyboard), it determines that the current interaction scene is the keyboard input scene.
  • the electronic device determines an interaction operation type that needs to be monitored in a current interaction scenario.
  • the type of interactive operation includes click, double click, drag and drop, long press and so on.
  • Data of the interactive operation, such as the operation time (including the start time, the end time, etc.), the operation position (including the start position, the end position, etc.), and the touch pressure, can be collected through sensors (such as touch sensors).
  • The type of interactive operation can be determined from the data collected by the sensors. For example, the time difference between the start time and the end time can be used to determine whether the interactive operation is a click operation or a long-press operation, and/or the distance between the start position and the end position can be used to determine whether the interactive operation is a swipe, a click, and so on.
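The classification logic described above can be sketched as follows. The concrete thresholds (500 ms for a long press, 10 px for a swipe) are illustrative assumptions, not values from this application.

```python
import math

# Illustrative thresholds (assumptions, not values from this application).
LONG_PRESS_MS = 500   # a press longer than this counts as a long press
SWIPE_PX = 10         # movement farther than this counts as a swipe/drag

def classify_operation(start_time_ms, end_time_ms, start_pos, end_pos):
    """Classify a touch operation from sensor-collected start/end data."""
    duration = end_time_ms - start_time_ms
    distance = math.dist(start_pos, end_pos)
    if distance > SWIPE_PX:
        return "swipe"            # large positional change -> swipe/drag
    if duration > LONG_PRESS_MS:
        return "long_press"       # small movement, long duration
    return "click"                # small movement, short duration

# A long-press-and-slide (the drag operation) would combine both checks.
```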
  • different types of interaction operations need to be monitored in different interaction scenarios.
  • For example, in the super terminal scene (that is, when the first interface 110 in (c) of FIG. 2 is displayed), the drag operation needs to be monitored. The drag operation can be understood as a long-press-and-slide operation, such as long pressing and then dragging the television mark 106. In the keyboard input scene, the press operation needs to be monitored, and so on.
  • Taking the drag operation as an example, the characteristic information of the drag operation is determined. The characteristic information includes: the dragged mark; the target mark; the distance and/or direction between the dragged mark and the target mark; the dragging speed; the touch pressure; the device attributes corresponding to the dragged mark (device type, volume, weight, material, etc.); the distance and/or direction between the device corresponding to the dragged mark and the local device; and so on. For example, the dragged mark may be the television mark 106 or the laptop computer mark 108 in (c) of FIG. 2 above, and the target mark may be the mobile phone mark 107.
  • the electronic device determines a sound attribute parameter corresponding to the characteristic information according to the characteristic information of the interactive operation.
  • the sound attribute parameters include at least one of sound type, loudness, pitch, channel (that is, phase difference), and duration.
  • For example, the electronic device determines the sound attribute parameters corresponding to the characteristic information according to the characteristic information of the interactive operation and a preset mapping relationship, where the mapping relationship includes the mapping between the characteristic information of interactive operations and sound attribute parameters.
  • For example, when the characteristic information of the interactive operation determined in S603 includes at least one of the distance between the dragged mark and the target mark, the dragging speed of the drag operation, the touch pressure of the drag operation, and so on, the loudness of the sound feedback can be determined based on the characteristic information and the preset mapping relationship.
  • the mapping relationship includes the following table 1:
  • Table 1 Mapping relationship between the characteristic information of the interactive operation and loudness

    Characteristic information of the interactive operation | Loudness adjustment
    The distance between the dragged mark and the target mark is within [L1, L2] | 0 dB
    The distance between the dragged mark and the target mark is less than L1 | -20 dB
    The distance between the dragged mark and the target mark is greater than L2 | +20 dB
  • the numbers in this article are all examples, and the present application does not limit specific values.
  • For example, when the distance between the dragged mark and the target mark is less than L1, the loudness value is determined to be -20 decibels, so the loudness of the audio file to be played (hereinafter, the initial audio file) is reduced by 20 decibels, and the loudness-reduced audio file is played (i.e., the sound feedback).
  • the audio file may be sounds such as "Ding Dong", "Ding", etc., or may also be song fragments, accompaniment fragments or other sounds, which are not limited in this embodiment of the present application.
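Table 1's banded lookup, and applying the resulting decibel offset to raw audio samples, can be sketched as follows. The band limits for L1/L2 are illustrative assumptions; the ±20 dB offsets are the example values above, and 10^(dB/20) is the standard decibel-to-amplitude conversion.

```python
def loudness_offset_db(distance, l1=100.0, l2=300.0):
    """Table 1: map the mark-to-mark distance to a dB offset.

    l1 and l2 are illustrative band limits, not values from this application.
    """
    if distance < l1:
        return -20.0
    if distance > l2:
        return +20.0
    return 0.0

def apply_gain(samples, offset_db):
    """Scale raw samples by the amplitude factor equivalent to offset_db."""
    factor = 10 ** (offset_db / 20.0)   # -20 dB -> 0.1x amplitude
    return [s * factor for s in samples]

# Marks close together -> quieter feedback (-20 dB).
quiet = apply_gain([0.5, -0.5], loudness_offset_db(50))
```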
  • For another example, when the characteristic information of the interactive operation determined in S603 includes at least one of the distance between the dragged mark and the target mark, the dragging speed of the drag operation, the touch pressure of the drag operation, the material of the corresponding device, and so on, the pitch of the sound feedback can be determined based on the characteristic information and the preset mapping relationship.
  • the mapping relationship may include the following table 2:
  • Table 2 Mapping relationship between the characteristic information of the interactive operation and pitch

    Characteristic information of the interactive operation | Pitch adjustment
    The material of the device corresponding to the dragged mark is metal | 0 Hz
    The material of the device corresponding to the dragged mark is plastic | -20 Hz
    The material of the device corresponding to the dragged mark is glass | +20 Hz
  • For example, assuming the material of the device corresponding to the dragged mark is plastic, the pitch value is -20 Hz, so the pitch of the audio file is lowered by 20 Hz, and the pitch-reduced audio file is played (i.e., the sound feedback).
  • In other words, the electronic device can determine the corresponding pitch by querying the above mapping relationship according to the material of the device corresponding to the dragged mark, adjust the pitch of the audio file accordingly, and play the pitch-adjusted audio file to generate the sound feedback.
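A minimal sketch of the Table 2 lookup: the material-to-Hz offsets are the example values above, and the shift is shown in the simplest possible way, by offsetting the frequency of a generated tone. The 440 Hz base frequency is an assumption; a real implementation would pitch-shift the stored audio file instead.

```python
# Table 2: material of the dragged mark's device -> pitch offset in Hz.
PITCH_OFFSET_HZ = {"metal": 0.0, "plastic": -20.0, "glass": +20.0}

def feedback_frequency(material, base_hz=440.0):
    """Return the tone frequency after applying the material's pitch offset.

    base_hz is an illustrative assumption; unknown materials get no offset.
    """
    return base_hz + PITCH_OFFSET_HZ.get(material, 0.0)
```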
  • For another example, when the characteristic information of the interactive operation determined in S603 includes at least one of the direction of the dragged mark relative to the target mark and the direction of the device corresponding to the dragged mark relative to the local device, the channel of the sound feedback can be determined based on the characteristic information and the preset mapping relationship. Take determining the channel of the sound feedback based on the direction of the dragged mark relative to the target mark and the preset mapping relationship as an example.
  • the mapping relationship may include the following table 3:
  • Table 3 Mapping relationship between the characteristic information of the interactive operation and channel

    Characteristic information of the interactive operation | Channel (time difference) adjustment
    The direction of the dragged mark is within [A1, A2] | 0
    The direction of the dragged mark is smaller than A1 | -2s
    The direction of the dragged mark is greater than A2 | +2s

  • Wherein, A1 and A2 are angle values, respectively.
  • For example, when the direction of the dragged mark is smaller than A1, the phase difference (or time difference) between the left channel signal and the right channel signal corresponding to the audio file to be played is reduced by 2s.
  • Specifically, a head-related transfer function (HRTF) can be used to adjust the phase difference between the left and right channel signals, which will not be described in detail in this embodiment.
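One crude way to realize the channel adjustment above, without a full HRTF, is to delay one channel relative to the other so the brain perceives the source on the side of the earlier channel. The delay values in samples are illustrative assumptions.

```python
def pan_by_delay(mono, delay_samples):
    """Create a stereo pair whose inter-channel time difference places the source.

    A positive delay_samples delays the right channel, so the sound appears to
    come from the left; negative values do the opposite. This is a simplified
    stand-in for an HRTF-based adjustment, which would also shape the spectrum.
    """
    pad = [0.0] * abs(delay_samples)
    if delay_samples >= 0:
        left = mono + pad
        right = pad + mono          # right channel lags -> source on the left
    else:
        left = pad + mono           # left channel lags -> source on the right
        right = mono + pad
    return left, right

# Source perceived to the left: the right channel starts two samples later.
left, right = pan_by_delay([0.2, 0.4, 0.2], 2)
```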
  • For another example, when the characteristic information of the interactive operation determined in S603 includes the device type corresponding to the dragged mark, the sound type of the sound feedback can be determined based on the characteristic information and the preset mapping relationship.
  • the mapping relationship is as follows in Table 4:
  • Table 4 Mapping relationship between the characteristic information of the interactive operation and sound type

    Characteristic information of the interactive operation | Sound type
    The device type corresponding to the dragged mark is a television | boom
    The device type corresponding to the dragged mark is a speaker | ding

  • After the electronic device determines the device type corresponding to the dragged mark, the corresponding sound type can be determined through Table 4 above.
  • There are multiple ways for the electronic device to determine the device type corresponding to the dragged mark, for example, sending query information to the device corresponding to the dragged mark to query its device type, and so on.
  • The above takes the mapping relationship as an example of determining the sound attribute parameters. It can be understood that there may be other ways to determine the sound attribute parameters, for example, using a function.
  • For example, a first function is stored in the electronic device. The input of this function is at least one of: the distance between the dragged mark and the target mark, the dragging speed of the drag operation, the touch pressure of the drag operation, the volume, weight, and material of the device corresponding to the dragged mark, and the distance between the device corresponding to the dragged mark and the local device; the output of this function is the loudness. For example, the first function is y = k1·x + b1, where x is the input, y is the loudness, and k1 and b1 are known quantities. The values of k1 and b1 may be set by default or set by the user, which is not limited in this application.
  • Similarly, a second function is stored in the electronic device. The input of this function is at least one of: the distance between the dragged mark and the target mark, the dragging speed of the drag operation, the touch pressure of the drag operation, the volume, weight, and material of the device corresponding to the dragged mark, and the distance between the device corresponding to the dragged mark and the local device; the output of this function is the pitch. For example, the second function is y = k2·x + b2, where x is the input, y is the pitch, and k2 and b2 are known quantities. The values of k2 and b2 may be set by default or set by the user, which is not limited in this application.
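The function-based alternative can be sketched as a single linear map reused for loudness and pitch. The coefficient values and the optional clamping to a valid range are illustrative assumptions, not values from this application.

```python
def linear_attribute(x, k, b, lo=None, hi=None):
    """Compute y = k*x + b, optionally clamped to [lo, hi].

    x is the chosen characteristic (e.g. mark-to-mark distance), and k and b
    play the role of the "known quantities" k1/b1 (loudness) or k2/b2 (pitch).
    """
    y = k * x + b
    if lo is not None:
        y = max(lo, y)
    if hi is not None:
        y = min(hi, y)
    return y

# Illustrative coefficients: distance 150 mapped to a dB offset and a frequency.
loudness_db = linear_attribute(150.0, k=0.5, b=-30.0)   # first function (k1, b1)
pitch_hz = linear_attribute(150.0, k=0.2, b=400.0)      # second function (k2, b2)
```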
  • In other words, by querying the above mapping relationships or calculating the corresponding sound attribute parameters according to the characteristic information of the interactive operation, the electronic device can adjust the sound attributes of the audio file, play the adjusted audio file, and generate the sound feedback.
  • the electronic device processes the audio file according to the sound attribute parameter.
  • the sound attribute parameters include sound type, loudness, pitch, phase difference, duration and so on.
  • For example, the electronic device can search the stored audio files for the audio file corresponding to the sound type, adjust the loudness of that audio file, and then play the loudness-adjusted audio file.
  • many audio files can be stored in the electronic device, corresponding to various types of sounds, such as "ding", "dong", or music clips and so on.
  • The audio file may be set by default in the electronic device or set by the user, which is not limited in this embodiment of the present application.
  • the electronic device plays the processed audio file.
  • Optionally, a further step may be included: in response to the interactive operation, establishing a connection with the device corresponding to the dragged mark.
  • FIG. 7 is another schematic flowchart of the touch feedback method provided by the embodiment of the present application.
  • The flowchart can be understood as an information interaction diagram between different software modules in FIG. 5, for example, between the monitoring module and the sound processing module.
  • Figure 7 can be understood as a refinement of Figure 6, for example, Figure 7 refines the execution subject of each step in Figure 6. As shown in Figure 7, the process includes:
  • the monitoring module determines the current interaction scene.
  • For example, the monitoring module can monitor the display interface on the display screen (not shown in FIG. 5) in the hardware layer. When it detects that the display interface is the first interface 110 shown in FIG. 2, it determines that the current interaction scene is the super terminal scene; when it detects that the display interface is a content loading interface, it determines that the current interaction scene is the content loading scene.
  • the monitoring module determines the type of interaction operation that needs to be monitored in the current interaction scenario.
  • the monitoring module determines characteristic information of the interactive operation.
  • the monitoring module determines, according to the characteristic information of the interactive operation, a sound attribute parameter corresponding to the characteristic information.
  • For example, the monitoring module determines the sound attribute parameters corresponding to the characteristic information according to the characteristic information and the preset mapping relationship, as described above; the process of determining the sound attribute parameters is not repeated here.
  • the framework layer includes a parameter model, and the mapping relationship can be stored in the parameter model, so in S704, the monitoring module can query the parameter model for corresponding sound attribute parameters according to the characteristic information.
  • the monitoring module sends the determined sound attribute parameters to the sound processing module.
  • the sound attribute parameters include sound type, loudness, pitch, phase difference, duration and so on. After the monitoring module determines the sound attribute parameters, it sends them to the sound processing module.
  • the sound processing module processes the audio file according to the determined sound attribute parameters.
  • the sound processing module plays the processed audio file.
  • the sound processing module may call a player in the hardware layer to play the processed audio file.
  • Optionally, when the monitoring module in the electronic device detects the drag operation, it establishes, in response to the drag operation, a connection with the device corresponding to the dragged mark. For example, the monitoring module calls the device connection module in the hardware layer to connect with the device corresponding to the dragged mark.
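The division of labor in FIG. 7 between the monitoring module and the sound processing module can be sketched as two small classes. The hardware-layer player is stubbed as a plain list, the parameter model is a callable, and all names and the example mapping are illustrative assumptions.

```python
class SoundProcessor:
    """Framework-layer sound processing module: adjusts and plays audio."""
    def __init__(self, player):
        self.player = player        # hardware-layer player, stubbed as a list

    def play_feedback(self, params):
        # A real system would adjust loudness/pitch/channel of the audio file
        # according to params before handing it to the hardware player.
        self.player.append(("play", params["sound_type"], params["loudness_db"]))

class MonitoringModule:
    """Framework-layer monitoring module: scenes, operations, parameters."""
    def __init__(self, processor, parameter_model):
        self.processor = processor
        self.model = parameter_model            # stores the mapping relationships

    def on_drag(self, feature_info):
        params = self.model(feature_info)       # query the parameter model
        self.processor.play_feedback(params)    # hand off to sound processing

calls = []
module = MonitoringModule(
    SoundProcessor(calls),
    lambda info: {"sound_type": "ding",
                  "loudness_db": -20 if info["distance"] < 100 else 0},
)
module.on_drag({"distance": 50})   # close marks -> quieter "ding"
```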
  • The above mainly takes the super terminal scene as an example; the keyboard input scene is taken as an example below.
  • In the keyboard input scene, the type of interactive operation that needs to be monitored is the press operation. Then, the characteristic information of the press operation is determined, such as the press duration, press force, press speed, press depth, and the like. Correspondingly, the mapping relationship between the characteristic information of the interactive operation and the sound attribute parameters includes: the mapping relationship between at least one of the press duration, press force, press speed, and press depth and the loudness or pitch.
  • the mapping relationship includes the following table 5:
  • Table 5 Mapping relationship between feature information of interactive operation and loudness/pitch
  • For example, the loudness of the audio file is reduced by 20 decibels, and the loudness-reduced audio file is played (i.e., the sound feedback). That is to say, interactive operations with different press depths generate sound feedback with different loudness and/or pitch. Similarly, when at least one of the press speed, press force, and press duration is different, the loudness and/or pitch of the sound feedback is different.
  • In the content loading scene, the types of interactive operations that need to be monitored include operations for loading content, such as operations for opening web pages.
  • characteristic information of the operation is determined, such as loading speed and/or loading progress.
  • the mapping relationship between the feature information of the interactive operation and the sound attribute parameters includes: the mapping relationship between loading speed and/or loading progress and loudness or pitch.
  • the mapping relationship includes the following table 6:
  • Table 6 Mapping relationship between feature information of interactive operation and loudness/pitch
  • Characteristic information of the interactive operation | Loudness/pitch adjustment

    Loading speed and/or loading progress is within [L1, L2] | 0
    Loading speed and/or loading progress is less than L1 | -20
    Loading speed and/or loading progress is greater than L2 | +20
  • For example, when the loading speed and/or loading progress is less than L1, the loudness of the audio file is reduced by 20 decibels, and the loudness-reduced audio file is played (i.e., the sound feedback). That is to say, at different loading progress, the loudness and/or pitch of the generated sound feedback is different; for example, as the loading progress increases, the loudness and/or pitch increases, so the user can judge the loading progress from the loudness and/or pitch of the sound feedback. Similarly, at different loading speeds, the loudness and/or pitch of the generated sound feedback is different; for example, as the loading speed increases, the loudness and/or pitch increases, so the user can judge the loading speed from the loudness and/or pitch of the sound feedback.
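The rising feedback during loading can be sketched as a monotone map from progress to loudness and pitch; the output ranges are illustrative assumptions.

```python
def loading_feedback(progress, min_db=-20.0, max_db=0.0,
                     min_hz=400.0, max_hz=800.0):
    """Map loading progress in [0, 1] to (loudness_db, pitch_hz).

    Both attributes increase with progress, so the user can judge how far
    the load has advanced from the sound alone. Ranges are assumptions.
    """
    p = min(max(progress, 0.0), 1.0)           # clamp to [0, 1]
    loudness = min_db + p * (max_db - min_db)  # rises toward max_db
    pitch = min_hz + p * (max_hz - min_hz)     # rises toward max_hz
    return loudness, pitch
```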
  • FIG. 8 shows an electronic device 800 provided by this application.
  • the electronic device 800 may be the aforementioned mobile phone.
  • The electronic device 800 may include: one or more processors 801; one or more memories 802; a communication interface 803; and one or more computer programs 804, where the above components may be connected through one or more buses 805.
  • The one or more computer programs 804 are stored in the memory 802 and are configured to be executed by the one or more processors 801. The one or more computer programs 804 include instructions that can be used to perform the relevant steps of the electronic device in the corresponding embodiments above (such as the relevant steps in FIG. 6 or FIG. 7).
  • the communication interface 803 is used to implement communication with other devices, for example, the communication interface may be a transceiver.
  • the methods provided in the embodiments of the present application are introduced from the perspective of an electronic device (such as a mobile phone) as an execution subject.
  • the electronic device may include a hardware structure and/or a software module, and realize the above-mentioned functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above-mentioned functions is executed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • Depending on the context, the term "when" or "after" may be interpreted to mean "if", "after", "in response to determining", or "in response to detecting". Similarly, the phrases "when determining" or "if (a stated condition or event) is detected" may be interpreted to mean "if determining", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)".
  • relational terms such as first and second are used to distinguish one entity from another, without limiting any actual relationship and order between these entities.
  • references to "one embodiment” or “some embodiments” or the like in this specification means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • Appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", and the like in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly stated otherwise.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • When implemented using software, the above may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)).


Abstract

A touch feedback method and an electronic device. In response to a first operation, a first electronic device displays a first interface, where the first interface includes a first mark of the first electronic device, a second mark of a second electronic device, and a third mark of a third electronic device. In response to a first drag operation, the first electronic device outputs first sound feedback and establishes a connection with the second electronic device, where the first drag operation is used to drag the second mark so that the second electronic device establishes a connection with the first electronic device. In response to a second drag operation, the first electronic device outputs second sound feedback and establishes a connection with the third electronic device, where the second drag operation is used to drag the third mark so that the third electronic device establishes a connection with the first electronic device. The second sound feedback is different from the first sound feedback. In this way, the interaction experience during device connection is improved.

Description

A touch feedback method and electronic device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application No. 202111219989.1, entitled "A touch feedback method and electronic device" and filed with the Chinese Patent Office on October 20, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of terminal technology, and in particular, to a touch feedback method and an electronic device.
BACKGROUND
The display screens of most electronic devices are touch screens, through which users interact with the devices. To improve the interaction experience, a touch operation performed by the user on the touch screen is accompanied by touch feedback, such as sound feedback. For example, there is sound feedback for a click operation (e.g., tapping a letter key on a keyboard input interface), or for a slide operation (e.g., sliding to delete a contact), and so on.
At present, the sound feedback for a user's touch operation on an electronic device is often fixed. Taking the slide operation as an example, no matter how the user performs it (sliding fast or slow, with large or small touch pressure), the sound feedback remains the same. This approach is inflexible, and the fixed sound feedback does not let the user perceive the differences between touch operations, resulting in a poor experience.
SUMMARY
The purpose of this application is to provide a touch feedback method and an electronic device, so as to improve the touch experience.
In a first aspect, a touch feedback method is provided, applied to a first electronic device such as a mobile phone or a tablet computer. In response to a first operation, the first electronic device displays a first interface, where the first interface includes a first mark of the first electronic device, a second mark of a second electronic device, and a third mark of a third electronic device, and the second electronic device and the third electronic device are surrounding devices scanned by the first electronic device. In response to a first drag operation, the first electronic device outputs first sound feedback and establishes a connection with the second electronic device, where the first drag operation is used to drag the second mark so that the second electronic device establishes a connection with the first electronic device. In response to a second drag operation, the first electronic device outputs second sound feedback and establishes a connection with the third electronic device, where the second drag operation is used to drag the third mark so that the third electronic device establishes a connection with the first electronic device. The second sound feedback is different from the first sound feedback.
In this embodiment of this application, the user can open the first interface on the first electronic device and drag a mark in the first interface to connect the device corresponding to that mark to the first electronic device. To improve the touch experience, sound feedback can be produced when a mark is dragged; moreover, dragging different marks produces different sound feedback, which avoids the monotony of a single fixed sound feedback and further improves the interaction experience.
In some possible designs, the second sound feedback being different from the first sound feedback includes: the second sound feedback differs from the first sound feedback in at least one of sound type, loudness, pitch, channel, and duration.
In this embodiment of this application, the sound feedback has multiple attributes, such as sound type, loudness, pitch, channel, and duration. Two sound feedbacks are different as long as at least one of these attributes differs, which is not limited in this embodiment of this application.
In some possible designs, the second sound feedback being different from the first sound feedback includes: the second sound feedback is different from the first sound feedback when at least one of the following conditions is met; the conditions include:
the second electronic device and the third electronic device differ in at least one of device type, volume, weight, and material;
the distances from the second electronic device and the third electronic device to the first electronic device are different;
the second electronic device and the third electronic device are located in different directions relative to the first electronic device;
the distances from the second mark and the third mark to the first mark are different;
the second mark and the third mark are located in different directions relative to the first mark;
the dragging speeds of the first drag operation and the second drag operation are different;
the touch pressures of the first drag operation and the second drag operation on the touch screen are different.
In this embodiment of this application, when the user drags a mark on the first interface of the first electronic device, the produced sound feedback is related to at least one of the following: the type, volume, weight, and material of the device corresponding to the mark; the distance and direction between that device and the first electronic device; the distance and direction between that mark and the mark of the first electronic device; the dragging speed; and the dragging pressure. In this way, the sound feedback varies more flexibly.
In a possible design, the second sound feedback being different from the first sound feedback includes: the loudness and/or pitch of the first sound feedback is greater than that of the second sound feedback when at least one of the following conditions is met; the conditions include:
the distance from the second mark to the first mark is greater than the distance from the third mark to the first mark;
the distance from the second electronic device to the first electronic device is greater than the distance from the third electronic device to the first electronic device;
the dragging speed of the first drag operation is greater than that of the second drag operation;
the touch pressure of the first drag operation is greater than that of the second drag operation;
the volume of the second electronic device is greater than that of the third electronic device;
the weight of the second electronic device is greater than that of the third electronic device;
the material hardness of the second electronic device is greater than that of the third electronic device.
In a possible design, the second sound feedback being different from the first sound feedback includes: the duration of the first sound feedback is longer than that of the second sound feedback when at least one of the following conditions is met; the conditions include:
the distance from the second mark to the first mark is greater than the distance from the third mark to the first mark;
the distance from the second electronic device to the first electronic device is greater than the distance from the third electronic device to the first electronic device.
That is to say, the farther apart two marks are, or the farther apart the devices corresponding to the two marks are, the longer the sound feedback lasts. This feedback approach matches the actual situation and provides a good user experience.
In a possible design, the second sound feedback being different from the first sound feedback includes:
when the following condition is met, the sound source direction indicated by the first sound feedback is a first direction, the condition including: the second mark is located in the first direction relative to the first mark, and/or the second electronic device is located in the first direction relative to the first electronic device;
when the following condition is met, the sound source direction indicated by the second sound feedback is a second direction, the condition including: the third mark is located in the second direction relative to the first mark, and/or the third electronic device is located in the second direction relative to the first electronic device.
For example, when the user drags the second mark on the first interface of the first electronic device, the produced sound feedback can indicate the direction of the second mark relative to the first mark (the mark corresponding to the first electronic device). For example, if the second mark is at the rear left of the first mark, the user perceives through the sound feedback that the sound comes from the rear left; this feedback approach provides a good interaction experience.
For another example, when the user drags the second mark on the first interface of the first electronic device, the produced sound feedback can indicate the actual direction of the second electronic device corresponding to the second mark relative to the first electronic device. For example, if the second electronic device is at the rear left of the first electronic device, the user perceives through the sound feedback that the sound comes from the rear left; this feedback approach matches the directional relationship between the devices in the real environment and provides a good interaction experience.
In a possible design, the sound source direction indicated by the first sound feedback being the first direction includes: the first sound feedback includes first left channel information and first right channel information, and the phase difference between the first left channel information and the first right channel information is a first phase difference, which is used to determine that the sound source direction is the first direction. The sound source direction indicated by the second sound feedback being the second direction includes: the second sound feedback includes second left channel information and second right channel information, and the phase difference between the second left channel information and the second right channel information is a second phase difference, which is used to determine that the sound source direction is the second direction. Therefore, in this embodiment of this application, when a mark is dragged, the produced sound feedback can indicate direction information, which helps improve the touch experience.
In a possible design, the method further includes: a first distance from the second mark to the first mark in the first interface is positively correlated with a second distance from the second electronic device to the first electronic device; and/or,
the first direction in which the second mark is located relative to the first mark in the first interface is consistent with the second direction in which the second electronic device is located relative to the first electronic device; and/or,
a third distance from the third mark to the first mark in the first interface is positively correlated with a fourth distance from the third electronic device to the first electronic device; and/or,
the third direction in which the third mark is located relative to the first mark in the first interface is consistent with the fourth direction in which the third electronic device is located relative to the first electronic device.
In this embodiment of this application, the user can open the first interface on the first electronic device, and the first interface includes the marks corresponding to the devices around the first electronic device, for example, the second mark corresponding to the second electronic device and the third mark corresponding to the third electronic device. Moreover, the position distribution of the marks in the first interface is related to the position distribution of the devices in the real environment. For example, if the second electronic device is at the rear left of the first electronic device in the real environment, then the second mark is at the rear left of the first mark in the first interface. Therefore, through the first interface, the user can know which devices are around the first electronic device and how those devices are positioned, which provides a good user experience.
In a possible design, the first drag operation is used to drag the second mark toward the position of the first mark, for example, to move the second mark toward the position of the first mark without touching the first mark, or to move the second mark toward the position of the first mark until it touches the first mark. And/or, the second drag operation is used to drag the third mark toward the position of the first mark, for example, to move the third mark toward the position of the first mark without touching the first mark, or to move the third mark toward the position of the first mark until it touches the first mark.
It should be noted that the first drag operation may also be implemented in other ways, as long as the second electronic device is connected to the first electronic device; similarly, the second drag operation may also be implemented in other ways, as long as the third electronic device is connected to the first electronic device, which is not limited in this embodiment of this application.
In a possible design, the method further includes: when the first drag operation is used to drag the second mark toward the position of the first mark until it touches the first mark, the first sound feedback includes third sound feedback and fourth sound feedback, where the third sound feedback corresponds to the period from when the second mark starts moving until just before it touches the first mark, and the fourth sound feedback corresponds to the moment when the second mark touches the first mark;
and/or,
when the second drag operation is used to drag the third mark toward the position of the first mark until it touches the first mark, the second sound feedback includes fifth sound feedback and sixth sound feedback, where the fifth sound feedback corresponds to the period from when the third mark starts moving until just before it touches the first mark, and the sixth sound feedback corresponds to the moment when the third mark touches the first mark.
Taking the first drag operation as an example: since the first drag operation drags the second mark toward the position of the first mark until it touches the first mark, one kind of sound feedback (the third sound feedback) corresponds to the period from when the second mark starts moving until just before it touches the first mark, and another kind of sound feedback (the fourth sound feedback) corresponds to the moment when the second mark touches the first mark. For example, the sound feedback before contact is "ding", and the sound feedback on contact is a "clang" collision sound. In this way, the sound feedback is richer and the user experience is better.
In some embodiments, the third sound feedback and the fourth sound feedback are different types of sounds, and/or the fifth sound feedback and the sixth sound feedback are different types of sounds. The sound types include, for example, "ding", "dong", "clang", and so on, and may also be music clips, etc.
In a possible design, when the first drag operation is used to drag the second mark toward the position of the first mark, during the output of the first sound feedback, the loudness and/or pitch of the first sound feedback decreases with at least one of: the shortening of the distance from the second mark to the first mark, the decrease of the dragging speed of the first drag operation, and the decrease of the touch pressure of the first drag operation;
and/or,
when the second drag operation is used to drag the third mark toward the position of the first mark, during the output of the second sound feedback, the loudness and/or pitch of the second sound feedback decreases with at least one of: the shortening of the distance from the third mark to the first mark, the decrease of the dragging speed of the second drag operation, and the decrease of the touch pressure of the second drag operation.
Taking the first drag operation as an example: since the first drag operation drags the second mark toward the position of the first mark, the second mark gradually approaches the first mark, and sound feedback is produced throughout the approach. Moreover, the loudness and/or pitch of the sound feedback changes dynamically, for example, decreasing with at least one of: the shortening of the distance from the second mark to the first mark, the decrease of the dragging speed of the first drag operation, and the decrease of the touch pressure of the first drag operation. In this way, the pitch and loudness of the sound feedback change with the distance between the marks, which provides a good experience.
In a possible design, the second sound feedback and the first sound feedback are the same type of sound. The sound type is, for example, "ding", "dong", "clang", and so on, and may also be a music clip, which is not limited in this embodiment of this application.
In a second aspect, an electronic device is also provided, including:
a processor, a memory, and one or more programs;
where the one or more programs are stored in the memory, and the one or more programs include instructions that, when executed by the processor, cause the electronic device to perform the method steps of the first aspect above.
In a third aspect, a computer-readable storage medium is also provided, which is used to store a computer program that, when run on a computer, causes the computer to perform the method provided in the first aspect above.
In a fourth aspect, a computer program product is also provided, including a computer program that, when run on a computer, causes the computer to perform the method provided in the first aspect above.
In a fifth aspect, a graphical user interface on an electronic device is also provided. The electronic device has a display screen, a memory, and a processor, where the processor is configured to execute one or more computer programs stored in the memory, and the graphical user interface includes the graphical user interface displayed when the electronic device performs the method provided in the first aspect above.
In a sixth aspect, an embodiment of this application further provides a chip system. The chip system is coupled with a memory in an electronic device and is used to call the computer program stored in the memory and execute the technical solution of the first aspect of the embodiments of this application; in the embodiments of this application, "coupled" means that two components are directly or indirectly combined with each other.
For the beneficial effects of the second to sixth aspects above, refer to the beneficial effects of the first aspect, which are not repeated here.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 to FIG. 3 are schematic diagrams of an application scenario provided by an embodiment of this application;
FIG. 4 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application;
FIG. 5 is a schematic diagram of the software structure of an electronic device provided by an embodiment of this application;
FIG. 6 is a schematic flowchart of a touch feedback method provided by an embodiment of this application;
FIG. 7 is another schematic flowchart of a touch feedback method provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
DETAILED DESCRIPTION
The touch feedback method provided in the embodiments of this application is applicable to electronic devices. In some embodiments, the electronic device may have a touch screen. For example, the electronic device may be a portable electronic device such as a mobile phone, a tablet computer, or a laptop computer; a wearable device such as a watch or a band; a smart home device such as a television or a refrigerator; or an in-vehicle device such as an in-vehicle display. In short, the embodiments of this application do not limit the specific type of the electronic device.
In some embodiments, the electronic device (to which the touch feedback method provided in the embodiments of this application is applicable) may be an electronic device in a communication system. The communication system includes multiple devices, and different devices in it can establish connections with each other to enable data transmission between them. In some embodiments, the communication system may be called a super terminal, a super terminal group, a super terminal system, multi-device collaboration, multi-device interconnection, and so on. In the following embodiments, the communication system is referred to as a "super terminal" as an example. In the embodiments of this application, when a user performs a touch operation on one electronic device in the super terminal to connect it to another device, that electronic device can output sound feedback, which helps improve the interaction experience.
Exemplarily, FIG. 1 to FIG. 3 are schematic diagrams of an application scenario provided by an embodiment of this application. The application scenario takes the super terminal scenario as an example.
As shown in FIG. 1, the environment in which the user is located (e.g., home) includes N electronic devices, where N is an integer greater than or equal to 2. The device types of the N electronic devices may be the same or different; for example, the N electronic devices may all be mobile phones or all tablet computers, or they may include a mobile phone, a tablet computer, an all-in-one machine, a television, and so on. The applications (apps) in the N electronic devices may be the same or different. The applications include instant messaging applications, video applications, audio applications, image capture applications, and so on. Instant messaging applications may include, for example, MeeTime, WhatsApp, Instagram, Kakao, and the like. Image capture applications may include, for example, camera applications (the system camera or third-party camera applications). Various video applications and audio applications (for example, Google audio applications) may also be included. An application in an electronic device may be an application installed when the device leaves the factory, or an application downloaded from the network or obtained from another electronic device while the device is in use, which is not limited in the embodiments of this application. It should be noted that FIG. 1 takes N=4 as an example: the mobile phone held by the user, a television, a speaker, and a laptop computer.
Connections can be established between the N electronic devices. Taking the mobile phone as an example, the phone can establish connections with other devices. Exemplarily, as shown in (a) of FIG. 2, the phone displays a home screen 101 that includes icons of various applications. When the phone detects a preset operation (e.g., a downward swipe from the top of the home screen 101), it displays the control center interface 102 shown in (b) of FIG. 2. The control center interface 102 includes a super terminal window 103, which displays the marks of the surrounding devices found by the phone. When an operation on the mark 104 in the super terminal window 103 is detected, the phone displays the first interface 110 shown in (c) of FIG. 2. The first interface 110 displays the marks of the devices scanned by the phone, for example, the television mark 106, the laptop computer mark 108, the speaker mark 109, and the mobile phone mark 107. Exemplarily, each mark may be displayed as a bubble or in other forms, which is not limited in this application. It can be understood that, to improve the experience, the mobile phone mark 107 is located in the middle of the first interface 110 and the other marks are distributed around it, indicating that the devices around the phone include a television, a laptop computer, and a speaker. In this way, the first interface 110 shows the user which devices are around the phone, providing a good experience.
In some embodiments, if the user wants to connect the phone to a certain device, the user can drag the mark corresponding to that device toward the mobile phone mark 107 in the first interface 110 to connect that device to the phone.
For example, as shown in (c) of FIG. 2, in response to a first drag operation that drags the television mark 106 toward the mobile phone mark 107, the phone establishes a connection with the television. Exemplarily, the first drag operation may be an operation that moves the television mark 106 close to the mobile phone mark 107 without touching it, or an operation that moves the television mark 106 close to the mobile phone mark 107 until it touches it, which is not limited in the embodiments of this application. Establishing a device connection by dragging a mark in this way is convenient and provides a good experience. Optionally, if the phone connects to the television successfully, the phone may display the interface shown in (d) of FIG. 2, in which the television mark 106 is displayed right next to the mobile phone mark 107 (or attached, overlapped, adsorbed, etc.), indicating that the phone and the television are connected. That is to say, when the user drags the television mark 106 to the position of the mobile phone mark 107, if the television mark 106 ends up right next to the mobile phone mark 107, it indicates that the television and the phone are connected successfully, which gives the user a certain prompt. It can be understood that if the connection between the phone and the television fails, the television mark 106 can return to its position in (c) of FIG. 2, and the user can drag the television mark 106 to the mobile phone mark 107 again to retry the connection. Therefore, in this way, the user can intuitively perceive the connection result of the two devices (connection success or connection failure).
For another example, as shown in (c) of FIG. 2, in response to a second drag operation that drags the laptop computer mark 108 toward the mobile phone mark 107, the phone establishes a connection with the laptop computer. Exemplarily, the second drag operation may be an operation that moves the laptop computer mark 108 close to the mobile phone mark 107 without touching it, or an operation that moves the laptop computer mark 108 close to the mobile phone mark 107 until it touches it, which is not limited in the embodiments of this application. Optionally, if the phone connects to the laptop computer successfully, the phone displays the interface shown in (d) of FIG. 2, in which the laptop computer mark 108 is displayed right next to the mobile phone mark 107 (or attached, overlapped, adsorbed, etc.), indicating that the phone and the laptop computer are connected. It can be understood that if the connection fails, the laptop computer mark 108 returns to its position in (c) of FIG. 2, indicating connection failure.
Therefore, in the above manner, the user can quickly and efficiently connect the phone to other devices on the first interface 110 and intuitively perceive the connection result. After the phone is connected to another device, it can transmit data with that device. For example, after the phone is connected to the television, the display interface of a video playback application on the phone (such as a movie or TV series interface) can be presented on the television, so that the user can watch movies or TV series on a large-screen device. Alternatively, after the phone is connected to the laptop computer, a document (Word) interface on the phone can be presented on the laptop computer, so that the user can edit the document on the laptop for work, which provides a good experience.
It can be understood that the above embodiments take the phone establishing connections with other devices (e.g., the television and the laptop computer) as an example; devices other than the phone can also establish connections with other devices in this way.
It should be noted that in the first interface 110 shown in (c) of FIG. 2, the display positions of the marks differ from the actual positions of the devices in the real environment (i.e., the real environment shown in FIG. 1). For example, in (c) of FIG. 2, the television mark 106 is at the front right of the mobile phone mark 107, while in the real environment shown in FIG. 1, the television is at the front left of the phone. Therefore, in some embodiments, after the phone scans the surrounding devices, it only needs to display the marks corresponding to the scanned devices in the first interface 110, without considering the actual positions of the devices. This approach is easy to implement, and the user can still learn from the first interface 110 which devices are around the phone. In other embodiments, the display positions of the marks in the first interface 110 may be related to the actual positions of the devices in the real environment, for example, in at least one of the following Manner 1 or Manner 2:
Manner 1: the distance between two marks in the first interface 110 is positively correlated with the actual distance between the devices corresponding to the two marks. That is, the farther apart two devices actually are, the farther apart their corresponding marks are. For example, if the actual distance between two devices is L, the distance between the two marks is L/n, where n may be a positive integer; in other words, the actual distance is scaled down by a certain ratio. Taking n=100 as an example, if the real distance between the television and the phone in FIG. 1 is 2 m, the distance between the television mark 106 and the mobile phone mark 107 is 0.02 m; if the real distance between the laptop computer and the phone in FIG. 1 is 4 m, the distance between the laptop computer mark 108 and the mobile phone mark 107 is 0.04 m. Exemplarily, FIG. 3 shows another schematic diagram of the first interface 110. In this first interface 110, the television mark 106 is closer to the mobile phone mark 107 (e.g., 0.02 m) and the laptop computer mark is farther from the mobile phone mark 107 (e.g., 0.04 m). In this way, the user can perceive the real distance between two devices from the distance between their marks in the first interface 110. It can be understood that when Manner 1 is used, the phone needs to determine the real distance between the surrounding devices and the phone; there are many specific ways to do so, such as laser ranging, which are not detailed here.
方式2,第一界面110中一个标识位于手机标识107的方向与该标识对应的设备位于手机的真实方向一致。比如,一个设备位于手机的第一方向,那么该设备所对应的标识位于手机标识107的第一方向,或,位于第一方向范围,第一方向范围中包括第一方向。比如,图1中,电视机位于手机的左前方(比如北偏左45度),那么第一界面110中电视机标识106位于手机标识107的左前方(比如北偏左45度),如图3。再比如,图1中,笔记本电脑位于手机的右前方(比如北偏右45度),那么第一界面110中笔记本电脑标识108位于手机标识107的右前方(比如北偏右45度),如图3。这样,用户通过第一界面110可以知道周围设备的实际方向,用户体验较好。可以理解的是,使用方式2时,手机需要确定周围设备相对于手机的方向(比如北偏右45度或北偏左45度),具体的确定方式有多种,比如,麦克风阵列定位技术、波束指向(steered-beamformer)法,基于高分辩率谱分析(high-resolution spectral analysis)定向法,和基于声音时间差(time-delay estimation,TDE)定向法等等,本文不多赘述。
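方式2中“按设备相对本机的方位角摆放标识”的计算,可以示意如下。这里假设屏幕坐标系x向右、y向上,“北”对应界面正上方,方位角以北偏右为正、北偏左为负;函数名与scale取值均为本文假设:

```python
import math

def icon_offset(bearing_deg, ui_distance, scale=1000):
    """根据设备相对本机的方位角(度)和界面距离,
    计算该设备标识相对手机标识的屏幕偏移(dx向右为正,dy向上为正)。"""
    rad = math.radians(bearing_deg)
    dx = ui_distance * scale * math.sin(rad)  # 东西方向分量
    dy = ui_distance * scale * math.cos(rad)  # 南北方向分量
    return dx, dy
```

例如北偏右45度的设备,其标识落在手机标识的右前方(dx>0,dy>0);北偏左45度则落在左前方。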
通过以上描述可知,第一界面110可以是图2中的(c)所示的,也可以是图3所示的。为了方便描述,下文主要以图2中的(c)所示的第一界面110为例进行说明。
为了提升交互体验,在图2中的(c)所示的第一界面110中,手机响应于用户的触控操作(比如前面的第一拖拽操作或第二拖拽操作)可以输出对应的触控反馈,比如声音反馈。
示例性的,在图2中的(c)所示的第一界面110中,手机响应于用于将电视机标识106拖拽向手机标识107的第一拖拽操作,输出第一声音反馈。手机响应于用于将笔记本电脑标识108拖拽向手机标识107的第二拖拽操作,输出第二声音反馈。
在一些实施例中,第一声音反馈和第二声音反馈可以相同。示例性的,第一声音反馈与第二声音反馈相同,包括:第一声音反馈与第二声音反馈的声音类型相同、响度相同、音调相同、声道相同、持续时长相同中的至少一种。其中,声音类型相同可以理解为同一类声音,比如,都是“叮咚(tinkle)”,或都是“叮”,或是同一歌曲片段、同一伴奏片段等等。前面提到了声音的响度、音调、声道,为了便于理解先对这三个参数进行简单的介绍。其中,音调,也称音高(Pitch),表示声音的调子。音调大小主要取决于声波频率的高低,频率高则音调高,频率低则音调低。音调的单位用赫兹(Hz)表示。响度,也称为音量(Gain),表示声音能量的强弱程度。响度主要取决于声波振幅的大小,振幅大则响度大,振幅小则响度低。响度的单位一般是分贝(dB)。可以理解的是,音调和响度是声音的两种不同属性,音调高的声音(比如女高音)响度不一定大,音调低的声音(比如,男低音)响度不一定低。为了清楚的说明声道(Pan),先简单介绍人耳的听觉原理。环境中物体发出声音时,人的左耳和右耳都会采集到声波信号,但由于左耳和右耳的位置不同、朝向不同,所以左耳采集到的声波信号与右耳采集到的声波信号具有相位差(即时间差),大脑基于该相位差确定声源的具体位置,进而能够感受到立体声。为了给用户带来立体声的听觉感受,电子设备的设计者利用人耳听觉原理来设计电子设备上的发声单元。前面提到,人感受到立体声是因为采集的两个声波信号之间存在相位差,所以电子设备上可以设置两个发声单元,这两个发声单元发出的声波信号具有相位差。这样,电子设备发出的两个本身具有相位差的声波信号被传入人耳后,大脑基于所述相位差可以感受到立体声的效果。通常,所述两个发声单元被称为双声道,比如左声道和右声道,其中,左声道发出的声波信号与右声道发出的声波信号具有相位差,该相位差可以用于确定声源方向。因此,这类电子设备(利用人耳听觉原理设计的电子设备)所输出的声音能够指示声源方向,即,用户采集到输出的声音之后,大脑能够识别出声源方向。因此,第一声音反馈与第二声音反馈的声道相同,可以理解为第一声音反馈和第二声音反馈所指示的声源方向相同,比如都是正前方,这样用户通过第一声音反馈感受到声音来源于正前方,通过第二声音反馈也感受到声音来源于正前方。其中,第一声音反馈与第二声音反馈的持续时长相同,比如第一声音反馈和第二声音反馈的持续时长都是1s、2s或3s等,具体时长不限定。也就是说,第一界面110中不同标识被拖拽向手机标识107时,输出的声音反馈可以无差异。
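上文介绍的音调(频率)、响度(幅度/分贝)与持续时长,是声音的三个可独立控制的属性。下面用一个极简的正弦波合成草图直观说明三者的关系(采样率与函数名均为本文假设的示例,非本申请限定的实现):

```python
import math

def make_tone(freq_hz, gain_db, duration_s, sample_rate=16000):
    """按指定音调(频率,Hz)、响度(dB增益)和持续时长生成正弦波采样序列。
    gain_db=0对应幅度1.0;每降低20dB,幅度缩小为原来的1/10。"""
    amplitude = 10 ** (gain_db / 20)
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]
```

可以看到,改变freq_hz只影响音调、改变gain_db只影响响度、改变duration_s只影响时长,与正文“音调高的声音响度不一定大”的说明一致。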
在另一些实施例中,第一声音反馈和第二声音反馈可以不同。示例性的,第一声音反馈与第二声音反馈不同,可以包括:声音类型不同、响度不同、音调不同、声道不同、持续时长不同中的至少一种。
以第一声音反馈与第二声音反馈的声音类型不同为例。在一些实施例中,当一个标识被拖拽时,输出的声音反馈的声音类型与该标识所对应设备的设备类型相关。比如,如果被拖拽的标识对应的设备是类型A(比如电视机),输出的声音反馈的声音类型为类型1,比如“叮”。如果被拖拽的标识对应的设备是类型B(比如平板电脑),输出的声音反馈的声音类型为类型2,比如“咚”。其中,设备类型包括电视机、手机、音箱、手表等等各种类型。声音类型包括“叮”、“咚(rub-a-dub)”、“砰(thud)”、“咕咚(splash)”、音乐片段等等。以图2中的(c)为例,电视机与笔记本电脑属于不同类型的设备,所以电视机标识106被拖拽时的第一声音反馈与笔记本电脑标识108被拖拽时的第二声音反馈的声音类型不同。比如,第一声音反馈是“叮”、第二声音反馈是“咚”。如此,不同标识被拖拽时产生不同类型的声音反馈,区别较大。一种可实现方式为,手机中存储有声音反馈的声音类型与设备类型的对应关系,比如,设备类型A对应声音类型1,设备类型B对应声音类型2,等等。手机基于该对应关系可以确定被拖拽标识对应哪种声音类型的声音反馈,该对应关系可以是默认存储在电子设备中的,或者是用户设置的,本申请不限定。
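上述“设备类型→声音类型”对应关系的一种最简存储与查询方式如下。表中的键和值均为示意(其中音箱对应的声音类型为本文假设,正文并未给出),实际对应关系可由默认配置或用户设置提供:

```python
# 假设的设备类型→声音类型对应关系,键值均为示例
SOUND_TYPE_BY_DEVICE = {
    "tv": "ding",      # 电视机 → “叮”
    "tablet": "dong",  # 平板电脑 → “咚”
    "speaker": "peng", # 音箱(本文假设的示例值)
}

def sound_type_for(device_type, default="ding"):
    """查询被拖拽标识对应设备类型的声音类型;无记录时回退到默认类型。"""
    return SOUND_TYPE_BY_DEVICE.get(device_type, default)
```

这样,拖拽电视机标识与拖拽平板电脑标识会得到不同声音类型的反馈,与正文描述一致。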
以第一声音反馈与第二声音反馈的响度不同为例。在一些实施例中,当一个标识被拖拽时,输出的声音反馈的响度与如下至少一项相关:
1、与被拖拽的标识到手机标识107的距离相关。比如,所述距离越大,则响度越大。以图2中的(c)为例,电视机标识106距离手机标识107较远,笔记本电脑标识108距离手机标识107较近,所以电视机标识106对应的第一声音反馈的响度大于笔记本电脑标识108对应的第二声音反馈的响度。
2、与被拖拽的标识所对应的设备到手机的距离(该距离为两个设备之间的实际距离)相关。比如,所述距离越大,则响度越大。以图1和图2中的(c)为例,假设图1的真实环境中,电视机距离手机较远,笔记本电脑距离手机较近,那么电视机标识106对应的第一声音反馈的响度大于笔记本电脑标识108对应的第二声音反馈的响度。
3、与拖拽操作的拖拽速度和/或触摸压力相关。比如,拖拽速度和/或触摸压力越大,则响度越大。以图2中的(c)为例,假设电视机标识106被拖拽的速度较大,笔记本电脑标识108被拖拽的速度较小,则第一声音反馈的响度大于第二声音反馈的响度。
4、与被拖拽的标识所对应的设备的体积、重量、材质中的至少一项相关。比如,体积越大、重量越重或材质硬度越高,则响度越大。以图2中的(c)为例,假设电视机的体积、重量、材质硬度中的至少一项大于笔记本电脑,则电视机标识106对应的第一声音反馈的响度大于笔记本电脑标识108对应的第二声音反馈的响度。需要说明的是,在真实世界中,人拖拽物体时,被拖拽物体的体积越大、重量越重或材质越硬,则发出声音的响度越高。因此,这种方式中,当用户拖拽一个标识时,输出的声音反馈能够给用户带来仿佛在拖拽真实物体一样的感受,用户体验较好。其中,材质包括:玻璃、瓷器、金属、陶器、塑料等等。示例性的,玻璃或瓷器的硬度高于陶器或塑料等。
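上述第1~4项特征对响度的影响,可以用一个线性加权的草图统一表示:距离越远、速度越快、压力越大,增益越大。权重与基准值均为本文假设的示例,实际取值可由默认配置或用户设置决定:

```python
def feedback_gain_db(icon_distance, drag_speed, pressure,
                     w_dist=0.05, w_speed=0.02, w_press=10.0, base_db=-30.0):
    """将标识间距离(像素)、拖拽速度、触摸压力线性组合为响度增益(dB)。
    各权重w_*与基准base_db均为假设的示例值。"""
    return (base_db + w_dist * icon_distance
            + w_speed * drag_speed + w_press * pressure)
```

其中任一特征增大,输出的增益都单调增大,与“距离/速度/压力越大则响度越大”的描述对应。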
以第一声音反馈与第二声音反馈的音调不同为例。在一些实施例中,当一个标识被拖拽时,输出的声音反馈的音调与如下至少一项相关:
1、与被拖拽的标识与手机标识107之间的距离相关。比如,所述距离越大,则音调越高。以图2中的(c)为例,电视机标识106距离手机标识107较远,笔记本电脑标识108距离手机标识107较近,所以电视机标识106对应的第一声音反馈的音调高于笔记本电脑标识108对应的第二声音反馈的音调。
2、与被拖拽的标识所对应的设备到手机的距离(该距离为两个设备之间的实际距离)相关。比如,所述距离越大,则音调越高。以图1和图2中的(c)为例,假设图1的真实环境中,电视机距离手机较远,笔记本电脑距离手机较近,那么电视机标识106对应的第一声音反馈的音调大于笔记本电脑标识108对应的第二声音反馈的音调。
3、与拖拽操作的拖拽速度和/或触摸压力相关。比如,拖拽速度和/或触摸压力越大,则音调越高。
4、与被拖拽的标识所对应的设备的体积、重量、材质中的至少一种相关。比如,体积越大、重量越重或材质硬度越高,则音调越低。以图2中的(c)为例,假设电视机的体积、重量、材质硬度中的至少一项大于笔记本电脑,那么电视机标识106对应的第一声音反馈的音调低于笔记本电脑标识108对应的第二声音反馈的音调。需要说明的是,在真实世界中,人拖拽物体时,被拖拽物体的体积越大、重量越重或材质越硬,则发出声音的音调越低,相反地,被拖拽物体的体积越小、重量越轻或材质越软,则发出的声音的音调越高。因此,这种方式中,当用户拖拽一个标识时,输出的声音反馈能够给用户带来仿佛在拖拽真实物体一样的感受,用户体验较好。
以第一声音反馈与第二声音反馈的声道不同为例。在一些实施例中,当一个标识被拖拽时,输出的声音反馈的声道与如下至少一项相关:
1、与被拖拽的标识相对于手机标识107的方向相关。其中,被拖拽标识相对于手机标识107的方向,可以理解为从手机标识107到被拖拽标识的向量的方向,为了方便描述,简称为被拖拽标识所在方向。
以图2中的(c)为例,电视机标识106位于手机标识107的右前方,所以电视机标识106对应的第一声音反馈所指示的声源方向为右前方。这样,用户采集到第一声音反馈之后,可以感受到声音来源于右前方,与电视机标识106所在方向一致,体验较好。
继续以图2中的(c)为例,笔记本电脑标识108位于手机标识107的左后方,所以笔记本电脑标识108对应的第二声音反馈所指示的声源方向为左后方。这样,当用户采集到第二声音反馈之后,大脑可以感受到声音来源于左后方,与笔记本电脑标识108所在方向一致。
2、与被拖拽的标识所对应的设备相对于手机的方向(该方向为两个设备之间的实际方向)有关。
以图1和图2中的(c)为例,真实环境中,电视机位于手机的左前方,所以电视机标识106对应的第一声音反馈所指示的声源方向为左前方。笔记本电脑位于手机的右前方,所以笔记本电脑标识108对应的第二声音反馈所指示的声源方向为右前方。通过这种方式,用户可以通过声音反馈感知到设备的真实方向,体验较好。
以第一声音反馈与第二声音反馈的持续时长不同为例。在一些实施例中,当一个标识被拖拽时,输出的声音反馈的持续时长与如下至少一项相关:
1、与被拖拽的标识与手机标识107之间的距离相关。比如,所述距离越大,则持续时长越长。以图2中的(c)为例,电视机标识106距离手机标识107较远,笔记本电脑标识108距离手机标识107较近,所以电视机标识106对应的第一声音反馈的持续时长大于笔记本电脑标识108对应的第二声音反馈的持续时长。比如,第一声音反馈持续输出2s,第二声音反馈持续输出1s。
2、与被拖拽的标识所对应的设备到手机的距离相关。比如,所述距离越大,则持续时长越长。以图1和图2中的(c)为例,假设图1的真实环境中,电视机距离手机较远,笔记本电脑距离手机较近,那么电视机标识106对应的第一声音反馈的持续时长大于笔记本电脑标识108对应的第二声音反馈的持续时长。
在一些实施例中,第一声音反馈与第二声音反馈的声音类型、响度、音调、声道、持续时长中的至少一种不同即可。比如,第一声音反馈与第二声音反馈的声音类型相同,都是“叮”,但是响度、音调、声道、持续时长均不同。或者,第一声音反馈与第二声音反馈的声音类型相同,都是“叮”,而且持续时长都相同,但是响度、音调、声道均不同。
在一些实施例中,一个标识被拖拽的过程中,输出的声音反馈可以动态变化。以第一声音反馈为例,在输出第一声音反馈的过程中,第一声音反馈的响度或音调中的至少一项可以动态变化。比如,第一声音反馈(如,“叮”)的响度和/或音调随着:该电视机标识106与手机标识107之间的距离的缩短而降低,又或者,随着:第一拖拽操作的拖拽速度的降低而降低,又或者,随着:第一拖拽操作的触摸压力的减小而降低。
在另一些实施例中,一个标识被拖拽的过程中,输出的声音反馈中可以包括不止一种类型的声音。以前文所述的第一拖拽操作为例,第一拖拽操作可以是将电视机标识106向手机标识107靠近且接触或碰撞到手机标识107。因此,在一些实施例中,手机响应于第一拖拽操作,输出的第一声音反馈包括两种类型的声音,称为第三声音反馈和第四声音反馈,其中,第三声音反馈是电视机标识106开始移动到与手机标识107接触前产生的,第四声音反馈是电视机标识106与手机标识107接触时产生的。比如,电视机标识106开始移动到与手机标识107接触前产生“叮”的声音,当电视机标识106与手机标识107接触时产生“当啷(clank)”的碰撞音。这种方式,一个标识在移动向手机标识107的过程中有一种类型的声音输出,在碰撞到手机标识107时,有另一种类型的声音输出(比如碰撞音),交互体验更好。这里是以电视机标识106被拖拽时产生的第一声音反馈包括两种类型的声音为例,可以理解的是,笔记本电脑标识108被拖拽时产生的第二声音反馈也可以包括两种类型的声音,不重复赘述。一种可实现方式为,手机可以实时的检测被拖拽标识在显示屏上的坐标位置,当该坐标位置与手机标识107的坐标位置之间的距离大于预设距离时,输出第三声音反馈,否则,输出第四声音反馈。所述预设距离的具体取值,本申请不限定。
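上述“实时检测被拖拽标识坐标,与预设距离比较后在移动音与碰撞音之间切换”的逻辑,可以示意如下(PRESET_DISTANCE取值与返回的字符串均为本文假设):

```python
PRESET_DISTANCE = 40  # 假设的预设距离(像素),本申请不限定具体取值

def pick_feedback(dragged_pos, phone_pos):
    """被拖拽标识距手机标识大于预设距离时输出移动音(第三声音反馈),
    否则输出碰撞音(第四声音反馈)。"""
    dx = dragged_pos[0] - phone_pos[0]
    dy = dragged_pos[1] - phone_pos[1]
    if (dx * dx + dy * dy) ** 0.5 > PRESET_DISTANCE:
        return "moving_sound"   # 如“叮”
    return "contact_sound"      # 如“当啷”碰撞音
```

拖拽过程中每次坐标更新都调用一次该判断,即可在标识接触手机标识时自然切换到碰撞音。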
下面介绍本申请的相关设备。
图4示出了电子设备的结构示意图。如图4所示,电子设备可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是电子设备的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备充电,也可以用于电子设备与外围设备之间传输数据。充电管理模块140用于从充电器接收充电输入。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。
电子设备的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。天线1和天线2用于发射和接收电磁波信号。电子设备中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在电子设备上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信 号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
显示屏194用于显示应用的显示界面等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Mini LED,Micro LED,Micro OLED,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备可以包括1个或N个摄像头193,N为大于1的正整数。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,以及至少一个应用程序(例如爱奇艺应用,微信应用等)的软件代码等。存储数据区可存储电子设备使用过程中所产生的数据(例如图像、视频等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将图片,视频等文件保存在外部存储卡中。
电子设备可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。陀螺仪传感器180B可以用于确定电子设备的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备围绕三个轴(即,x,y和z轴)的角速度。
陀螺仪传感器180B可以用于拍摄防抖。气压传感器180C用于测量气压。在一些实施例中,电子设备通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。磁传感器180D包括霍尔传感器。电子设备可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备是翻盖机时,电子设备可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。加速度传感器180E可检测电子设备在各个方向上(一般为三轴)加速度的大小。当电子设备静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备可以通过红外或激光测量距离。在一些实施例中,在拍摄场景中,电子设备可以利用距离传感器180F测距以实现快速对焦。接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备通过发光二极管向外发射红外光。电子设备使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备附近有物体。当检测到不充分的反射光时,电子设备可以确定电子设备附近没有物体。电子设备可以利用接近光传感器180G检测用户手持电子设备贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备是否在口袋里,以防误触。指纹传感器180H用于采集指纹。电子设备可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备对电池142加热,以避免低温导致电子设备异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备可以接收按键输入,产生与电子设备的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备的接触和分离。
可以理解的是,图4所示的部件并不构成对电子设备的具体限定。本发明实施例中的电子设备可以包括比图4中更多或更少的部件。此外,图4中的部件之间的组合/连接关系也是可以调整修改的。
图5示出了本申请一实施例提供的电子设备的软件结构框图。
电子设备的软件结构可以是分层架构,例如可以将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。假设电子设备是Android系统,可包括应用程序层(简称应用层),应用程序框架层(简称框架层)(framework,FWK),硬件层等等。
其中,应用程序层可以包括一系列应用程序包。比如可以包括华为智慧生活、相机、即时通信应用等等。
框架层中包括监听模块、声音处理模块以及参数模型。其中,监听模块可以用于监听当前交互场景,比如当前交互场景是超级终端场景或者内容加载场景、键盘输入场景等。监听模块还可以用于监听交互操作,并基于交互操作确定声音属性参数等,关于监听模块的详细作用将在后文介绍。声音处理模块可以用于对要输出的声音进行处理,比如响度、音调的增大或降低等处理,具体将在后文介绍。参数模型可以用于存储各种映射关系(将在后文介绍)。
硬件层包括设备发现模块,用于发现周围设备;还包括设备连接模块,用于与周围设备建立连接。当然,硬件层还可以包括其它器件,比如传感器,用于采集用户在触摸屏上的触摸操作;还包括播放器,用于播放声音反馈。
示例性的,图6为本申请实施例提供的触控反馈方法的流程示意图。该方法可以适用于图1所示的场景中的电子设备,比如手机。所述电子设备的硬件结构如图4,软件结构如图5。如图6所示,所述流程包括:
S601,电子设备确定当前交互场景。
交互场景包括多种,比如超级终端场景(如前文描述)、内容加载场景、键盘输入场景,等等。示例性的,以超级终端场景为例,当电子设备检测到打开图2中的(c)的第一界面110时,确定当前交互场景为超级终端场景。再比如,当电子设备确定当前显示界面中正在加载内容(比如正在加载某个网页)时,确定当前交互场景为内容加载场景。再比如,当电子设备检测到用户点击键盘(物理键盘或软键盘)上的按键时,确定当前交互场景为键盘输入场景。
S602,电子设备确定当前交互场景下,需要监听的交互操作类型。
示例性的,交互操作的类型包括点击、双击、拖拽、长按等。一般通过传感器(比如触摸传感器)可以采集到交互操作的操作时间(包括开始时间、结束时间等)、操作位置(包括开始位置、结束位置等)、触摸压力等数据。通过传感器采集的数据可以确定出交互操作的类型,比如,通过开始时间和结束时间之间的时间差可以确定交互操作是单击操作还是长按操作,和/或,通过开始位置和结束位置之间的距离可以确定交互操作是滑动操作还是单击操作等等。
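基于开始/结束时间与开始/结束位置判断交互操作类型的过程,可以示意如下。其中长按时长阈值与滑动位移阈值均为本文假设的示例值,非本申请限定:

```python
def classify_gesture(t_start, t_end, p_start, p_end,
                     long_press_s=0.5, move_threshold=10):
    """根据触摸操作的时间差与位移,粗略判断交互操作类型。
    阈值long_press_s(秒)与move_threshold(像素)均为假设示例。"""
    dt = t_end - t_start
    dx = p_end[0] - p_start[0]
    dy = p_end[1] - p_start[1]
    moved = (dx * dx + dy * dy) ** 0.5 > move_threshold
    if moved and dt >= long_press_s:
        return "drag"        # 长按且滑动 → 拖拽
    if moved:
        return "swipe"       # 短时滑动
    if dt >= long_press_s:
        return "long_press"  # 长按
    return "tap"             # 单击
```

例如超级终端场景下只需关注返回"drag"的操作,键盘输入场景则关注按压类操作。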
在一些实施例中,不同交互场景下需要监听不同类型的交互操作。比如,在超级终端场景(即显示图2中的(c)的第一界面110时)下,需要监听拖拽操作。所述拖拽操作可以理解为长按且滑动的操作,比如对电视机标识106的长按然后拖动的操作。在键盘输入场景下,需要监听按压操作,等等。
S603,当监听到所述类型的交互操作时,确定交互操作的特征信息。
以超级终端场景为例,监听到拖拽操作时,确定拖拽操作的特征信息。示例性的,所述特征信息包括:被拖拽标识、目标标识、被拖拽标识与目标标识之间的距离和/或方向、拖拽速度、触摸压力、被拖拽标识对应的设备属性(设备类型、体积、重量、材质等等)、被拖拽标识对应的设备与本机的距离和/或方向等等。示例性的,被拖拽标识可以是前文图2中的(c)中电视机标识106、笔记本电脑标识108等,目标标识可以是手机标识107。
S604,电子设备根据所述交互操作的特征信息,确定所述特征信息对应的声音属性参数。
其中,声音属性参数包括声音类型、响度、音调、声道(即相位差)、持续时长中的至少一种。
一种可实现方式为,电子设备根据交互操作的特征信息和预设映射关系,确定所述特征信息对应的声音属性参数,所述映射关系包括交互操作的特征信息与声音属性参数之间的映射关系。
作为一种示例,假设S603中确定的交互操作的特征信息包括:被拖拽标识与目标标识之间的距离、拖拽操作的拖拽速度、拖拽操作的触摸压力、被拖拽标识对应的设备的体积、重量、材质、被拖拽标识对应的设备与本机的距离中的至少一项时,基于该特征信息以及预设映射关系,可以确定声音反馈的响度。以基于被拖拽标识与目标标识之间的距离和预设映射关系,确定声音反馈的响度为例,示例性的,所述映射关系包括下表1:
表1:交互操作的特征信息与响度的映射关系
交互操作的特征信息 响度值
被拖拽标识与目标标识之间的距离处于[L1,L2] 0
被拖拽标识与目标标识之间的距离小于L1 -20分贝
被拖拽标识与目标标识之间的距离大于L2 +20分贝
需要说明的是,本文中的数字(比如,0分贝、20分贝、-20分贝等)均是举例,本申请不限定具体取值。举例来说,被拖拽标识与目标标识之间的距离小于L1时,确定响度值为-20分贝,那么将待播放的音频文件(下文中的初始音频文件)的响度降低20分贝,播放响度降低之后的音频文件(即声音反馈)。所述音频文件可以是“叮咚”、“叮”等等声音,或者,还可以是歌曲片段、伴奏片段或者其它声音,本申请实施例不作限定。
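“将响度降低20分贝”这类dB调整作用到音频采样时,需要先把分贝换算为线性幅度倍数:每-20dB对应幅度乘以0.1,每+20dB对应乘以10。可示意如下(函数名为本文假设):

```python
def apply_gain_db(samples, gain_db):
    """把响度调整量(dB)换算为线性倍数并作用到音频采样上。
    例如gain_db=-20时,每个采样的幅度缩小为原来的1/10。"""
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in samples]
```

按表1的示例,查得-20分贝后对初始音频文件的采样整体乘以0.1,再播放即可得到降低响度后的声音反馈。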
作为另一种示例,假设S603中确定的交互操作的特征信息包括:被拖拽标识与目标标识之间的距离、拖拽操作的拖拽速度、拖拽操作的触摸压力、被拖拽标识对应的设备的体积、重量、材质、被拖拽标识对应的设备与本机的距离中的至少一项时,基于该特征信息以及预设映射关系,可以确定声音反馈的音调。以基于被拖拽标识对应的设备的材质和预设映射关系,确定声音反馈的音调为例,示例性的,所述映射关系可以包括下表2:
表2:交互操作的特征信息与音调的映射关系
交互操作的特征信息 音调
被拖拽标识所对应的设备的材质为金属 0赫兹
被拖拽标识所对应的设备的材质为塑料 -20赫兹
被拖拽标识所对应的设备的材质为玻璃 +20赫兹
举例来说,被拖拽标识所对应的设备的材质是玻璃,基于上述映射关系可确定音调值为+20赫兹,所以需要将音频文件的音调升高20赫兹,播放音调升高之后的音频文件(即声音反馈)。示例性的,调整音频文件的音调的方式有多种,比如相位声码器方法等等,本申请实施例不多赘述。通过这种方式,电子设备可以根据被拖拽标识所对应的设备的材质,通过查询上述映射关系确定出对应的音调,进而调整音频文件的音调,播放经过音调调整后的音频文件,产生声音反馈。
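调整音调的方式有多种,上文提到了相位声码器。作为原理示意,下面给出一个更简单的线性插值重采样草图:ratio>1整体升调,ratio<1整体降调。注意该方法会同时改变时长,相位声码器正是为避免时长变化而设计,此处仅用于说明音调与采样率的关系:

```python
def pitch_shift_resample(samples, ratio):
    """用线性插值重采样整体改变音调:ratio>1升调且变短,ratio<1降调且变长。
    仅为原理示意,保持时长不变需使用相位声码器等方法。"""
    n_out = int(len(samples) / ratio)
    out = []
    for i in range(n_out):
        pos = i * ratio
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

例如ratio=2.0时输出序列长度减半,播放时频率成分整体升高一个八度。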
作为另一种示例,假设S603中确定的交互操作的特征信息包括:被拖拽标识位于目标标识的方向、被拖拽标识对应的设备位于本机的方向中的至少一项时,基于该特征信息以及预设映射关系,可以确定声音反馈的声道。以基于被拖拽标识位于目标标识的方向和预设映射关系,确定声音反馈的声道为例。示例性的,所述映射关系可以包括下表3:
表3:交互操作的特征信息与声道的映射关系
交互操作的特征信息 相位差
被拖拽标识所在方向处于[A1,A2] 0
被拖拽标识所在方向小于A1 -2s
被拖拽标识所在方向大于A2 +2s
其中,A1和A2分别为角度值。举例来说,被拖拽标识所在方向小于A1时,确定相位差(或时间差)为-2s,那么将待播放的音频文件对应的左声道信号与右声道信号之间的相位差减少2s。比如,可以使用头部相关传输函数(Head-Related Transfer Function,HRTF)调整左、右声道信号之间的相位差,本申请实施例不多赘述。
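通过给左/右声道之一加时延来制造相位差(时间差)、从而指示声源方向的做法,可示意如下。需要说明的是,表3中的±2s仅为举例,真实的双耳时间差通常在亚毫秒量级;本函数“滞后声道的对侧为感知到的声源方向”的对应关系,是按人耳听觉原理作出的简化假设:

```python
def apply_itd(mono, delay_s, sample_rate=16000):
    """通过给某一声道加时延(相位差)指示声源方向:
    delay_s>0延迟右声道(声源偏左),delay_s<0延迟左声道(声源偏右)。"""
    d = int(round(abs(delay_s) * sample_rate))  # 时延对应的采样点数
    pad = [0.0] * d
    if delay_s >= 0:
        left, right = mono + pad, pad + mono  # 右声道滞后 → 声源偏左
    else:
        left, right = pad + mono, mono + pad  # 左声道滞后 → 声源偏右
    return left, right
```

实际实现中通常用HRTF对双声道作更精细的滤波,这里仅演示时间差这一个维度。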
作为另一种示例,假设S603中确定的交互操作的特征信息包括:被拖拽标识所对应的设备的类型时,基于该特征信息以及预设映射关系,可以确定声音反馈的声音类型。示例性的,所述映射关系如下表4:
表4:交互操作的特征信息与声音类型的映射关系
交互操作的特征信息 声音类型
被拖拽标识对应的设备类型为电视机
被拖拽标识对应的设备类型为音箱
也就是说,电子设备确定被拖拽的标识所对应的设备的类型之后,通过上述表4可以确定对应的声音类型。在一些实施例中,电子设备确定被拖拽标识所对应的设备类型的方式有多种,比如,向被拖拽标识对应的设备发送查询信息,以查询其设备类型,等等。
需要说明的是,上面的实施例以使用映射关系确定声音属性参数为例,可以理解的是,还可以有其它方式确定声音属性参数,比如,使用函数确定声音属性参数。
以响度为例,电子设备中存储第一函数,该函数的输入是被拖拽标识与目标标识之间的距离、拖拽操作的拖拽速度、拖拽操作的触摸压力、被拖拽标识对应的设备的体积、重量、材质、被拖拽标识对应的设备与本机的距离中的至少一项,该函数的输出是响度。示例性的,所述第一函数比如是线性函数y=k1x+b1,x为被拖拽标识与目标标识之间的距离、拖拽操作的拖拽速度、拖拽操作的触摸压力、被拖拽标识对应的设备的体积、重量、材质、被拖拽标识对应的设备与本机的距离中的至少一项,y为响度,k1和b1是已知量。其中,k1和b1的取值可以是默认设置好的,或用户设置的,本申请不限定。
以音调为例,电子设备中存储第二函数,该函数的输入是被拖拽标识与目标标识之间的距离、拖拽操作的拖拽速度、拖拽操作的触摸压力、被拖拽标识对应的设备的体积、重量、材质、被拖拽标识对应的设备与本机的距离中的至少一项,该函数的输出是音调。示例性的,所述第二函数比如是线性函数y=k2x+b2,x为被拖拽标识与目标标识之间的距离、拖拽操作的拖拽速度、拖拽操作的触摸压力、被拖拽标识对应的设备的体积、重量、材质、被拖拽标识对应的设备与本机的距离中的至少一项,y为音调,k2和b2是已知量。其中,k2和b2的取值可以是默认设置好的,或用户设置的,本申请不限定。
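上述第一函数、第二函数的线性映射可以直接写成如下草图(k1、b1、k2、b2的具体取值为本文假设的示例,实际可由默认配置或用户设置):

```python
def loudness_from_feature(x, k1=0.1, b1=-40.0):
    """第一函数:将特征值x(如拖拽速度)线性映射为响度(dB)。"""
    return k1 * x + b1

def pitch_from_feature(x, k2=2.0, b2=200.0):
    """第二函数:将特征值x线性映射为音调(Hz)。"""
    return k2 * x + b2
```

特征值越大,响度与音调随之线性增大;把k取为负值即可得到反向关系(如设备体积越大音调越低)。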
总之,电子设备可以根据交互操作的特征信息,通过查询上述映射关系或者通过函数计算出对应的声音属性参数,进而调整音频文件的声音属性,播放经过调整后的音频文件,产生声音反馈。
S605,电子设备根据所述声音属性参数对音频文件进行处理。
如前文所述,声音属性参数包括声音类型、响度、音调、相位差、持续时长等等。以声音属性参数包括声音类型和响度为例,电子设备可以在诸多音频文件中寻找该声音类型对应的音频文件,然后调整该音频文件的响度,之后播放经过响度调整后的音频文件。其中,电子设备中可以存储诸多音频文件,对应各种类型的声音,比如“叮”、“咚”、或者音乐片段等等。所述音频文件可以是电子设备默认的或者用户设置的,本申请实施例不作限定。
S606,电子设备播放处理后的音频文件。
可以理解的是,如果在S601中确定的交互场景为超级终端场景,那么还可以包括步骤:响应于交互操作,建立与被拖拽标识对应的设备之间的连接。
示例性的,请参见图7,为本申请实施例提供的触控反馈方法的另一种流程示意图。该流程图可以理解为图5中不同软件模块之间的信息交互图,比如,监听模块与声音处理模块之间的信息交互图。图7可以理解为是对图6的细化,比如图7中对图6中每一步的执行主体作细化。如图7所示,所述流程包括:
S701,监听模块确定当前交互场景。
关于交互场景的介绍请参见图6中S601。
示例性的,监听模块可以监听硬件层中的显示屏(图5中未示出)上的显示界面,当监听到显示界面为图2所示的第一界面110时,确定当前交互场景为超级终端场景,当监听到显示界面为内容加载界面时,确定当前交互场景为内容加载场景。
S702,监听模块确定当前交互场景下,需要监听的交互操作类型。
其中,S702的实现原理与S602的实现原理相同,不重复赘述。比如,不同交互场景下,需要监听哪些类型的交互操作,请参见S602的介绍。
S703,当监听到所述类型的交互操作时,监听模块确定交互操作的特征信息。
其中,确定特征信息的过程的实现原理与S603的实现原理相同,此处不重复赘述。
S704,监听模块根据所述交互操作的特征信息,确定所述特征信息对应的声音属性参数。
其中,S704的实现原理与S604的实现原理相同。比如,监听模块根据特征信息与预设映射关系,确定所述特征信息对应的声音属性参数。因此,此处不重复赘述声音属性参数的确定过程。
示例性的,如图5中,框架层中包括参数模型,参数模型中可以存储所述映射关系,所以S704中,监听模块可以根据特征信息在参数模型中查询对应的声音属性参数。
S705,监听模块将确定出的声音属性参数发送给声音处理模块。
如前文所述,声音属性参数包括声音类型、响度、音调、相位差、持续时长等等。监听模块确定声音属性参数之后发送给声音处理模块。
S706,声音处理模块根据确定出的声音属性参数对音频文件进行处理。
其中,根据声音属性参数对音频文件的处理过程请参见S605的介绍,此处不重复赘述。
S707,声音处理模块播放处理后的音频文件。
示例性的,如图5,声音处理模块可以调用硬件层中的播放器播放处理后的音频文件。
在一些实施例中,电子设备中的监听模块监听到拖拽操作之后,响应于该拖拽操作,建立与被拖拽标识对应的设备之间的连接。比如,如图5,监听模块调用硬件层中的设备连接模块与被拖拽标识对应的设备连接。
上面的实施例中,主要以超级终端场景为例进行介绍,下面以键盘输入场景为例介绍。如前文所述,在键盘输入场景中,需要监听的交互操作类型为按压操作。当监听到按压操作时,确定按压操作的特征信息,比如按压时长、按压力度、按压速度、按压深度等。这种场景下,交互操作的特征信息与声音属性参数之间的映射关系包括:按压时长、按压力度、按压速度、按压深度中的至少一项与响度或音调之间的映射关系。示例性的,所述映射关系包括如下表5:
表5:交互操作的特征信息与响度/音调的映射关系
交互操作的特征信息 响度/音调
按压时长、按压速度、按压深度和/或按压力度处于[L1,L2] 0
按压时长、按压速度、按压深度和/或按压力度小于L1 -20
按压时长、按压速度、按压深度和/或按压力度大于L2 +20
举例来说,当交互操作的按压深度小于L1时,对应的响度值为-20,那么将音频文件的响度降低20分贝,播放响度降低之后的音频文件(即声音反馈)。也就是说,按压深度不同的交互操作,产生的声音反馈的响度和/或音调不同。同理,按压速度、按压力度、按压时长中的至少一种不同时,声音反馈的响度和/或音调不同。
下面以内容加载场景为例介绍。在内容加载场景中,需要监听的交互操作类型包括用于加载内容的操作,比如用于打开网页的操作。当监听到所述操作时,确定所述操作的特征信息,比如加载速度和/或加载进度。这种场景下,交互操作的特征信息与声音属性参数之间的映射关系包括:加载速度和/或加载进度与响度或音调之间的映射关系。示例性的,所述映射关系包括如下表6:
表6:交互操作的特征信息与响度/音调的映射关系
交互操作的特征信息 响度/音调
加载速度和/或加载进度处于[L1,L2] 0
加载速度和/或加载进度小于L1 -20
加载速度和/或加载进度大于L2 +20
举例来说,当加载进度小于L1时,确定响度值为-20,那么将音频文件的响度降低20分贝,播放响度降低之后的音频文件(即声音反馈)。也就是说,加载进度不同时,产生的声音反馈的响度和/或音调不同。比如,随着加载进度的增大,响度越大和/或音调越高。用户可以通过声音反馈的响度和/或音调,判断加载进度。同理,加载速度不同时,产生的声音反馈的响度和/或音调不同。比如,随着加载速度的增大,响度越大和/或音调越高,用户可以通过声音反馈的响度和/或音调,判断加载速度大小。
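“随加载进度/速度增大而响度增大、音调升高”的映射,可以示意为如下草图(基准值与跨度均为本文假设的示例):

```python
def loading_feedback(progress, base_db=-40.0, span_db=20.0,
                     base_hz=300.0, span_hz=300.0):
    """加载进度progress(0~1)越大,返回的响度(dB)越大、音调(Hz)越高。
    base_*与span_*均为假设的示例参数。"""
    return base_db + span_db * progress, base_hz + span_hz * progress
```

这样用户仅凭声音反馈的响度与音调变化,就可以粗略判断加载进行到了什么程度。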
图8所示为本申请提供的一种电子设备800。该电子设备800可以是前文中的手机。如图8所示,电子设备800可以包括:一个或多个处理器801;一个或多个存储器802;通信接口803,以及一个或多个计算机程序804,上述各器件可以通过一个或多个通信总线805连接。其中该一个或多个计算机程序804被存储在上述存储器802中并被配置为被该一个或多个处理器801执行,该一个或多个计算机程序804包括指令,上述指令可以用于执行如上面相应实施例中电子设备的相关步骤(比如图6或图7的相关步骤)。通信接口803用于实现与其他设备的通信,比如通信接口可以是收发器。
上述本申请提供的实施例中,从电子设备(例如手机)作为执行主体的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,电子设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
以上实施例中所用,根据上下文,术语“当…时”或“当…后”可以被解释为意思是“如果…”或“在…后”或“响应于确定…”或“响应于检测到…”。类似地,根据上下文,短语“在确定…时”或“如果检测到(所陈述的条件或事件)”可以被解释为意思是“如果确定…”或“响应于确定…”或“在检测到(所陈述的条件或事件)时”或“响应于检测到(所陈述的条件或事件)”。另外,在上述实施例中,使用诸如第一、第二之类的关系术语来区分一个实体和另一个实体,而并不限制这些实体之间的任何实际的关系和顺序。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。在不冲突的情况下,以上各实施例的方案都可以组合使用。
需要指出的是,本专利申请文件的一部分包含受著作权保护的内容。除了对专利局的专利文件或记录的专利文档内容制作副本以外,著作权人保留著作权。

Claims (15)

  1. 一种触控反馈方法,其特征在于,应用于第一电子设备,包括:
    响应于第一操作,显示第一界面,所述第一界面中包括所述第一电子设备的第一标识、第二电子设备的第二标识,以及第三电子设备的第三标识;其中,所述第二电子设备和所述第三电子设备是所述第一电子设备扫描到的周围设备;
    响应于第一拖拽操作,输出第一声音反馈,并与所述第二电子设备建立连接,所述第一拖拽操作用于拖拽所述第二标识使得所述第二电子设备与所述第一电子设备建立连接;
    响应于第二拖拽操作,输出第二声音反馈,并与所述第三电子设备建立连接,所述第二拖拽操作用于拖拽所述第三标识使得所述第三电子设备与所述第一电子设备建立连接;
    所述第二声音反馈与所述第一声音反馈不同。
  2. 根据权利要求1所述的方法,其特征在于,所述第二声音反馈与所述第一声音反馈不同,包括:
    所述第二声音反馈与所述第一声音反馈的声音类型、响度、音调、声道、持续时长中的至少一项不同。
  3. 根据权利要求1或2所述的方法,其特征在于,所述第二声音反馈与所述第一声音反馈不同,包括:
    在满足如下条件中的至少一种时,所述第二声音反馈与所述第一声音反馈不同;所述条件包括:
    所述第二电子设备与所述第三电子设备的设备类型、体积、重量、材质中的至少一项不同;
    所述第二电子设备与所述第三电子设备到所述第一电子设备的距离不同;
    所述第二电子设备与所述第三电子设备位于所述第一电子设备的不同方向;
    所述第二标识与所述第三标识到所述第一标识的距离不同;
    所述第二标识与所述第三标识位于所述第一标识的不同方向;
    所述第一拖拽操作与所述第二拖拽操作的拖拽速度不同;
    所述第一拖拽操作与所述第二拖拽操作对触摸屏的触摸压力不同。
  4. 根据权利要求1-3任一所述的方法,其特征在于,所述第二声音反馈与所述第一声音反馈不同,包括:
    在满足如下条件中的至少一种时,所述第一声音反馈的响度和/或音调大于所述第二声音反馈的响度和/或音调;所述条件包括:
    所述第二标识到所述第一标识的距离大于所述第三标识到所述第一标识的距离;
    所述第二电子设备到所述第一电子设备的距离大于所述第三电子设备到所述第一电子设备的距离;
    所述第一拖拽操作的拖拽速度大于所述第二拖拽操作的拖拽速度;
    所述第一拖拽操作的触摸压力大于所述第二拖拽操作的触摸压力;
    所述第二电子设备的体积大于所述第三电子设备的体积;
    所述第二电子设备的重量大于所述第三电子设备的重量;
    所述第二电子设备的材质硬度大于所述第三电子设备的材质硬度。
  5. 根据权利要求1-3任一所述的方法,其特征在于,所述第二声音反馈与所述第一声音反馈不同,包括:
    在满足如下条件中的至少一种时,所述第一声音反馈的持续时长大于所述第二声音反馈的持续时长;所述条件包括:
    所述第二标识到所述第一标识的距离大于所述第三标识到所述第一标识的距离;
    所述第二电子设备到所述第一电子设备的距离大于所述第三电子设备到所述第一电子设备的距离。
  6. 根据权利要求1-3任一所述的方法,其特征在于,所述第二声音反馈与所述第一声音反馈不同,包括:
    在满足如下条件时,所述第一声音反馈所指示的声源方向为第一方向,所述条件包括:所述第二标识位于所述第一标识的第一方向,和/或,所述第二电子设备位于所述第一电子设备的第一方向;
    在满足如下条件时,所述第二声音反馈所指示的声源方向为第二方向,所述条件包括:所述第三标识位于所述第一标识的第二方向,和/或,所述第三电子设备位于所述第一电子设备的第二方向。
  7. 根据权利要求6所述的方法,其特征在于,
    所述第一声音反馈所指示的声源方向为第一方向,包括:
    所述第一声音反馈包括第一左声道信息和第一右声道信息,所述第一左声道信息与所述第一右声道信息之间的相位差为第一相位差;所述第一相位差用于确定声源方向为所述第一方向;
    所述第二声音反馈所指示的声源方向为第二方向,包括:
    所述第二声音反馈包括第二左声道信息和第二右声道信息,所述第二左声道信息与所述第二右声道信息之间的相位差为第二相位差;所述第二相位差用于确定声源方向为所述第二方向。
  8. 根据权利要求1-7任一所述的方法,其特征在于,所述方法还包括:
    所述第一界面中所述第二标识到所述第一标识的第一距离,与所述第二电子设备到所述第一电子设备的第二距离正相关;和/或,
    所述第一界面中所述第二标识位于所述第一标识的第一方向,与所述第二电子设备位于所述第一电子设备的第二方向一致;和/或,
    所述第一界面中所述第三标识到所述第一标识的第三距离,与所述第三电子设备到所述第一电子设备的第四距离正相关;和/或,
    所述第一界面中所述第三标识位于所述第一标识的第三方向,与所述第三电子设备位于所述第一电子设备的第四方向一致。
  9. 根据权利要求1-8任一所述的方法,其特征在于,
    所述第一拖拽操作用于拖拽所述第二标识向所述第一标识所在位置处移动且不接触所述第一标识,或者,所述第一拖拽操作用于拖拽所述第二标识向所述第一标识所在位置处移动且接触所述第一标识;
    和/或,
    所述第二拖拽操作用于拖拽所述第三标识向所述第一标识所在位置处移动且不接触所述第一标识,或者,所述第二拖拽操作用于拖拽所述第三标识向所述第一标识所在位置处移动且接触所述第一标识。
  10. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    在所述第一拖拽操作用于拖拽所述第二标识向所述第一标识所在位置处移动且接触所述第一标识的情况下,所述第一声音反馈包括第三声音反馈和第四声音反馈;其中,所述第三声音反馈是所述第二标识开始移动到与所述第一标识接触之前对应的声音反馈,所述第四声音反馈是所述第二标识与所述第一标识接触时对应的声音反馈;
    和/或,
    在所述第二拖拽操作用于拖拽所述第三标识向所述第一标识所在位置处移动且接触所述第一标识的情况下,所述第二声音反馈包括第五声音反馈和第六声音反馈;其中,所述第五声音反馈是所述第三标识开始移动到与所述第一标识接触之前对应的声音反馈,所述第六声音反馈是所述第三标识与所述第一标识接触时对应的声音反馈。
  11. 根据权利要求10所述的方法,其特征在于,
    所述第三声音反馈与所述第四声音反馈的声音类型不同;和/或,
    所述第五声音反馈与所述第六声音反馈的声音类型不同。
  12. 根据权利要求1-11任一所述的方法,其特征在于,
    在所述第一拖拽操作用于拖拽所述第二标识向所述第一标识所在位置处移动的情况下,输出所述第一声音反馈的过程中,所述第一声音反馈的响度和/或音调,随着所述第二标识到所述第一标识的距离的缩短、所述第一拖拽操作的拖拽速度的降低、所述第一拖拽操作的触摸压力减小中的至少一项而降低;
    和/或,
    在所述第二拖拽操作用于拖拽所述第三标识向所述第一标识所在位置处移动的情况下,输出所述第二声音反馈的过程中,所述第二声音反馈的响度和/或音调,随着所述第三标识到所述第一标识的距离的缩短、所述第二拖拽操作的拖拽速度的降低、所述第二拖拽操作的触摸压力减小中的至少一项而降低。
  13. 根据权利要求1-12任一所述的方法,其特征在于,所述第二声音反馈与所述第一声音反馈是同一类声音。
  14. 一种电子设备,其特征在于,包括:
    处理器,存储器,以及,一个或多个程序;
    其中,所述一个或多个程序被存储在所述存储器中,所述一个或多个程序包括指令,当所述指令被所述处理器执行时,使得所述电子设备执行如权利要求1至13中任意一项所述的方法步骤。
  15. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质用于存储计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行如权利要求1至13中任意一项所述的方法。
PCT/CN2022/116339 2021-10-20 2022-08-31 一种触控反馈方法与电子设备 WO2023065839A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111219989.1 2021-10-20
CN202111219989.1A CN115993885A (zh) 2021-10-20 2021-10-20 一种触控反馈方法与电子设备

Publications (1)

Publication Number Publication Date
WO2023065839A1 true WO2023065839A1 (zh) 2023-04-27

Family

ID=85989162

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/116339 WO2023065839A1 (zh) 2021-10-20 2022-08-31 一种触控反馈方法与电子设备

Country Status (2)

Country Link
CN (1) CN115993885A (zh)
WO (1) WO2023065839A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090013254A1 (en) * 2007-06-14 2009-01-08 Georgia Tech Research Corporation Methods and Systems for Auditory Display of Menu Items
CN102422712A (zh) * 2009-05-13 2012-04-18 皇家飞利浦电子股份有限公司 音频反馈和对光功能和设置的依赖性
CN102651915A (zh) * 2012-05-11 2012-08-29 卢泳 一种无线智能路由器及无线通信系统
CN103869950A (zh) * 2012-12-14 2014-06-18 联想(北京)有限公司 信息处理的方法及电子设备
CN110109730A (zh) * 2015-09-08 2019-08-09 苹果公司 用于提供视听反馈的设备、方法和图形用户界面
CN114077373A (zh) * 2021-04-29 2022-02-22 华为技术有限公司 一种电子设备间的交互方法及电子设备

Also Published As

Publication number Publication date
CN115993885A (zh) 2023-04-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22882455

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022882455

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022882455

Country of ref document: EP

Effective date: 20240322