Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In particular implementations, the terminal devices described in embodiments of the invention include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments, the device is not a portable communication device, but is a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or touchpad).
In the discussion that follows, a terminal device that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Referring to fig. 1, a schematic diagram of an implementation flow of a sound effect control method provided in an embodiment of the present invention is shown, where the method includes:
S11: acquiring positioning information generated when second wireless interactive elements on a plurality of sound boxes receive a positioning trigger signal of a target user, and determining the position of the target user according to the plurality of pieces of positioning information and a preset positioning rule;
In this embodiment, the main execution body of the sound effect control method is a sound effect control device, and the sound effect control device may be the sound box array itself, or another device that communicates with the sound box array, such as a mobile phone, a tablet computer, a notebook computer, or a server. In this embodiment, the plurality of sound boxes of the sound box array are arranged in an indoor environment.
In this embodiment, the sound box array includes at least two sound boxes, and a loudspeaker and a second wireless interactive element are disposed on each sound box. The second wireless interactive element may be a WIFI element or a Bluetooth element. The target user carries a first wireless interactive element which is in communication connection with the second wireless interactive elements. Optionally, the first wireless interactive element and the second wireless interactive elements are all Bluetooth elements. When the target user moves, the first wireless interactive element sends a positioning trigger signal to each sound box of the sound box array. After the second wireless interactive elements on the sound boxes receive the positioning trigger signal, the position of the target user is determined according to a preset positioning rule. The preset positioning rule may be an ultrasonic positioning method, a WIFI position-fingerprint positioning method, a Bluetooth trilateration method, a time difference method, or the like, as long as the position of the target user can be measured, which is not limited herein.
In a specific application scenario, the sound box array includes three sound boxes, and the second wireless interactive element on each sound box is a Bluetooth Beacon. When the second wireless interactive elements on the sound boxes receive the positioning trigger signal of the target user, corresponding positioning signals are respectively generated. In this case, the preset positioning rule is Bluetooth trilateration. Correspondingly, the distance information from each sound box to the target user is obtained according to the signal strength information in the corresponding positioning signal, a circle is drawn around each sound box with its distance information as the radius, and the intersection point of the three circles is the position of the target user.
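The trilateration described in this scenario can be illustrated with a short sketch (Python is used purely for illustration; the path-loss constants `tx_power` and `n` in the RSSI-to-distance conversion are assumed calibration values, and in practice require per-deployment calibration):

```python
def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Log-distance path-loss model: distance in metres from RSSI (dBm).
    tx_power is the RSSI measured at 1 m; n is the path-loss exponent.
    Both are deployment-dependent calibration values (assumed here)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Position of the target from three beacon positions and distances:
    the intersection point of the three circles, obtained by solving the
    linearised circle-intersection equations (beacons must not be collinear)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

For example, with beacons at (0, 0), (4, 0) and (0, 4) and distances measured from a target at (1, 2), the sketch recovers that position.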
S12: calculating distance information from each sound box to the target user according to the position of the target user, and calculating sound wave emission delay time of each sound box according to the distance information;
In this embodiment, after the position of the target user is determined, the sound wave emission delay time of each sound box is calculated according to the distance information from that sound box to the target user. Optionally, the time required for the sound wave emitted by the loudspeaker of each sound box to propagate to the target user is calculated according to the distance information, and then the sound wave emission delay times of the other sound boxes are calculated on the basis of the propagation time of the sound box with the longest propagation time.
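The delay calculation described above, which takes the sound box with the longest propagation time as the reference, can be sketched as follows (the speed of sound 343 m/s is an assumed room-temperature value):

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumed value)

def emission_delays(distances):
    """Delay (seconds) for each sound box so all wavefronts arrive at the
    target user together: the farthest box (longest propagation time)
    emits first with zero delay, and every nearer box waits out the
    difference in travel time."""
    travel = [d / SPEED_OF_SOUND for d in distances]
    longest = max(travel)
    return [longest - t for t in travel]
```

By construction, delay plus travel time is the same for every sound box, so the emitted sound waves arrive simultaneously.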
S13: and outputting corresponding sound wave emission delay time to each sound box so that the sound waves emitted by each loudspeaker reach the target user at the same time.
And finally, outputting the corresponding sound wave emission delay time to each sound box so as to control the loudspeaker of each sound box to emit sound waves according to the corresponding sound wave emission delay time.
One embodiment is illustrated as follows: the sound box array comprises a sound box 1, a sound box 2, a sound box 3 and a sound box 4. The shortest propagation distances from the sound box 1, the sound box 2, the sound box 3 and the sound box 4 to a target user are calculated to be 1 m, 1.2 m, 1.5 m and 1.8 m respectively according to the positions of the sound boxes and the target user. Then, on the basis of 1.8 m, the sound wave emission delay times of the sound box 1, the sound box 2, the sound box 3 and the sound box 4 are calculated to be 0.8/v, 0.6/v, 0.3/v and 0 seconds correspondingly, where v is the propagation speed of sound. The order of outputting sound waves is then determined according to the respective delay times: the loudspeaker of the sound box 4 outputs sound waves first; the loudspeaker of the sound box 3 starts to output sound waves 0.3/v seconds after the loudspeaker of the sound box 4; the loudspeaker of the sound box 2 starts to output sound waves 0.6/v seconds after the loudspeaker of the sound box 4; and the loudspeaker of the sound box 1 starts to output sound waves 0.8/v seconds after the loudspeaker of the sound box 4. Calculating the delay times of the sound waves output by the loudspeakers in this way ensures that the sound waves output by the loudspeakers of the sound boxes reach the target user at the same time.
In this embodiment, when the second wireless interactive elements receive the positioning trigger signal of the target user, positioning information is generated, the position of the target user is determined according to the positioning information, and the sound wave emission delay time of the loudspeaker of each sound box is adjusted according to the distance between that sound box and the target user. The sound waves output by the loudspeakers therefore reach the user at the same time, follow changes in the user's position, and achieve a multi-channel effect without increasing hardware cost.
Referring to fig. 2, a schematic flow chart of an implementation process of the sound effect control method according to the second embodiment of the present invention includes steps S21 to S24, where steps S21 to S22 together refine step S11, and steps S23 to S24 are the same as steps S12 to S13, which are not repeated herein. The details are as follows:
S21: acquiring an angle signal generated when the antenna array of the second wireless interactive element receives the positioning trigger signal;
In this embodiment, the second wireless interactive element is a Bluetooth element, and an antenna array is disposed on the second wireless interactive element, so that when the second wireless interactive element receives the positioning trigger signal, the direction of arrival of the positioning trigger signal can be obtained through the antenna array. The direction of arrival refers to the incident wave direction of the positioning trigger signal relative to the second wireless interactive element; that is, an angle signal can be generated from the direction of arrival. When the number of sound boxes is two, the exact position of the user is calculated according to the angle information contained in the angle signals of the second wireless interactive elements on the two sound boxes and the installation distance between the two second wireless interactive elements, and the positioning precision of this method is high.
S22: determining the arrival direction of the positioning trigger signal corresponding to the angle signal according to the angle signals of the plurality of second wireless interactive elements, and determining the position of a target user according to the arrival directions;
when angle signals of the second wireless interactive elements of the two or more sound boxes are obtained, the position of the target user can be determined according to the arrival direction represented by the angle signals.
In other embodiments, the second wireless interactive elements are bluetooth elements, and the number of the second wireless interactive elements is three. At this time, the relative position of the user can be determined according to the time difference of the three second wireless interactive elements receiving the positioning trigger signals and the distance between the three second wireless interactive elements.
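A time-difference positioning rule of this kind can be illustrated with a coarse numerical sketch (the grid search, room size, and propagation-speed parameter are illustrative assumptions; a real implementation would use a closed-form or least-squares TDOA solver):

```python
import math

def tdoa_locate(receivers, tdoas, speed, room=6.0, step=0.05):
    """Grid search over a square room for the point whose arrival-time
    differences (each measured relative to receiver 0) best match the
    observed ones. Coarse but dependency-free."""
    def residual(x, y):
        d0 = math.hypot(x - receivers[0][0], y - receivers[0][1])
        return sum(((math.hypot(x - rx, y - ry) - d0) / speed - dt) ** 2
                   for (rx, ry), dt in zip(receivers[1:], tdoas))
    n = int(room / step)
    best = min((residual(i * step, j * step), i * step, j * step)
               for i in range(n + 1) for j in range(n + 1))
    return best[1], best[2]
```

The test below uses an acoustic propagation speed only for numerical convenience; the same geometry applies to radio time differences with the appropriate speed.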
The embodiment realizes the positioning of the target user by using the wireless interactive elements. Compared with positioning technologies such as infrared positioning and ultrasonic positioning, it balances cost and positioning precision and has wide applicability.
Referring to fig. 3, a schematic flow chart of an implementation process of the sound effect control method according to the third embodiment of the present invention includes steps S31 to S34, where steps S32 to S34 are the same as steps S11 to S13 and are not repeated herein. The difference, step S31, is detailed as follows:
S31: when the movement of the target user is detected, controlling a first wireless interactive element on the target user to send out a positioning trigger signal.
In this embodiment, whether the position of the target user relative to the respective sound boxes needs to be determined again is decided based on the motion of the target user. A monitoring device is arranged corresponding to the target user and is used for monitoring the behavior of the target user and controlling the first wireless interactive element on the target user to send out a positioning trigger signal when the position of the target user changes. The monitoring device may be a displacement sensor, a camera, an infrared sensor, an acceleration sensor, or the like.
In one embodiment, the monitoring device is an acceleration sensor and the first wireless interactive element is a bluetooth element. The acceleration sensor and the Bluetooth element are both arranged on the body of the target user. When the target user moves, the acceleration sensor senses the acceleration change, a positioning trigger instruction is sent to the Bluetooth element, and the Bluetooth element sends a positioning trigger signal according to the positioning trigger instruction.
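This motion-triggered behaviour can be sketched as follows (the threshold value and the simple magnitude-versus-gravity test are illustrative assumptions, not part of the embodiment; `send_trigger` stands in for the Bluetooth element's transmission of the positioning trigger signal):

```python
import math

MOTION_THRESHOLD = 0.5  # m/s^2 deviation from rest; an assumed calibration value

def should_trigger(sample, gravity=9.81):
    """True when a 3-axis accelerometer sample (m/s^2) suggests motion:
    at rest the acceleration magnitude is close to gravity alone."""
    ax, ay, az = sample
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - gravity) > MOTION_THRESHOLD

def monitor(samples, send_trigger):
    """Scan accelerometer samples and fire send_trigger() (standing in
    for the Bluetooth positioning trigger signal) whenever motion is
    detected."""
    for s in samples:
        if should_trigger(s):
            send_trigger()
```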
Referring to fig. 4, a schematic flow chart of an implementation process of the sound effect control method according to the fourth embodiment of the present invention includes steps S41 to S44, where steps S42 to S44 are the same as steps S11 to S13 and are not repeated herein. The difference, step S41, is detailed as follows:
S41: detecting whether the power supply of the sound box is switched on, and if so, controlling the first wireless interactive element on the target user's body to send a positioning trigger signal;
In this embodiment, the sound box cannot acquire the position information of the target user while powered off or when the user first starts to use the sound box array. Therefore, when it is detected that the power supply of the sound box is switched on, the first wireless interactive element on the target user's body needs to send out a positioning trigger signal so as to position the target user; in this case the first wireless interactive element does not send the positioning trigger signal based on the movement of the target user. Specifically, the sound effect control device transmits a signal transmission instruction to the first wireless interactive element when detecting that the power supply of the sound box is switched on, and the first wireless interactive element transmits a positioning trigger signal after receiving the signal transmission instruction, so that the position of the target user is calculated after the second wireless interactive elements on the sound boxes generate positioning signals according to the positioning trigger signal.
The sound effect control device can control a second wireless interactive element on a certain sound box to transmit the signal transmission instruction, or a dedicated instruction transmitting element can transmit it. It should be noted that, after receiving the signal transmission instruction, the first wireless interactive element sends the instruction to its processor for identification processing, and then the processor controls the first wireless interactive element to send out the positioning trigger signal.
After the first wireless interactive element on the target user is controlled to send the positioning trigger signal, the following step is executed: acquiring positioning information generated when the second wireless interactive elements on the plurality of sound boxes receive the positioning trigger signal of the target user.
Referring to fig. 5, a schematic flow chart of an implementation process of the sound effect control method according to the fifth embodiment of the present invention includes steps S51 to S54, where steps S51 and S54 are the same as steps S11 and S13 respectively and are not repeated here. After S51, steps S52 and S53 are further included, which are detailed as follows:
S52: acquiring the number of indoor target users, and determining the position of an actual user based on a preset determination rule and the positions of the plurality of target users;
S53: determining distance information from each sound box to the actual user according to the position of the actual user, and calculating the sound wave emission delay time of each sound box according to the distance information.
In the present embodiment, when the number of indoor target users is plural, in order to take the audiovisual experience of each target user into account, the sound effect perceived by each target user is balanced as far as possible in combination with the users' selection. In order to distinguish the target users, each target user is provided with a unique identification code. When the relative position of a target user with respect to the sound boxes is determined according to the preset positioning rule, the identification code of the target user is sent to the second wireless interactive elements on the sound boxes through the positioning trigger signal, so that the relative positions of a plurality of target users can be determined separately. The user identification code may be the name of the first wireless interactive element corresponding to the target user, the product number of the first wireless interactive element, or other user-defined content, which is not limited herein.
In an application scenario, a motion sensor and a bluetooth element are arranged on a mobile terminal or a wearable intelligent device of a target user, and an identification code of the target user is represented by a device code of the mobile terminal or the wearable intelligent device, or is generated based on the mobile terminal or the wearable intelligent device.
The determination rule may be to take the center position of the N target users as the position of the actual user, and to calculate the relative position of the actual user from that center position. The actual user may be a virtual spatial position: when a plurality of target users exist in the calculation range, the actual user represents the center position of the plurality of target users, and when only one target user exists in the calculation range, the actual user is that target user. After the position of the actual user is determined, the sound wave emission delay time of each sound box is calculated according to the distance information from each sound box to the actual user, so that the sound waves emitted by the loudspeakers reach the position of the actual user at the same time. Because the position of the actual user is the center position of the plurality of target users, the audiovisual experience of each target user is similar.
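The center-position determination rule can be sketched as follows (a minimal Python sketch of the rule described above, which degenerates to the single user's own position when only one target user is in the calculation range):

```python
def actual_user_position(user_positions):
    """Center position of the target users inside the calculation range;
    with a single user this is simply that user's own position."""
    n = len(user_positions)
    return (sum(x for x, _ in user_positions) / n,
            sum(y for _, y in user_positions) / n)
```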
In the embodiment, the audio-visual experience of a plurality of target users can be considered simultaneously, so that the functions of the sound box are diversified, and the application scenes of the sound box are increased.
On the basis of the previous embodiment, step S52 includes step a and step b. Specifically, step a: acquiring the number of indoor target users, determining whether the number of target users exceeds a preset value, and sending an inquiry signal to a decision user among the target users when the number of target users exceeds the preset value. Step b: receiving a reply signal sent by the decision user corresponding to the inquiry signal, and determining the relative position of the actual user based on a preset determination rule, the reply signal, and the relative positions of the target users.
In this embodiment, in order to further improve the intelligence of the sound box array, a decision user is bound to the sound box array, and the decision user determines the actual user. Specifically, when a plurality of target users exist indoors, it is judged whether the number of target users exceeds a preset value. The preset value defaults to 1, in which case the sound box defaults to a single-user mode, so that the sound effect of the sound box is optimal and the user experience is improved.
The preset value can also be another value set by the user, for example 2 for a two-person household. When the number of target users exceeds the preset value, an inquiry signal is sent to the decision user. After receiving the inquiry signal, the decision user feeds back a reply signal according to the inquiry signal. The inquiry signal includes the accessed target users and a calculation-range request, and the reply signal includes the identification codes of the target users within the calculation range and the determination rule. Which target users' positions serve as the basis for determining the position of the actual user can thus be determined from the reply signal, so as to further satisfy the users with sound effect requirements.
For example, there are three target users A, B and C in the room, but only the sound effects of the two target users A and B are considered, so the target users A and B are within the calculation range. The determination rule may be to take the center position of the N target users within the calculation range as the position of the actual user, and to calculate the relative position of the actual user from that center position. The actual user may be a virtual spatial position: when a plurality of target users exist in the calculation range, the actual user represents their center position, and when only one target user exists in the calculation range, the actual user is that target user.
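Selecting the calculation range from the decision user's reply can be sketched as follows (the dictionary field name `in_range` and the identification-code keys are illustrative assumptions, not part of the embodiment):

```python
def positions_in_range(all_positions, reply):
    """Keep only the target users whose identification codes the decision
    user's reply marks as inside the calculation range, then take their
    center as the actual user's position."""
    selected = [all_positions[code] for code in reply["in_range"]]
    n = len(selected)
    return (sum(x for x, _ in selected) / n,
            sum(y for _, y in selected) / n)
```

With the A/B/C example above, a reply naming only A and B yields the midpoint of A and B as the actual user's position, and C is ignored.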
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 6 shows a sound box array 6 according to a sixth embodiment of the present invention, which includes a plurality of sound boxes 61 and a controller 62 disposed indoors. Each sound box 61 is provided with a second wireless interactive element 611 and a loudspeaker 612, and the controller 62 is electrically connected with the second wireless interactive elements 611. The second wireless interactive element 611 is used for receiving a positioning trigger signal of a target user and generating positioning information. The controller 62 is configured to determine the position of the target user according to the plurality of pieces of positioning information and a preset positioning rule, calculate the distance information from each sound box 61 to the target user according to the position of the target user, calculate the sound wave emission delay time of each sound box 61 according to the distance information, and output the corresponding sound wave emission delay time to each sound box 61 so that the sound waves emitted by the loudspeakers 612 reach the target user at the same time.
Fig. 7 is a wearable device 7 according to a seventh embodiment of the present invention, which includes a motion sensor 71, a first wireless interactive element 72, and a first processing unit 73, where the motion sensor 71 is configured to sense a motion of a user, and the first processing unit 73 is configured to control the first wireless interactive element 72 to send a positioning trigger signal when the motion sensor senses a motion of a target user.
The embodiment of the invention also provides a sound effect control device 8, and each unit included in the sound effect control device 8 is used for executing each step in the embodiment corresponding to the figure 1. Please refer to fig. 1 for the related description of the corresponding embodiment. Fig. 8 shows a schematic diagram of an audio effect control device 8 according to an eighth embodiment of the present invention, which includes:
the positioning module 81 is configured to acquire positioning information generated when second wireless interactive elements on the multiple speakers receive a positioning trigger signal of a target user, and determine the position of the target user according to the multiple positioning information and a preset positioning rule;
the calculating module 82 is used for calculating distance information from each sound box to the target user according to the position of the target user and calculating sound wave emission delay time of each sound box according to the distance information;
and the output control module 83 is configured to output the corresponding sound wave emission delay time to each sound box so that the sound waves emitted by each speaker reach the target user at the same time.
Further, the positioning module 81 includes a signal obtaining module 811 and a position determining module 812, where the signal obtaining module 811 is configured to obtain an angle signal generated when the antenna array of the second wireless interactive element receives the trigger signal;
the position determining module 812 is further configured to determine, according to the angle signals of the plurality of second wireless interaction elements, a direction of arrival of the positioning trigger signal corresponding to the angle signals, and determine a position of the target user according to the plurality of directions of arrival.
Further, the sound effect control device 8 further includes a user monitoring module 84, where the user monitoring module 84 is configured to control the first wireless interactive element on the target user to send a positioning trigger signal when the movement of the target user is detected.
further, the sound effect control device 8 further includes a power detection module 86, configured to detect whether the power of the sound box is turned on, and if the power of the sound box is turned on, control the first wireless interactive element on the body of the target user to send a positioning trigger signal.
Further, the sound effect control device 8 further comprises an inquiry module 85, and the inquiry module 85 is configured to send an inquiry signal to a decision user among the target users when the number of the target users exceeds a preset value;
the information acquisition module 811 is further configured to acquire the number of indoor target users;
a position determining module 812, configured to determine a position of an actual user based on a preset determination rule and positions of a plurality of target users;
and the calculating module 82 is further configured to determine distance information from each sound box to the actual user according to the position of the actual user.
Further, the location determining module 812 is further configured to determine whether the number of the target users exceeds a preset value, and send an inquiry signal to a decision user among the target users when the number of the target users exceeds the preset value;
the signal obtaining module 811 is further configured to receive a reply signal sent by the decision user corresponding to the inquiry signal;
the position determining module 812 is further configured to determine a relative position of the actual user based on a preset determination rule and the reply signal and relative positions of the plurality of target users.
The functions of the modules in the sound effect control device 8 are implemented corresponding to the steps in the sound effect control method described above, and their functions and implementation processes are not described in detail herein.
Fig. 9 is a schematic diagram of a hardware structure of the sound effect control device 9 according to the ninth embodiment of the present invention. As shown in fig. 9, the sound effect control device 9 of this embodiment includes: a processor 90, a memory 91, and a computer program 92, such as a sound effect control program, stored in the memory 91 and executable on the processor 90. The processor 90, when executing the computer program 92, implements the steps of the above-mentioned sound effect control method embodiments, such as steps S11 to S13 shown in fig. 1. Alternatively, the processor 90, when executing the computer program 92, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 81 to 83 shown in fig. 8.
Illustratively, the computer program 92 may be partitioned into one or more modules/units that are stored in the memory 91 and executed by the processor 90 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 92 in the sound effect control device 9. For example, the computer program 92 may be divided into a positioning module, a calculation module, and an output control module (modules in a virtual device), and the specific functions of each module are as follows:
the positioning module is used for acquiring positioning information generated when second wireless interactive elements on the plurality of sound boxes receive positioning trigger signals of a target user, and determining the position of the target user according to the plurality of positioning information and a preset positioning rule;
the calculation module is used for calculating the distance information from each sound box to the target user according to the position of the target user and calculating the sound wave emission delay time of each sound box according to the distance information;
and the output control module is used for outputting corresponding sound wave emission delay time to each sound box so that the sound waves emitted by each loudspeaker reach the target user at the same time.
The sound effect control device 9 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or another computing device. The sound effect control device 9 may include, but is not limited to, a processor 90 and a memory 91. It will be understood by those skilled in the art that fig. 9 is merely an example of the sound effect control device 9 and does not constitute a limitation of it; the device may include more or fewer components than those shown, some components may be combined, or different components may be used. For example, the sound effect control device 9 may also include input/output devices, network access devices, buses, etc.
The processor 90 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 91 may be an internal storage unit of the sound effect control device 9, such as a hard disk or an internal memory of the sound effect control device 9. The memory 91 may also be an external storage device of the sound effect control device 9, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash memory card provided on the sound effect control device 9. Further, the memory 91 may include both an internal storage unit and an external storage device of the sound effect control device 9. The memory 91 is used for storing the computer program and the other programs and data required by the sound effect control device 9. The memory 91 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit, and the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described again here.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above embodiments are only used to illustrate the technical solutions of the present invention and are not intended to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced; such modifications and substitutions do not cause the corresponding technical solutions to substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the protection scope of the present invention.