CN112492506A - Audio playing method and device, computer readable storage medium and robot - Google Patents

Audio playing method and device, computer readable storage medium and robot

Info

Publication number
CN112492506A
Authority
CN
China
Prior art keywords
robot
slave
audio
slave robot
channel data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910857938.8A
Other languages
Chinese (zh)
Inventor
谢吉东
熊友军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp
Priority to CN201910857938.8A
Publication of CN112492506A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02: Services making use of location information
    • H04W4/023: Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1679: Programme controls characterised by the tasks executed
    • B25J9/1682: Dual arm manipulator; Coordination of several manipulators
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • G01S5/22: Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00: Circuits for transducers, loudspeakers or microphones
    • H04R3/12: Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00: Stereophonic arrangements
    • H04R5/02: Spatial or constructional arrangements of loudspeakers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00: Stereophonic arrangements
    • H04R5/04: Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02: Services making use of location information
    • H04W4/025: Services making use of location information using location based information parameters
    • H04W4/026: Services making use of location information using location based information parameters using orientation information, e.g. compass
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W84/00: Network topologies
    • H04W84/18: Self-organising networks, e.g. ad-hoc networks or sensor networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Manipulator (AREA)

Abstract

The present application belongs to the field of robotics, and in particular relates to an audio playing method, an audio playing device, a computer-readable storage medium, and a robot. The method includes: sending a preset sound wave signal to each slave robot and receiving a feedback signal from each slave robot; determining the positional relationship between the master robot and each slave robot according to the feedback signals; receiving audio data to be played and splitting the audio data into individual channel data; and sending the corresponding channel data to each slave robot according to the positional relationship, the channel data being used for audio playback by each slave robot. Through the embodiments of the present application, the sound-effect rendering of audio data played simultaneously by multiple robots can be effectively improved without increasing hardware cost.

Description

Audio playing method and device, computer readable storage medium and robot
Technical Field
The present application belongs to the field of robotics, and in particular, to an audio playing method, an audio playing device, a computer-readable storage medium, and a robot.
Background
Audio playback is an important function of commercial robots, and common commercial robots each play audio independently. In commercial settings, current robots typically have only one audio output device, such as a loudspeaker, and are not equipped with a Digital Signal Processor (DSP) for sound-effect rendering. As a result, when multiple robots play audio together, the audio is almost free of sound-effect rendering, which easily bores listeners.
On the other hand, from the viewpoint of saving hardware cost, if sound effect rendering is implemented by adding a DSP rendering unit or an audio output device to an audio module of a robot, the design cost of each robot is inevitably increased.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present application and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
In view of this, embodiments of the present application provide an audio playing method, an audio playing apparatus, a computer-readable storage medium, and a robot, so as to solve the problem that, when multiple robots play audio together, no sound effect is rendered and listeners are easily bored.
In a first aspect of the embodiments of the present application, an audio playing method is provided, which is applied to a preset master robot, and the method includes:
sending preset sound wave signals to each slave robot, and receiving feedback signals of each slave robot;
determining the position relation between the master robot and each slave robot according to the feedback signals;
receiving audio data to be played, and splitting the audio data into various channel data;
and respectively sending corresponding channel data to each slave robot according to the position relation, wherein the channel data are used for audio playing of each slave robot.
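The four steps above can be sketched end to end in code. This is only an illustrative outline: the helper callables (`probe`, `split_channels`, `send`) and the simplistic left/right mapping are assumptions, not part of the patent.

```python
# Hypothetical sketch of the master robot's top-level flow.
# probe(sid) -> (azimuth_deg, distance_m); split_channels/send are stand-ins.

def play_audio_across_robots(slave_ids, audio_data, probe, split_channels, send):
    """Probe each slave, split the audio, and distribute channel data by position."""
    # Steps 1-2: send the preset sound-wave signal and derive position relations.
    positions = {sid: probe(sid) for sid in slave_ids}
    # Step 3: split the received audio into per-channel data.
    channels = split_channels(audio_data)  # e.g. {"left": ..., "right": ...}
    # Step 4: send each slave the channel matching its bearing.
    for sid, (azimuth_deg, _distance_m) in positions.items():
        channel = "left" if azimuth_deg < 0 else "right"  # simplistic mapping
        send(sid, channels[channel])
    return positions
```

A caller would supply the real probing, splitting, and transport routines of its robot platform.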
In some embodiments of the present application, the determining the position relationship between the master robot and the slave robots according to the feedback signals includes:
extracting a receiving time difference value of the preset sound wave signals received by each slave robot and azimuth information of the master robot relative to each slave robot from the feedback signals;
determining the distance value between each slave robot and the master robot according to the receiving time difference;
and determining the position relation between the master robot and each slave robot according to the azimuth information and the distance value.
In some embodiments of the present application, before sending the corresponding channel data to the slave robots respectively according to the position relationship, the method further includes:
establishing a matching relation table between position relations and channel data;
the sending of the corresponding channel data to each slave robot according to the position relationship specifically includes:
searching the matching relation table according to the position relation between the master robot and each slave robot, and determining the channel data matched with the position relation;
and sending the corresponding channel data to each slave robot according to the matched channel data.
In some embodiments of the present application, after the transmitting the corresponding channel data to each of the slave robots according to the position relationship, the method further includes:
determining the starting time of playing the audio data by each slave robot according to the position relation;
and sending corresponding playing starting instructions to the slave robots according to the starting time, wherein the playing starting instructions are used for indicating the slave robots to start audio playing.
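One plausible reading of position-dependent start times is acoustic delay compensation: farther robots start first so that all wavefronts arrive together at a reference point. The patent does not state this rule, so the sketch below is an assumption.

```python
def start_offsets(distances_m, speed_of_sound=343.0):
    """Per-robot start-time offsets (seconds) so that sound from every robot
    arrives at the reference point simultaneously: the farthest robot starts
    immediately, nearer robots wait (farthest - d) / c longer."""
    farthest = max(distances_m.values())
    return {rid: (farthest - dist) / speed_of_sound
            for rid, dist in distances_m.items()}
```

For the example distances in fig. 2, robot E (50 m) would start first and robot A (20 m) roughly 87 ms later.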
In some embodiments of the present application, after the transmitting the corresponding channel data to each of the slave robots according to the position relationship, the method further includes:
receiving a playing start feedback instruction of each slave robot to the playing start instruction;
and when the playing start feedback instruction of a specific slave robot is not received within the preset time, playing the channel data corresponding to the specific slave robot.
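The timeout-and-fallback behaviour described above can be sketched as follows. The callback names, polling approach, and one-second default timeout are assumptions for illustration.

```python
import time

def await_ack_or_play_locally(ack_received, play_locally,
                              timeout_s=1.0, poll_s=0.05):
    """Wait up to timeout_s for a slave's playback-start acknowledgement;
    if none arrives, the master plays that slave's channel data itself."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if ack_received():
            return True        # slave confirmed playback started
        time.sleep(poll_s)
    play_locally()             # fallback described in the embodiment
    return False
```

In practice `ack_received` would check the wireless LAN for the feedback instruction, and `play_locally` would route the orphaned channel to the master's own speaker.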
In some embodiments of the present application, after the transmitting the corresponding channel data to each of the slave robots according to the position relationship, the method further includes:
determining target volume values of the audio data played by the slave robots according to the position relation;
and sending a corresponding volume adjusting instruction to each slave robot according to the target volume value, wherein the volume adjusting instruction is used for indicating each slave robot to adjust the volume.
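One plausible rule for the position-dependent target volume is inverse-distance (1/r) compensation, boosting farther robots so that each sounds roughly equally loud at a reference point. The patent does not specify the rule, so this free-field model is only an assumption.

```python
import math

def target_volume_db(distance_m, reference_db=0.0, reference_m=1.0):
    """Gain (dB) needed for a robot at distance_m to match the level a
    robot at reference_m would produce: free-field sound pressure falls
    about 6 dB per doubling of distance (1/r law)."""
    return reference_db + 20.0 * math.log10(distance_m / reference_m)
```

A robot twice as far away would be boosted by about 6 dB under this model.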
In some embodiments of the present application, prior to receiving the audio data to be played, the method further comprises:
creating a wireless communication network with a mobile terminal;
the receiving of the audio data to be played includes:
receiving the audio data from the mobile terminal through the wireless communication network.
In a second aspect of the embodiments of the present application, an audio playing device is provided, which is applied to a preset master robot, and the audio playing device includes:
the first sending module is used for sending preset sound wave signals to each slave robot and receiving feedback signals of each slave robot;
the position relation determining module is used for determining the position relation between the master robot and each slave robot according to the feedback signals;
the receiving module is used for receiving audio data to be played and splitting the audio data into all channel data;
and the second sending module is used for sending corresponding channel data to each slave robot according to the position relation, wherein the channel data are used for audio playing of each slave robot.
In some embodiments of the present application, the position relationship determination module may include:
an extraction unit, configured to extract, from the feedback signals, a receiving time difference value at which each slave robot receives the preset acoustic wave signal and orientation information of the master robot with respect to each slave robot;
the first calculating unit is used for determining the distance value between each slave robot and the master robot according to the receiving time difference;
and the second calculation unit is used for determining the position relation between the master robot and each slave robot according to the azimuth information and the distance value.
In some embodiments of the present application, the apparatus further comprises:
the establishing unit is used for establishing a matching relation table between position relations and channel data;
the searching unit is used for searching the matching relation table according to the position relation between the master robot and each slave robot and determining the channel data matched with the position relation;
and the first sending unit is used for sending the corresponding channel data to each slave robot according to the matched channel data.
In some embodiments of the present application, the apparatus further comprises:
the third calculating unit is used for determining the starting time of playing the audio data by each slave robot according to the position relation;
and the second sending unit is used for sending corresponding playing starting instructions to the slave robots according to the starting time, and the playing starting instructions are used for indicating the slave robots to start audio playing.
In some embodiments of the present application, the apparatus further comprises:
a first receiving unit configured to receive a playback start feedback instruction for the playback start instruction from each slave robot;
and the playing unit is used for playing the sound channel data corresponding to the specific slave robot when the playing start feedback instruction of the specific slave robot is not received within the preset time.
In some embodiments of the present application, the apparatus further comprises:
the fourth calculating unit is used for determining target volume values of the audio data played by the slave robots according to the position relation;
and the third sending unit is used for sending corresponding volume adjusting instructions to the slave robots according to the target volume value, wherein the volume adjusting instructions are used for instructing the slave robots to adjust the volume.
In a third aspect of the embodiments of the present application, a computer-readable storage medium is provided, where computer-readable instructions are stored, and when executed by a processor, the computer-readable instructions implement the steps of any of the audio playing methods described above.
In a fourth aspect of the embodiments of the present application, there is provided a robot, including a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor implements the steps of the audio playing method according to any one of the above items when executing the computer readable instructions.
Compared with the prior art, the embodiments of the present application have the following advantages: a preset sound wave signal is sent to each slave robot, and a feedback signal is received from each slave robot; the position relation between the master robot and each slave robot is determined according to the feedback signals; audio data to be played is received and split into individual channel data; and the corresponding channel data is sent to each slave robot according to the position relation, the channel data being used for audio playback by each slave robot. Through the embodiments of the present application, the sound-effect rendering of audio data played simultaneously by multiple robots can be effectively improved without increasing hardware cost.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a flow chart of an audio playing method in an embodiment of the present application;
FIG. 2 is a schematic diagram of a robot position relationship in one embodiment of the present application;
FIG. 3 is a flowchart illustrating the step S120 according to an embodiment of the present application;
FIG. 4 is a detailed flowchart of an audio playing method according to another embodiment of the present application;
FIG. 5 is a flowchart illustrating the step S140 according to an embodiment of the present application;
fig. 6 is a structural diagram of an embodiment of an audio playing apparatus in an embodiment of the present application;
fig. 7 is a schematic block diagram of a robot in an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the embodiments described below are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an implementation flow of an audio playing method according to an embodiment of the present application. The audio playing method as shown in the figure can comprise the following steps:
and step S110, sending preset sound wave signals to each slave robot, and receiving feedback signals of each slave robot.
The execution subject of this embodiment is a preset master robot. The master robot is the robot, selected by a specific algorithm after the robots complete an ad hoc network, that executes the audio playing method. The algorithm includes, but is not limited to, the Bully algorithm, a coordinator (master node) election algorithm: a node claims to be the master, other nodes may accept the claim or reject it and enter the election themselves, and the node accepted by all other nodes becomes the master node.
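A minimal sketch of a Bully-style election as referenced above: among reachable nodes, the one with the highest identifier becomes master. The integer node identifiers and the `alive` reachability predicate are illustrative assumptions, not the patent's protocol.

```python
def bully_elect(node_ids, alive):
    """Return the highest-ID reachable node as the elected master,
    in the spirit of the Bully algorithm: any live higher-ID node
    'bullies' lower-ID candidates out of the election."""
    candidates = [nid for nid in node_ids if alive(nid)]
    if not candidates:
        raise RuntimeError("no reachable nodes to elect")
    return max(candidates)
```

When the current master later becomes unreachable (e.g. low battery), rerunning the election with the updated `alive` predicate picks a replacement, matching the re-selection behaviour described below.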
A slave robot is any robot other than the master robot and mainly cooperates with the master robot. However, when the master robot cannot perform its duties due to insufficient power or the like, one of the slave robots may be selected again by the algorithm to replace the previous master robot.
The preset sound wave signal is a sound wave signal actively sent by the master robot to each slave robot, once the master robot has been determined, to detect the distance and direction between each slave robot and the master robot. Sound waves propagate in all directions through various media; the particles they reach vibrate near their equilibrium positions along the direction of propagation.
As shown in fig. 2, in one embodiment there are four slave robots, named A, B, D and E, and one master robot, named C. The master robot C sends sound wave signals in all directions to detect the distance and orientation of the four slave robots A, B, D and E relative to the current master robot C.
The feedback signals are the response signals returned by the slave robots after receiving the sound wave signal sent by the master robot.
In an embodiment of the present application, before step S110, the method further includes:
and carrying out wireless Mesh ad hoc network with each slave robot to establish a wireless local area network, wherein the wireless local area network is used for the mutual communication among the robots.
The receiving of the feedback signals of the slave robots according to the preset sound wave signals comprises:
and receiving feedback signals of the slave robots according to the preset sound wave signals through the wireless local area network.
It is understood that a wireless Mesh network is a wireless local area network and belongs to the class of mesh, or multi-hop, networks. In a wireless Mesh network all nodes are interconnected, each node has multiple connection channels, and the nodes together form an integral network in which all robots can communicate wirelessly with one another.
The advantage of this arrangement is that, when there are multiple robots, the master robot and the slave robots form a wireless local area network through the wireless Mesh ad hoc network, which facilitates information exchange among the robots.
And step S120, determining the position relation between the master robot and each slave robot according to the feedback signals.
As shown in fig. 3, in an embodiment of the present application, the step S120 specifically includes the following steps:
step S1201, extracting, from the feedback signal, a receiving time difference value at which each slave robot receives the preset acoustic wave signal and azimuth information of the master robot with respect to each slave robot.
It is understood that the receiving time difference is derived from the transmission time taken from when the master robot sends the sound wave until each slave robot receives the preset sound wave signal. For example, if the slave robots A, B, D and E receive the preset sound wave signal after 0.2 s, 0.35 s, 0.40 s and 0.6 s respectively, the differences between robot A's receiving time and those of robots B, D and E are 0.15 s, 0.20 s and 0.40 s.
In one embodiment of the present application, the orientation information is calculated by each slave robot using microphone array techniques. When the target signals received by the elements of a microphone array come from the same sound source, the channel signals are strongly correlated. Therefore, by computing the correlation function between each pair of signals, the time delay between the two microphones' observed signals can be determined, and the orientation of the master robot relative to each slave robot can be further determined.
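The pairwise correlation described above can be illustrated with a brute-force delay estimator between two microphone signals. This is a didactic sketch only; a real microphone-array implementation would typically use GCC-PHAT over an FFT rather than this O(n·lag) loop.

```python
def estimate_delay(sig_a, sig_b, max_lag):
    """Return the lag (in samples) of sig_b relative to sig_a that
    maximises their cross-correlation, i.e. the inter-microphone delay."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a * sig_b[i + lag]
                    for i, a in enumerate(sig_a)
                    if 0 <= i + lag < len(sig_b))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Multiplying the estimated sample lag by the sampling period gives the time delay from which the source bearing is derived.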
After receiving the orientation information of the master robot relative to each slave robot, the master robot automatically converts it into the orientation of each slave robot relative to the master robot. For example, as shown in fig. 2, if the orientation of the master robot C relative to the slave robot A is 15 degrees east of south, the orientation of the slave robot A relative to the master robot C is 15 degrees west of north; if the orientation of the master robot C relative to the slave robot B is 70 degrees east of south, the orientation of the slave robot B relative to the master robot C is 70 degrees west of north; if the orientation of the master robot C relative to the slave robot E is 75 degrees west of south, the orientation of the slave robot E relative to the master robot C is 75 degrees east of north; and if the orientation of the master robot C relative to the slave robot D is due west, the orientation of the slave robot D relative to the master robot C is due east.
Of course, the orientation information may be other forms of orientation information of the slave robot relative to the master robot, which is not limited in the embodiments of the present application.
And step S1202, determining the distance value between each slave robot and the master robot according to the receiving time difference value.
As shown in fig. 2, in an embodiment of the present application, the distance value refers to a linear distance value between each slave robot and the master robot. For example, the linear distance between the slave robot a and the master robot C is 20 meters, and the linear distance between the slave robot B and the master robot C is 35 meters; the linear distance between the slave robot D and the master robot C is 40 meters; the straight-line distance from the slave robot E to the master robot C is 50 meters.
Of course, the distance value may also be a curve distance or other distances, which is not limited in this application.
It is understood that a time delay estimation (TDE) algorithm together with microphone array techniques may be employed to determine each slave robot's distance from the master robot according to its receiving time difference.
This has the advantage that the distance value of each slave robot from the master robot can be determined more accurately.
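The distance computation itself reduces to multiplying the signal's travel time by its propagation speed. The 343 m/s figure below is the nominal speed of sound in air at about 20 °C and is an assumption; the example distances in the text are illustrative rather than derived from it.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C (assumed; not fixed by the patent)

def distance_from_travel_time(travel_time_s, speed=SPEED_OF_SOUND):
    """Straight-line distance covered by the probe signal in travel_time_s."""
    return speed * travel_time_s
```

In practice the measured receiving time would first be corrected for any fixed processing latency before conversion.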
Step S1203, determining a position relationship between the master robot and each of the slave robots according to the orientation information and the distance values.
The positional relationship refers to a spatial positional relationship of each slave robot with respect to the master robot, and includes orientation information of each slave robot and a distance value from the master robot.
In one embodiment of the present application, the positional relationship between the slave robot A and the master robot C is: located 15 degrees west of north of the master robot C, at a straight-line distance of 20 meters. The positional relationship between the slave robot B and the master robot C is: located 70 degrees west of north of the master robot C, at a straight-line distance of 35 meters. The positional relationship between the slave robot D and the master robot C is: located due east of the master robot C, at a straight-line distance of 40 meters. The positional relationship between the slave robot E and the master robot C is: located 75 degrees east of north of the master robot C, at a straight-line distance of 50 meters.
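For downstream use, a bearing-plus-distance position relation can be converted to planar coordinates centred on the master robot. The "degrees east of north" bearing convention below is an assumption matching the example above (negative values meaning west of north).

```python
import math

def position_from_bearing(bearing_deg_east_of_north, distance_m):
    """Convert a bearing (degrees east of north) and distance into
    (east, north) coordinates relative to the master robot."""
    rad = math.radians(bearing_deg_east_of_north)
    return (distance_m * math.sin(rad), distance_m * math.cos(rad))
```

Robot D from the example (due east, 40 m) would map to roughly (40, 0) under this convention.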
Step S130, receiving audio data to be played, and splitting the audio data into each channel data.
It should be noted that the audio data to be played may be audio data that the user transmits to the master robot through a device such as a mobile terminal or a server. The specific transmission carrier and transmission method of the audio data are not limited by this embodiment.
Audio data refers to digitized sound data. It is obtained by performing analog-to-digital conversion (ADC) on a continuous analog audio signal, for example from a microphone, at a certain sampling frequency; it is played by performing digital-to-analog conversion (DAC) to turn it back into an analog audio signal for output. Audio data may be monaural, two-channel, or multi-channel. A sound channel is an independent audio signal collected or played back at a particular spatial position during recording or playback, so the number of channels equals the number of sound sources during recording, or the number of corresponding speakers during playback. In theory, a specific sound-effect technology could reproduce sound with complete fidelity, but this cannot be achieved in practice, since fully faithful reproduction would require an infinite number of sound collectors, channels, and speakers.
Mono (single-channel) refers to picking up sound with one microphone and playing it back with one speaker. In a mono recording, audio signals from different directions are mixed into a single waveform, recorded by the recording device, and later reproduced by a loudspeaker. With mono audio equipment, a listener can perceive only the front-to-back position of the sound and its timbre and volume, but cannot perceive lateral movement of the sound from left to right. Compared with real natural sound, mono playback is distorted: the sound is dry, with no layering and no sense of scene, so mono is generally used for listening to news broadcasts.
Two-channel (stereo) audio means that the audio data comprises two channels carrying two different waveforms, with a different phase difference at each moment. During playback, the different sounds transmitted over the two channels combine, giving the playback a theater-like, lifelike effect.
Multi-channel refers to audio data that includes three or more channels; for example, 5.1 multi-channel audio refers to surround audio having five full-range channels and one subwoofer channel.
It should be noted that, when the audio data is two-channel or multi-channel data, it needs to be split into the corresponding channel data, and each channel's data is then transmitted and played separately. The splitting may be performed, for example, by embedding existing audio-processing software such as Audacity in the master robot and splitting the channels of the audio data received from the mobile terminal.
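As a hedged sketch of the splitting step (interleaved PCM storage is an assumption here; the embodiment does not specify a storage format), the following function de-interleaves a flat list of samples into per-channel lists:

```python
def split_channels(interleaved, n_channels):
    """De-interleave PCM samples: sample i of channel c sits at
    index i * n_channels + c in the flat list."""
    if len(interleaved) % n_channels != 0:
        raise ValueError("sample count is not a multiple of the channel count")
    return [interleaved[c::n_channels] for c in range(n_channels)]

# Example: 4-channel audio, two frames of samples
frames = [10, 20, 30, 40,   # frame 0: front-left, left, right, front-right
          11, 21, 31, 41]   # frame 1
front_left, left, right, front_right = split_channels(frames, 4)
```

Each resulting list can then be transmitted to, and played by, a different slave robot.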
And step S140, respectively sending corresponding channel data to each slave robot according to the position relation, wherein the channel data are used for audio playing of each slave robot.
In one embodiment of the present application, the master robot determines channel data corresponding to each slave robot according to the determined position and distance between each slave robot and the master robot, and transmits the channel data to each slave robot.
The advantage of this is that each slave robot is guaranteed to play the channel data appropriate to its own position, which improves the playback effect of the audio data as a whole.
As shown in fig. 4, in an embodiment of the present application, before step S140, the method further includes: and step S135, establishing a matching relation table of the position relation and the channel data.
In an embodiment of the application, the matching relationship table between positional relationships and channel data is preset in the mobile terminal and sent to the master robot through the wireless network for use.
With continued reference to fig. 2, in a wireless network having a master robot C and 4 slave robots, the positional relationship of each slave robot with respect to the master robot C and the channel data corresponding thereto are shown in table 1:
In this example, the positional relationship includes the orientation of each slave robot with respect to the master robot and the straight-line distance between them, and the channel data is derived from the audio data that the master robot acquires from the mobile terminal, which may be of various types. This example describes the case in which that audio data contains 4 or 6 channels. That is, when the audio data acquired by master robot C from the mobile terminal is 4-channel audio, the channel data played by the slave robots are the front-left, left, right, and front-right channels, respectively. When the audio data is 6-channel audio, the channels played by the slave robots are likewise the front-left, left, right, and front-right channels, the same as in the 4-channel case, except that the 6-channel audio further includes a center channel and a 0.1 subwoofer channel, both of which can be played by the master robot itself.
Slave robot A: 15 degrees west of north of master robot C, 20 meters, front-left channel
Slave robot B: 70 degrees west of north of master robot C, 35 meters, left channel
Slave robot D: due east of master robot C, 40 meters, right channel
Slave robot E: 75 degrees east of north of master robot C, 50 meters, front-right channel
TABLE 1 Correspondence between positional relationships and channel data
In an embodiment of the present application, step S140 specifically includes the following steps:
Step S1401, searching the matching relationship table according to the positional relationship between the master robot and each slave robot, and determining the channel data matching each positional relationship.
It can be understood that, once the positional relationship between the master robot and each slave robot is known, the channel data matching that positional relationship can be obtained directly. For example, by table lookup, the channel data matched with slave robot A is the front-left channel; with slave robot B, the left channel; with slave robot D, the right channel; and with slave robot E, the front-right channel.
Step S1402, sending the corresponding channel data to each slave robot according to the matched channel data.
It can be understood that, after the channel data matching each positional relationship is found, each channel's data can be sent over the wireless ad hoc network to the slave robot at the corresponding position, so that the slave robot can receive and play it.
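The lookup and dispatch of steps S1401 and S1402 can be sketched as a simple dictionary mapping (a hypothetical illustration; the keys and channel names follow the example of Table 1 and are not mandated by the embodiment):

```python
# Hypothetical matching relationship table: positional relationship -> channel.
# Keys are (bearing relative to master robot C, straight-line distance in meters).
MATCH_TABLE = {
    ("15 deg west of north", 20): "front_left",
    ("70 deg west of north", 35): "left",
    ("due east", 40): "right",
    ("75 deg east of north", 50): "front_right",
}

def channel_for(position):
    """Step S1401: look up the channel matching a slave robot's position."""
    return MATCH_TABLE[position]

def dispatch(channel_data, slave_positions):
    """Step S1402: pair each slave robot with the channel data it should play."""
    return {name: channel_data[channel_for(pos)]
            for name, pos in slave_positions.items()}

slaves = {"A": ("15 deg west of north", 20), "D": ("due east", 40)}
data = {"front_left": b"FL-samples", "left": b"L-samples",
        "right": b"R-samples", "front_right": b"FR-samples"}
assignments = dispatch(data, slaves)
```

In a real deployment the values in `data` would be the per-channel streams produced by the splitting of step S130, and each assignment would be transmitted over the wireless ad hoc network.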
In an embodiment of the present application, after step S140, the method further includes:
and determining the starting time of playing the audio data by each slave robot according to the position relation.
And sending corresponding playing starting instructions to the slave robots according to the starting time, wherein the playing starting instructions are used for indicating the slave robots to start audio playing.
It will be appreciated that the further each slave robot is from the master robot, the earlier the start time for playing the audio data should be. Conversely, the closer the respective slave robots are to the master robot, the later the start time of playing the audio data should be.
The advantage of this is that the master robot can calculate, from each slave robot's orientation and distance, the best start time for that robot to play its channel data, improving the overall rendering effect of the audio playback.
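One plausible way to realize the rule above (an assumption for illustration; the embodiment does not fix a formula) is to start each slave robot earlier by its acoustic propagation delay d / c, so that all wavefronts arrive at the master robot's position together:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def start_offsets(distances_m):
    """Return per-robot start offsets in seconds, relative to the farthest
    robot. The farthest robot starts first (offset 0); closer robots are
    delayed so that all wavefronts reach the master robot simultaneously."""
    max_delay = max(distances_m.values()) / SPEED_OF_SOUND
    return {name: max_delay - d / SPEED_OF_SOUND
            for name, d in distances_m.items()}

# Straight-line distances from the embodiment (meters)
offsets = start_offsets({"A": 20, "B": 35, "D": 40, "E": 50})
```

With the distances of the embodiment, slave robot E (50 m) starts first and slave robot A (20 m) starts last, about 87 ms later.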
In an embodiment of the present application, after step S140, the method further includes:
receiving a playing start feedback instruction of each slave robot to the playing start instruction;
and when the playing start feedback instruction of the specific slave robot is not received within the preset time, playing the sound channel data corresponding to the specific slave robot.
It is understood that the preset time may be, for example, 8 seconds or 10 seconds, but is not limited thereto; it may be set according to actual needs and is not limited herein.
The advantage of this is that, based on each slave robot's playback-start status, the master robot can play a channel itself when a particular slave robot cannot (for example, because of low battery), improving the fault tolerance of the audio playback as a whole.
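The fallback can be sketched as follows (a hypothetical illustration; the feedback transport and the master's playback routine are assumptions, modeled here as plain data and a callable):

```python
PRESET_TIMEOUT = 8.0  # seconds; could also be 10, per the embodiment

def play_missing_channels(slaves, received_feedback, play_locally):
    """For every slave robot whose playback-start feedback did not arrive
    within the preset time, play its channel data on the master robot.
    `received_feedback` maps robot name -> True/False (assumed to have been
    gathered elsewhere within PRESET_TIMEOUT); `play_locally` is the
    master robot's own playback callable."""
    fallback = []
    for name, channel_data in slaves.items():
        if not received_feedback.get(name, False):
            play_locally(channel_data)
            fallback.append(name)
    return fallback

played = []
names = play_missing_channels(
    {"A": b"FL", "B": b"L", "D": b"R", "E": b"FR"},
    {"A": True, "B": False, "D": True, "E": True},
    played.append,
)
```

Here slave robot B sent no feedback, so the master robot plays B's left-channel data itself.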
In an embodiment of the present application, after step S140, the method further includes:
and determining the target volume value of the audio data played by each slave robot according to the position relation.
And sending a corresponding volume adjusting instruction to each slave robot according to the target volume value, wherein the volume adjusting instruction is used for indicating each slave robot to adjust the volume.
It will be appreciated that the farther a slave robot is from the master robot, the larger the target volume value at which it plays the audio data should be. Conversely, the closer a slave robot is to the master robot, the smaller the target volume value should be.
The advantage of this is that the master robot can calculate, from each slave robot's orientation and distance, the optimum playback volume for that robot's channel data, improving the overall rendering effect of the audio playback.
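A simple way to set such a target volume (an assumption for illustration; the embodiment does not specify an attenuation model) is to scale each robot's gain linearly with its distance from a reference point, compensating for free-field attenuation, and clamp the result to the hardware maximum:

```python
def target_volumes(distances_m, base_volume=0.3, reference_m=20.0, max_volume=1.0):
    """Scale each slave robot's volume with distance so that farther robots
    play louder, compensating distance attenuation toward the master robot's
    position. base_volume applies at reference_m; values are clamped to
    max_volume. All parameter values here are illustrative assumptions."""
    volumes = {}
    for name, d in distances_m.items():
        gain = base_volume * (d / reference_m)  # linear-with-distance model
        volumes[name] = min(gain, max_volume)   # clamp to hardware maximum
    return volumes

# Straight-line distances from the embodiment (meters)
vols = target_volumes({"A": 20, "B": 35, "D": 40, "E": 50})
```

With these assumed parameters, slave robot E (50 m) plays at the highest volume and slave robot A (20 m) at the lowest, matching the rule stated above.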
In an embodiment of the present application, before step S120, the method further includes:
creating a wireless communication network with a mobile terminal;
the receiving of the audio data to be played includes:
receiving the audio data from the mobile terminal through the wireless communication network.
It is understood that the wireless communication includes, but is not limited to, Bluetooth communication, ZigBee communication, RFID communication, and WiFi communication.
This has the advantage that the mobile terminal can establish a separate communication connection with the main robot so that the mobile terminal can transmit the audio data to be played to the main robot.
According to the embodiments of the present application, a preset sound wave signal is sent to each slave robot, and feedback signals from the slave robots are received; the positional relationship between the master robot and each slave robot is determined according to the feedback signals; audio data to be played is received and split into individual channel data; and the corresponding channel data is sent to each slave robot according to the positional relationship, for audio playback by each slave robot. Through the embodiments of the present application, the rendering effect of audio data played simultaneously by multiple robots can be effectively improved without increasing hardware cost.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a structural diagram of an embodiment of an audio playing apparatus provided in an embodiment of the present application, which corresponds to the audio playing method described in the foregoing embodiments.
In one embodiment of the present application, the audio data playing apparatus includes:
the first sending module is used for sending preset sound wave signals to each slave robot and receiving feedback signals of each slave robot;
the position relation determining module is used for determining the position relation between the master robot and each slave robot according to the feedback signals;
the receiving module is used for receiving audio data to be played and splitting the audio data into all channel data;
and the second sending module is used for sending corresponding channel data to each slave robot according to the position relation, wherein the channel data are used for audio playing of each slave robot.
In one embodiment of the present application, the position relationship determination module may include:
an extraction unit, configured to extract, from the feedback signals, a receiving time difference value at which each slave robot receives the preset acoustic wave signal and orientation information of the master robot with respect to each slave robot;
the first calculating unit is used for determining the distance value between each slave robot and the master robot according to the receiving time difference;
and the second calculation unit is used for determining the position relation between the master robot and each slave robot according to the azimuth information and the distance value.
In one embodiment of the present application, the apparatus further comprises:
the establishing unit is used for establishing a position relation and channel data matching relation table;
the searching unit is used for searching the matching relation table according to the position relation between the master robot and each slave robot and determining the channel data matched with the position relation;
and the first sending unit is used for sending the corresponding channel data to each slave robot according to the matched channel data.
In one embodiment of the present application, the apparatus further comprises:
the third calculating unit is used for determining the starting time of playing the audio data by each slave robot according to the position relation;
and the second sending unit is used for sending corresponding playing starting instructions to the slave robots according to the starting time, and the playing starting instructions are used for indicating the slave robots to start audio playing.
In one embodiment of the present application, the apparatus further comprises:
a first receiving unit configured to receive a playback start feedback instruction for the playback start instruction from each slave robot;
and the playing unit is used for playing the sound channel data corresponding to the specific slave robot when the playing start feedback instruction of the specific slave robot is not received within the preset time.
In one embodiment of the present application, the apparatus further comprises:
the fourth calculating unit is used for determining target volume values of the audio data played by the slave robots according to the position relation;
and the third sending unit is used for sending corresponding volume adjusting instructions to the slave robots according to the target volume value, wherein the volume adjusting instructions are used for instructing the slave robots to adjust the volume.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, modules and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Fig. 7 shows a schematic block diagram of a robot provided in an embodiment of the present application, and only a part related to the embodiment of the present application is shown for convenience of explanation.
As shown in fig. 7, the robot 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72 stored in said memory 71 and executable on said processor 70. The processor 70, when executing the computer program 72, implements the steps in the above-mentioned audio playing method embodiments, such as the steps S110 to S140 shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 610 to 640 shown in fig. 6.
Illustratively, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 72 in the robot 7.
It will be appreciated by those skilled in the art that fig. 7 is merely an example of the robot 7 and does not constitute a limitation of the robot 7, which may include more or fewer components than those shown, combine some components, or have different components; for example, the robot 7 may further include input and output devices, network access devices, buses, etc.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the robot 7, such as a hard disk or a memory of the robot 7. The memory 71 may also be an external storage device of the robot 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the robot 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the robot 7. The memory 71 is used for storing the computer program and other programs and data required by the robot 7. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An audio playing method is applied to a preset main robot, and the method comprises the following steps:
sending preset sound wave signals to each slave robot, and receiving feedback signals of each slave robot;
determining the position relation between the master robot and each slave robot according to the feedback signals;
receiving audio data to be played, and splitting the audio data into various channel data;
and respectively sending corresponding channel data to each slave robot according to the position relation, wherein the channel data are used for audio playing of each slave robot.
2. The audio playing method according to claim 1, wherein said determining the position relationship between the master robot and each slave robot according to the feedback signal comprises:
extracting a receiving time difference value of the preset sound wave signals received by each slave robot and azimuth information of the master robot relative to each slave robot from the feedback signals;
determining the distance value between each slave robot and the master robot according to the receiving time difference; and determining the position relation between the master robot and each slave robot according to the position information and the distance value.
3. The audio playback method according to claim 1, wherein before the respective slave robots are respectively sent the corresponding channel data according to the positional relationship, the method further comprises:
establishing a position relation and channel data matching relation table;
the sending of the corresponding channel data to each slave robot according to the position relationship specifically includes:
searching the matching relation table according to the position relation between the master robot and each slave robot, and determining the channel data matched with the position relation;
and sending the corresponding channel data to each slave robot according to the matched channel data.
4. The audio playing method according to claim 1, further comprising, after sending the corresponding channel data to each of the slave robots according to the positional relationship, respectively:
determining the starting time of playing the audio data by each slave robot according to the position relation;
and sending corresponding playing starting instructions to the slave robots according to the starting time, wherein the playing starting instructions are used for indicating the slave robots to start audio playing.
5. The audio playing method according to claim 4, further comprising, after sending the corresponding channel data to each of the slave robots according to the positional relationship, respectively:
receiving a playing start feedback instruction of each slave robot to the playing start instruction;
and when the playing start feedback instruction of the specific slave robot is not received within the preset time, playing the sound channel data corresponding to the specific slave robot.
6. The audio playing method according to claim 1, further comprising, after sending the corresponding channel data to each of the slave robots according to the positional relationship, respectively:
determining target volume values of the audio data played by the slave robots according to the position relation;
and sending a corresponding volume adjusting instruction to each slave robot according to the target volume value, wherein the volume adjusting instruction is used for indicating each slave robot to adjust the volume.
7. The audio playback method according to any one of claims 1 to 6, wherein, before receiving the audio data to be played back, the method further comprises:
creating a wireless communication network with a mobile terminal;
the receiving of the audio data to be played includes:
receiving the audio data from the mobile terminal through the wireless communication network.
8. An audio playing apparatus, applied to a preset master robot, characterized in that the audio playing apparatus comprises:
the first sending module is used for sending preset sound wave signals to each slave robot and receiving feedback signals of each slave robot;
the position relation determining module is used for determining the position relation between the master robot and each slave robot according to the feedback signals;
the receiving module is used for receiving audio data to be played and splitting the audio data into all channel data;
and the second sending module is used for sending corresponding channel data to each slave robot according to the position relation, wherein the channel data are used for audio playing of each slave robot.
9. A computer-readable storage medium storing computer-readable instructions, which when executed by a processor implement the steps of the audio playback method according to any one of claims 1 to 7.
10. A robot comprising a memory, a processor and computer readable instructions stored in the memory and executable on the processor, characterized in that the processor, when executing the computer readable instructions, implements the steps of the audio playback method according to any of claims 1 to 7.
CN201910857938.8A 2019-09-11 2019-09-11 Audio playing method and device, computer readable storage medium and robot Pending CN112492506A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910857938.8A CN112492506A (en) 2019-09-11 2019-09-11 Audio playing method and device, computer readable storage medium and robot


Publications (1)

Publication Number Publication Date
CN112492506A true CN112492506A (en) 2021-03-12

Family

ID=74920095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910857938.8A Pending CN112492506A (en) 2019-09-11 2019-09-11 Audio playing method and device, computer readable storage medium and robot

Country Status (1)

Country Link
CN (1) CN112492506A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140132700A (en) * 2014-10-02 2014-11-18 유한회사 밸류스트릿 The method and apparatus for assigning multi-channel audio to multiple mobile devices and its control by recognizing user's gesture
CN104967953A (en) * 2015-06-23 2015-10-07 Tcl集团股份有限公司 Multichannel playing method and system
CN106686491A (en) * 2016-12-29 2017-05-17 维沃移动通信有限公司 Sound processing method and mobile terminal
CN108028976A (en) * 2015-07-08 2018-05-11 诺基亚技术有限公司 Distributed audio microphone array and locator configuration
US20180139560A1 (en) * 2016-11-16 2018-05-17 Dts, Inc. System and method for loudspeaker position estimation
CN109831735A (en) * 2019-01-11 2019-05-31 歌尔科技有限公司 Suitable for the audio frequency playing method of indoor environment, equipment, system and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113068056A (en) * 2021-03-18 2021-07-02 广州虎牙科技有限公司 Audio playing method and device, electronic equipment and computer readable storage medium
CN113068056B (en) * 2021-03-18 2023-08-22 广州虎牙科技有限公司 Audio playing method, device, electronic equipment and computer readable storage medium
CN113411725A (en) * 2021-06-25 2021-09-17 Oppo广东移动通信有限公司 Audio playing method and device, mobile terminal and storage medium

Similar Documents

Publication Publication Date Title
JP5990345B1 (en) Surround sound field generation
US7936890B2 (en) System and method for generating auditory spatial cues
US6766028B1 (en) Headtracked processing for headtracked playback of audio signals
US20150264502A1 (en) Audio Signal Processing Device, Position Information Acquisition Device, and Audio Signal Processing System
US9769585B1 (en) Positioning surround sound for virtual acoustic presence
CN110049428B (en) Method, playing device and system for realizing multi-channel surround sound playing
CN108111952B (en) Recording method, device, terminal and computer readable storage medium
CN101112120A (en) Apparatus and method of processing multi-channel audio input signals to produce at least two channel output signals therefrom, and computer readable medium containing executable code to perform the me
CN109195063B (en) Stereo sound generating system and method
CN102577440B (en) Improve apparatus and method that are stereo or pseudo-stereophonic audio signals
CN101924317B (en) Dual-channel processing device, method and sound playing system thereof
CN104969571B (en) Method for rendering stereophonic signal
CN112492506A (en) Audio playing method and device, computer readable storage medium and robot
CN104853283A (en) Audio signal processing method and apparatus
US9124978B2 (en) Speaker array apparatus, signal processing method, and program
CN1983833A (en) Method for wireless transmitting audio signals and appararus thereof
CN100539741C (en) Strengthen the audio-frequency processing method of 3-D audio
CN101184349A (en) Three-dimensional ring sound effect technique aimed at dual-track earphone equipment
JP2000295698A (en) Virtual surround system
CN111857473B (en) Audio playing method and device and electronic equipment
KR20190034487A (en) Apparatus for Stereophonic Sound Service, Driving Method of Apparatus for Stereophonic Sound Service and Computer Readable Recording Medium
CN105075294A (en) Audio signal processing apparatus
JP2018191127A (en) Signal generation device, signal generation method, and program
CN102438200A (en) Method for outputting audio signals and terminal equipment
CN111050270A (en) Multi-channel switching method and device for mobile terminal, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210312)