CN112894855A - Robot motion generation method and device, robot, and storage medium - Google Patents


Info

Publication number
CN112894855A
Authority
CN
China
Prior art keywords
action data
rhythm
action
robot
steering engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110202762.XA
Other languages
Chinese (zh)
Inventor
康志勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Youbisheng Technology Co.,Ltd.
Original Assignee
Guangdong Zhiyuan Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Zhiyuan Robot Technology Co Ltd filed Critical Guangdong Zhiyuan Robot Technology Co Ltd
Priority to CN202110202762.XA priority Critical patent/CN112894855A/en
Publication of CN112894855A publication Critical patent/CN112894855A/en
Pending legal-status Critical Current

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00 - Manipulators not otherwise provided for
    • B25J 11/003 - Manipulators for entertainment
    • B25J 11/0035 - Dancing, executing a choreography

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The embodiments of the application provide a robot action generation method and device, a robot, and a storage medium. The robot action generation method comprises: obtaining input audio information; obtaining first action data from an action database matched with the audio type of the audio information; and generating an action data packet according to the first action data, wherein at least two sections of action data in the action data packet are action data from the action database matched with the audio type of the audio information, and the matching degree of two adjacent sections of action data meets a matching condition. Rhythm point data of the audio information and the action data packet are fused to obtain an action data packet to be executed. The dance actions generated when the robot runs the action data packet to be executed fit the music rhythm more closely, the matching degree between the robot's actions and the music is improved, and the stiffness of the generated actions is reduced.

Description

Robot motion generation method and device, robot, and storage medium
Technical Field
The present invention relates to the field of computer program technology, and in particular, to a method and an apparatus for generating robot motions, a robot, and a storage medium.
Background
Existing robots (e.g., restaurant robots) are provided with an entertainment mode in which a user can control the robot to dance. In response to the user operation, the robot enters a dance mode, obtains dance movements from a dance library, and starts dancing when it starts playing dance music. However, when an existing robot dances, it basically retrieves fixed dance movements from the dance library; these movements cannot be fused with the music, so the dancing appears stiff.
Disclosure of Invention
The embodiment of the application provides a robot action generation method and device, a robot and a storage medium, which can improve the matching degree between the action of the robot and music and reduce the rigidity of the generated action.
In a first aspect, the present application provides a robot motion generating method, including:
acquiring input audio information;
acquiring first action data from an action database matched with the audio type of the audio information;
generating an action data packet according to the first action data, wherein the action data packet comprises at least two segments of action data which are adjacent in sequence in execution order, the at least two segments of action data are action data in an action database matched with the audio type of the audio information, the matching degree of the adjacent two segments of action data in the at least two segments of action data meets the matching condition, and the at least two segments of action data comprise the first action data;
and fusing the rhythm point data of the audio information and the action data packet to obtain an action data packet to be executed.
Further, any two sections of the motion data in the at least two sections of the motion data are different.
Further, the obtaining the first action data from the action database matched with the audio type of the audio information comprises:
randomly acquiring a piece of motion data from a motion database matched with the audio type of the audio information.
Further, the matching degree of the two adjacent sections of motion data meeting the matching condition includes:
matching degree of characteristic values of two adjacent sections of the action data meets matching conditions;
the characteristic values comprise starting position points, ending position points and/or activity frequencies of each steering engine of the robot.
Further, the generating an action data packet according to the first action data includes:
acquiring a starting position point, an ending position point and an activity frequency of each steering engine of each section of action data in at least two sections of action data;
the determining step of the starting position point includes:
after a position array of a steering engine in the current action data is obtained, first average position information of the first set number of pieces of position information in the position array is obtained, and the first average position information is used as the starting position point;
the determining of the end position point includes:
after the position array of the steering engine in the current action data is obtained, second average position information of the last set number of pieces of position information in the position array is obtained, and the second average position information is used as the ending position point;
the determining of the activity frequency comprises:
after a position array of a steering engine in current action data is obtained, absolute position differences between every two adjacent positions in the position array are obtained, and a first absolute position difference quantity and a second absolute position difference quantity in each absolute position difference are obtained, wherein the first absolute position difference is larger than a set difference value, and the second absolute position difference is smaller than or equal to the set difference value;
and determining the activity frequency of the target steering engine when executing the current action data according to the first absolute position difference quantity, the second absolute position difference quantity and the group number of the adjacent positions in the position array.
Further, the activity frequency may be determined by the following calculation:
f_sn = Abs(N1 - N2) / N3
where n represents the steering engine ID, f_sn represents the activity frequency of steering engine s_n when executing the current motion data, Abs represents the absolute value function, N1 represents the number of first absolute position differences, N2 represents the number of second absolute position differences, and N3 represents the number of groups of adjacent positions in the position array.
Further, before the obtaining the first action data from the action database matched with the audio type of the audio information, the method further includes:
and acquiring the audio time length of the audio information, and determining the quantity information of the at least two sections of action data according to the audio time length of the audio information.
Further, the matching degree of the two adjacent sections of motion data meeting the matching condition includes:
in the two adjacent sections of action data, the absolute value of the sum of the position differences between the starting position point of each steering engine in the characteristic value of the latter section of action data and the ending position point of the corresponding steering engine in the characteristic value of the former section of action data is a first parameter;
in the two adjacent sections of action data, the absolute value of the sum of the frequency differences between the activity frequency of each steering engine in the characteristic value of the latter section of action data and the activity frequency of the corresponding steering engine in the characteristic value of the former section of action data is a second parameter;
and if the first parameter is smaller than a first set value and the second parameter is smaller than a second set value, determining that the matching degree of the two adjacent sections of action data meets the matching condition.
Further, before the obtaining the first action data from the action database matched with the audio type of the audio information, the method further includes:
generating rhythm point data of the audio information, the generating rhythm point data of the audio information comprising:
acquiring an original rhythm moment list of audio data of the audio information;
filtering the original rhythm moment list according to a response performance threshold of the robot to obtain a first rhythm moment list;
obtaining a weight value of the corresponding audio type according to the audio type of the audio information, wherein, starting from the first rhythm moment in the first rhythm moment list, one target rhythm moment is extracted after every number of rhythm moments corresponding to the weight value, and the rhythm point data comprises the first rhythm moment and the extracted target rhythm moments.
Furthermore, the rhythm point data comprises time information of a plurality of rhythm points;
the step of fusing the rhythm point data of the audio information and the action data packet to obtain an action data packet to be executed comprises the following steps:
acquiring the position information of each steering engine of the robot at the corresponding moment in the action data packet according to the moment information of the plurality of rhythm points;
acquiring the running speed of the corresponding steering engine between all adjacent rhythm points according to the time information of the adjacent rhythm points among the plurality of rhythm points and the position information of each steering engine at the adjacent rhythm points;
and matching the speed of the position information corresponding to each adjacent rhythm point in the action data packet based on the running speed between all the adjacent rhythm points to obtain the action data packet to be executed.
In a second aspect, the present application also provides a robot motion generating apparatus, comprising:
a processor and a memory for storing at least one instruction which is loaded and executed by the processor to implement the robot action generating method as provided above in the first aspect.
In a third aspect, the present application further provides a robot including the robot body and the robot motion generating device according to the second aspect.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the robot motion generation method as provided in the first aspect above.
According to the technical scheme, the input audio information is obtained, first action data is obtained from an action database matched with the audio type of the audio information, and an action data packet is generated according to the first action data. The action data packet comprises at least two sections of action data that are sequentially adjacent in execution order; the at least two sections of action data are action data in the action database matched with the audio type of the audio information; any two sections of the at least two sections of action data are different; the matching degree of two adjacent sections of action data meets the matching condition; and the at least two sections of action data comprise the first action data. Furthermore, the rhythm point data of the audio information is obtained and fused with the action data packet to obtain the action data packet to be executed. The actions generated when the robot runs the action data packet to be executed better fit the style and rhythm of the input audio information, the matching degree between the robot's actions and the music is improved, and the stiffness of the generated actions is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic diagram of pulse code modulation data according to an embodiment of the present application;
FIG. 2 is a diagram of an original audio list provided by an embodiment of the present application;
fig. 3 is a schematic diagram of an original tempo time list provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a first rhythm moment list provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of tempo point data provided in an embodiment of the present application;
fig. 6 is a schematic diagram of position arrays when 4 actuators of left and right arms of a robot provided by another embodiment of the present application perform a certain basic action;
fig. 7 is a schematic flowchart of a robot motion generating method according to an embodiment of the present application;
FIG. 8 is a schematic view of a robot interaction flow diagram according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of a robot motion generating apparatus according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application provides a robot action generation method, based on which an action data packet to be executed can be generated, and the robot can execute corresponding actions based on the action data packet to be executed. The action data packet to be executed may be a dance action data packet to be executed. Before executing the robot action generation method, the following preparatory operations may be performed: 1. obtaining rhythm point data of each audio file stored in the robot; 2. creating basic actions of the corresponding type according to the audio type.
The flow of acquisition of the tempo point data is as follows:
Rhythm point data may be acquired separately for each audio file stored in the robot, where the audio may be music audio.
Fig. 1 is a schematic diagram of pulse code modulation data provided in an embodiment of the present application. As shown in fig. 1, the rhythm of the music may be extracted by decoding music audio of a corresponding format into pulse-code modulation (PCM) data through FFmpeg (Fast Forward MPEG), and then generating an original audio list as shown in fig. 2.
Further, list data of the signals in the original audio list converted from the frequency domain to the time domain may be obtained based on a corresponding audio processing library (e.g., the Librosa library), and filtering is performed according to a set threshold of BPM 140 (beats per minute), so as to obtain an original rhythm moment list as shown in fig. 3.
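As an illustration of this step, the following is a minimal sketch assuming the Librosa library named above; the specific calls (librosa.load, librosa.beat.beat_track) and the reading of the BPM 140 threshold as a minimum beat spacing are assumptions, not the patent's prescribed implementation.

```python
# Minimal sketch (assumed implementation): extract candidate rhythm moments
# from a music file with librosa and enforce the BPM threshold (140 BPM)
# as a minimum spacing between retained beats.
import librosa

def original_rhythm_moments(audio_path, max_bpm=140.0):
    # Load audio as mono PCM samples (librosa decodes via audioread/ffmpeg).
    y, sr = librosa.load(audio_path, sr=16000, mono=True)
    # Beat tracking returns an estimated tempo and beat times in seconds.
    _tempo, beat_times = librosa.beat.beat_track(y=y, sr=sr, units="time")
    # Keep only beats spaced at least one 140-BPM period apart.
    min_interval = 60.0 / max_bpm
    filtered, last = [], None
    for t in beat_times:
        if last is None or t - last >= min_interval:
            filtered.append(float(t))
            last = t
    return filtered  # original rhythm moment list
```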
Further, similar rhythm points in the original rhythm moment list are filtered according to the steering engine response performance. This step is mainly used to match the support of different hardware for dance actions, avoiding situations where the dance rhythm is too fast and the motor runs overloaded or cannot keep up. Specifically, the original rhythm moment list is filtered according to a robot response performance threshold, where the robot response performance threshold is t + 0.1, t = πnJ/(30·Tar), n represents the rotating speed (n = 1000 r/min), J represents the moment of inertia (kg·m²), and Tar represents the average starting torque (kgf·m), taken as 0.4. The sampling period of the original rhythm moment list is preset, and the preset threshold is greater than 0.3 s; for example, the sampling rate of the original rhythm moment list is 16000 Hz, that is, each motor control instruction can only be executed once in 16000/3 sampling points.
After the data of one packet is acquired from the original rhythm moment list, the amplitude and frequency of the sampling points are calculated: the amplitude is the average value of the data within 16000 sampling points after median filtering, and the frequency is the length of the sampling points on the time axis. The first rhythm moment list shown in fig. 4 is thus obtained.
The first rhythm moment list shown in fig. 4 includes m rhythm moments corresponding to the music, namely t1, t2, t3, ..., tm. Further, the weight value Wn of the music genre may be determined according to the music genre of the current music. For example, the music stored in the robot corresponds to 10 music genres, and corresponding music genre identifiers may be set, specifically music genre 1, music genre 2, ..., music genre 9, and music genre 10. The weight value corresponding to a music genre may be Wn = 10 - tn, where tn represents the genre value of a certain music genre and takes the values 1, 2, 3, ..., 9, 10, so that the weight values corresponding to music genres 1 to 10 are 9 to 0, respectively.
Furthermore, starting from the first rhythm moment in the first rhythm moment list, one target rhythm moment is extracted every Wn rhythm moments, and the rhythm point data includes the first rhythm moment in the first rhythm moment list and all the target rhythm moments. For example, when the music genre is music genre 2 (type 2), the weight value corresponding to the music genre is 10 - 2, that is, the weight value corresponding to music genre 2 is 8. When rhythm moments are extracted from a first rhythm moment list containing m rhythm moments, the first rhythm moment t1 is extracted first; then, counting from each extracted rhythm moment, every 8th subsequent rhythm moment is extracted as a target rhythm moment, that is, t9 is extracted after t1, then t17, t25, and so on. The rhythm point data comprises the first rhythm moment and all target rhythm moments. The higher the weight value, the fewer rhythm moments are extracted; the lower the weight value, the more rhythm moments are extracted.
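A minimal sketch of the two filtering steps just described, under the assumptions that the response performance threshold is supplied as a minimum spacing in seconds and that the genre weight follows the Wn = 10 - tn rule above; a stride of Wn reproduces the t1, t9, t17 example.

```python
# Sketch (assumptions: the response threshold arrives as a minimum spacing
# in seconds; the genre weight is Wn = 10 - tn as described in the text).
def first_rhythm_moments(original_moments, response_threshold):
    """Drop rhythm moments spaced more tightly than the servos can respond to."""
    result, last = [], None
    for t in original_moments:
        if last is None or t - last >= response_threshold:
            result.append(t)
            last = t
    return result

def rhythm_point_data(first_moments, genre_value):
    """Keep the first moment, then every Wn-th following moment."""
    wn = 10 - genre_value       # e.g. genre 2 -> weight 8
    stride = max(wn, 1)         # stride 8 yields t1, t9, t17, ... as in the example
    return first_moments[::stride]
```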
In order to generate the motion data packet according to the motion data, basic motion data can be created in advance, wherein the basic motion data can be basic dance motion data.
The creation flow for the base action may include the following operations:
the basic dance of the corresponding type can be recorded according to different music types, the matching degree of the dance and the music rhythm is not concerned when the basic dance is recorded, and only the integral recording is carried out.
The specific recording method of the basic dance can adopt robot arm track dragging, recording and learning, when recording is started, the master control can send an inquiry instruction to a steering engine encoder on a bus in a bus mode every set detection period (such as 1ms), and the current absolute position value of a target steering engine is inquired.
For example, a set of 10s basic dance is recorded, and the 10s basic dance requires the 10 steering engines of the robot to operate together, the master controller sends an inquiry instruction to the steering engine encoder on the bus every 1ms, 10000 point data sets are generated within 10s, and then the position array generated by each steering engine is obtained according to the specific steering engine ID.
If a set of 2.5s basic dance is recorded by dragging the left arm and the right arm of the robot, each arm comprises 4 steering engines, the position array of each steering engine of the 2.5s basic dance robot is shown in fig. 6, and the action array of each steering engine comprises a plurality of position information sequenced according to the movement time, namely a plurality of position information sequenced according to the movement time in 2.5 s.
According to the mode, a plurality of sets of basic dances of corresponding types are recorded for each music type, each set of basic dances comprises a position array of each steering engine of the robot, and the position arrays are stored in a basic action library of corresponding types according to the dance types, so that the basic action library is constructed. When the basic dance movements are basic dance movements, the basic dance movement library can be regarded as a basic dance movement library or a basic dance library.
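A sketch of the recording loop described above. The servo-bus API here (bus.read_position) is hypothetical shorthand for the encoder query instruction sent over the bus; the real master-control interface is not specified in the text, and the timing loop is simplified.

```python
# Sketch of the drag-teaching recording loop (bus.read_position is a
# hypothetical stand-in for the real encoder query instruction).
import time

def record_basic_dance(bus, servo_ids, duration_s, period_s=0.001):
    """Poll every steering engine encoder once per detection period while the
    arms are dragged through the dance; return one position array per servo."""
    position_arrays = {sid: [] for sid in servo_ids}
    steps = int(duration_s / period_s)
    for _ in range(steps):
        for sid in servo_ids:
            position_arrays[sid].append(bus.read_position(sid))
        time.sleep(period_s)   # simplified pacing of the 1 ms query period
    return position_arrays     # e.g. 10 s at 1 ms -> 10000 samples per servo
```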
Upon completion of the above-described preliminary operation, a new dance-action generating operation may be performed as follows.
Fig. 7 is a schematic flow chart of a method for generating a dancing action of a robot according to an embodiment of the present application, and as shown in fig. 7, the method for generating a dancing action of a robot includes the following steps:
step 701: the input audio information is acquired, and first motion data is acquired from a motion database matched with the audio type of the audio information, and the first motion data can be used as initial motion data of the loaded music.
The input audio information may be obtained as follows: in response to the music selected by the user, the robot loads the selected music and treats the loaded music as the input audio. First motion data is acquired from the motion database matched with the audio type of the audio information, and the first motion data may be used as the initial motion data for the loaded music. Specifically, when the user selects music and requires the robot to dance to the selected music, the robot may acquire the music type of that music, each piece of music having been bound to a corresponding music type. After acquiring the music type of the selected music, the robot can obtain a section of basic dance data from the basic dance library of the corresponding type as the initial dance data. The initial dance data may be selected in a default manner, such as according to the numbering order of the basic dance data in the basic dance library; for example, if the robot selected the basic dance data numbered 003 in the library of the corresponding type as the initial dance data the last time the user had it dance to music of music type 1, it may select the basic dance data numbered 004 in that library as the initial dance data this time.
In another implementation, the robot may randomly acquire the first motion data from the motion database matched with the audio type of the audio information. If the motion data are dance motion data and the audio is music audio, the robot may randomly acquire a section of dance motion data from the dance motion database matched with the music type of the music audio and use the acquired section of dance motion data as the initial dance data.
In this embodiment, before a section of basic dance is acquired from the basic dance library of the corresponding type as the initial dance, the audio time length of the loaded music may also be acquired, and the number N of basic dances that need to be acquired may be determined according to the audio time length of the loaded music.
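The text does not spell out how N is derived from the audio duration. A plausible sketch, assuming each basic dance segment has a nominal length in seconds (segment_s is an illustrative parameter, not from the patent):

```python
# Sketch (assumption): number of basic dances needed = audio duration divided
# by a nominal segment length, rounded up.
import math

def segments_needed(audio_duration_s, segment_s=10.0):
    return max(1, math.ceil(audio_duration_s / segment_s))
```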
Step 702: and generating an action data packet according to the first action data.
The action data packet comprises at least two sections of action data that are sequentially adjacent in execution order; the at least two sections of action data are action data in the action database matched with the audio type of the audio information; any two sections of the at least two sections of action data are different; the matching degree of two adjacent sections of action data meets the matching condition; and the at least two sections of action data comprise the first action data. In other words, the action data packet comprises the 1st to N-th sections of action data, which are sequentially adjacent. The 1st section of action data may be the initial action data. Among the 1st to N-th sections of action data, the matching degree between the i-th section and the (i-1)-th section of action data meets the matching condition, where N is an integer greater than 1 and i takes the values 2, 3, ..., N. The 1st to N-th sections of action data are all action data in the basic action library of the corresponding type, and any two of them are different.
In one embodiment, the action data may be dance action data, and the action data packet may be a dance action data packet.
Specifically, the dance action data packet is generated as follows:
after the initial dance data are obtained, determining the characteristic value of the initial dance data according to the position array of each steering engine in the basic dance data serving as the initial dance data. The characteristic value can comprise a starting position point, an ending position point and/or an activity frequency of each steering engine of the robot. In one implementation, the characteristic values of the acquired initial dance data may be a starting position point, an ending position point and an activity frequency of each steering engine of the robot in the initial dance.
(1) The starting position points of the steering engine are obtained in the following mode:
The basic dance serving as the initial dance (dance segment 1) comprises a position array for each steering engine, and the position array comprises n pieces of position information. From the n pieces of position information in the position array, the first set number (for example, the first 10) of pieces of position information of a target steering engine are obtained, the first average position information of these pieces of position information is obtained, and the first average position information is used as the starting position point of the target steering engine.
The position array of each steering engine comprises a plurality of pieces of position information ordered by movement time. Taking the first 10 pieces of position information of left arm steering engine 1 of the robot according to the movement-time ordering of the position array, the starting position point of left arm steering engine 1 is calculated as follows:
startL1P=(motorL1[0]+motorL1[1]+...+motorL1[9])/10
wherein, startL1P represents the starting position point of the left arm steering engine 1 of the robot, motorL1[0] represents the starting position of the left arm steering engine 1 of the robot in the position array, motorL1[1] is the second position of the left arm steering engine 1 of the robot in the position array, and so on.
And acquiring the starting position point of each steering engine related to the initial dance according to the mode.
(2) The acquisition mode of the end position point of the steering engine is as follows:
The basic dance serving as the initial dance (dance segment 1) comprises a position array for each steering engine, and the position array comprises n pieces of position information. From the n pieces of position information in the position array, the last set number (for example, the last 10) of pieces of position information of the target steering engine are obtained according to the movement-time ordering of the position array, the second average position information of these pieces of position information is obtained, and the second average position information is used as the ending position point of the target steering engine.
Taking the example of obtaining the last 10 pieces of position information of the left arm steering engine 1 of the robot, the calculation mode of the ending position point of the left arm steering engine 1 of the robot is as follows:
stopL1P=(motorL1[n-9]+motorL1[n-8]+...+motorL1[n-1]+motorL1[n])/10
wherein stopL1P represents the ending position point of left arm steering engine 1 of the robot, motorL1[n] represents the ending position of left arm steering engine 1 in the position array, motorL1[n-1] is the second-to-last position of left arm steering engine 1 in the position array, and so on.
And acquiring the end position point of each steering engine related to the initial dance according to the mode.
(3) The acquisition mode of the activity frequency of the steering engine is as follows:
the dance device comprises a position array of each steering engine, wherein the position array comprises n pieces of position information, absolute position differences between every two adjacent positions in the n pieces of position information in the position array are obtained, a first absolute position difference quantity and a second absolute position difference quantity in each absolute position difference are obtained, the first absolute position difference is larger than a set difference value (for example, the position difference is 10), and the second absolute position difference is smaller than or equal to the set difference value.
For example, the position array includes 30 pieces of position information { n1, n2, n3, n4, n5, … … n28, n29, n30 }, and an absolute position difference between two adjacent positions in the position array is determined, and specifically, 29 absolute position differences can be calculated. Further, of the 29 absolute position differences, the one greater than the set position difference (e.g., the position difference is 10) is the first absolute position difference, and the one smaller than or equal to the set position difference (e.g., the position difference is 10) is the second absolute position difference. Further, the number N1 of first absolute position differences and the number N2 of second absolute position differences among the 29 absolute position differences are obtained.
Obtaining the activity frequency according to the following formula:
f_sn = Abs(N1 - N2) / N3
where n represents the steering engine ID, f_sn represents the activity frequency of steering engine s_n when executing the current basic dance, Abs represents the absolute value function, N1 represents the number of first absolute position differences, N2 represents the number of second absolute position differences, and N3 represents the number of groups of adjacent positions in the position array.
Taking the position array of the left arm steering engine 1 of the robot as an example, the calculation mode of the activity frequency of the left arm steering engine 1 of the robot is as follows:
motorL1f=Abs(motorL1N1-motorL1N2)/(n-1)
where motorL1f represents the activity frequency of left arm steering engine 1 of the robot, Abs represents the absolute value function, motorL1N1 represents the number of first absolute position differences of left arm steering engine 1, motorL1N2 represents the number of second absolute position differences of left arm steering engine 1, n represents the number of pieces of position information in the position array, and n-1 represents the number of groups of adjacent positions in the position array.
Continuing with the example of the position array, if the position array of the left arm steering engine 1 of the robot includes 30 pieces of position information, and it is determined that 29 absolute position differences of the position array of the left arm steering engine 1 include 10 first absolute position differences N1 and 20 second absolute position differences N2, the calculation method of the activity frequency of the left arm steering engine 1 of the robot is as follows:
motorL1f=Abs(10-20)/29
and acquiring the activity frequency of each steering engine related to the initial dance according to the mode.
After the starting position point, the ending position point and the activity frequency of each steering engine about the initial dance are obtained by adopting the operations (1), (2) and (3), the characteristic value of the initial dance is obtained.
Taking the example that the initial dance needs two arms (a left arm and a right arm) of the robot to execute, and each arm has 4 steering engines, the characteristic array of the initial dance can be obtained by adopting the operations (1), (2) and (3), and the characteristic array is as follows:
motor[3][8]=
{startL1P...startL4P,startR1P..startR4P,
stopL1P...stopL4P,stopR1P...stopR4P,
motorL1f...motorL4f,motorR1f...motorR4f
}
and the characteristic array motor [3] [8] represents starting position points, ending position points and activity frequencies of 8 steering engines on the left arm and the right arm of the robot when the initial dance is executed. Specifically, startl1p.. startL4P and startr1p.. startR4P are respectively starting position points of 8 steering engines on the left and right arms, stopl1p.. stopL4P and stoppr1p.. stopR4P are respectively ending position points of 8 steering engines on the left and right arms, motorl1f.. motorL4f and motor1f.. motorR4f are respectively activity frequencies of 8 steering engines on the left and right arms.
After the characteristic array (the characteristic values) of the initial dance (dance segment 1) is obtained, the i-th dance segment following the initial dance is matched from the basic dance library of the corresponding type according to the characteristic values of the preceding segment. The absolute value of the sum of the position differences between the starting position point of each steering engine in the characteristic values of the i-th dance segment and the ending position point of the corresponding steering engine in the characteristic values of the (i-1)-th dance segment is a first parameter (x1); the absolute value of the sum of the frequency differences between the activity frequency of each steering engine in the characteristic values of the i-th dance segment and that of the corresponding steering engine in the characteristic values of the (i-1)-th dance segment is a second parameter (x2); when x1 is smaller than the first set value and x2 is smaller than the second set value, the matching degree between the characteristic values of the (i-1)-th dance segment and the i-th dance segment meets the matching condition.
Specifically, according to the characteristic value of the i-1 th dance, the selection rule matched with the i-th dance in the rest dance movements is as follows:
Motorstatus[3][8]=motor(i)[3][8]–motor(i-1)[3][8]
wherein [3] represents the set of the three characteristic values, namely the starting position point [0], the ending position point [1] and the activity frequency [2], and [8] represents the 8 steering engines on the left and right arms of the robot, namely steering engine [0], steering engine [1], steering engine [2], ..., steering engine [6] and steering engine [7]. motor(i-1)[3][8] represents the feature array formed by the feature values of the 8 steering engines when the robot executes the (i-1)-th dance action, motor(i)[3][8] represents the feature array formed by the feature values of the 8 steering engines when the robot executes the candidate dance action, and Motorstatus[3][8] represents the matching degree of the feature values of the two adjacent dance segments.
The matching mode for determining whether the matching degree of the two adjacent sections of motion data meets the matching condition is specifically as follows:
Regarding x1:
and calculating the absolute value x1 of the sum of the position differences of the starting position point of each steering engine in the characteristic value of the i-th dance and the ending position point of the corresponding steering engine in the characteristic value of the i-1 th dance. It is determined whether x1 is less than a first set point (e.g., the first set point is 80). The first setting value can be dynamically set according to the actual situation (such as the motion performance of the robot).
x1=Abs[(Motorstatus(i)[0][0]-Motorstatus(i-1)[1][0])+(Motorstatus(i)[0][1]-Motorstatus(i-1)[1][1])+…+(Motorstatus(i)[0][6]-Motorstatus(i-1)[1][6])+(Motorstatus(i)[0][7]-Motorstatus(i-1)[1][7])]
Wherein Motorstatus(i)[0][0] represents the starting position point [0] of steering engine [0] when executing the i-th dance action, and Motorstatus(i-1)[1][0] represents the ending position point [1] of steering engine [0] when executing the (i-1)-th dance action. Therefore, (Motorstatus(i)[0][0] - Motorstatus(i-1)[1][0]) represents the position difference between the starting position point of steering engine [0] in the i-th dance action and the ending position point of steering engine [0] in the (i-1)-th dance action. The position differences of the remaining steering engines are determined in the same way, the differences are summed, and the absolute value of the sum is taken to obtain x1.
Regarding x2:
and calculating the absolute value x2 of the sum of the frequency difference of the activity frequency of each steering engine in the characteristic value of the i-th dance and the activity frequency of the corresponding steering engine in the characteristic value of the i-1 th dance. It is determined whether x2 is less than a second set point (e.g., the second set point is 0.5). The second setting value can be dynamically set according to the actual situation (such as the motion performance of the robot).
x2=Abs[(Motorstatus(i)[2][0]-Motorstatus(i-1)[2][0])+(Motorstatus(i)[2][1]-Motorstatus(i-1)[2][1])+…+(Motorstatus(i)[2][6]-Motorstatus(i-1)[2][6])+(Motorstatus(i)[2][7]-Motorstatus(i-1)[2][7])]
Wherein Motorstatus(i)[2][0] represents the activity frequency [2] of steering engine [0] when executing the i-th dance action, and Motorstatus(i-1)[2][0] represents the activity frequency [2] of steering engine [0] when executing the (i-1)-th dance action. Therefore, (Motorstatus(i)[2][0] - Motorstatus(i-1)[2][0]) represents the frequency difference between the activity frequency of steering engine [0] in the i-th dance and in the (i-1)-th dance. The frequency differences of the remaining steering engines are determined in the same way, summed, and the absolute value of the sum is taken to obtain x2.
Based on the above operations, the feature values of two adjacent segments of motion (the (i-1)-th segment and the i-th segment) are matched, and when x1 is less than 80 and x2 is less than 0.5, the matching degree between the feature values of the i-th segment of motion data and those of the (i-1)-th segment can be determined to meet the matching condition.
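A compact sketch of this matching-degree check, assuming each segment's characteristic values are held as one (start, stop, frequency) tuple per steering engine (for example, as produced by a feature routine like the one sketched earlier); the 80 and 0.5 limits mirror the example values above.

```python
# Sketch of the matching condition: x1 on start/stop positions, x2 on
# activity frequencies, both taken as the absolute value of a sum.
def segments_match(prev_features, next_features, x1_limit=80, x2_limit=0.5):
    """prev_features / next_features: list of (start, stop, freq) per servo."""
    x1 = abs(sum(nxt[0] - prv[1] for prv, nxt in zip(prev_features, next_features)))
    x2 = abs(sum(nxt[2] - prv[2] for prv, nxt in zip(prev_features, next_features)))
    return x1 < x1_limit and x2 < x2_limit
```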
It should be noted that, in the subsequent matching process, the matched action data is not selected, that is, any two pieces of action data in the at least two pieces of action data are different.
Specifically, the process of matching the ith action (dance action) according to the feature value of the (i-1) th action (dance action) may include a first matching mode and a second matching mode, which are as follows:
first matching mode
Based on the matching conditions, several pieces of dance motion data meeting the matching conditions are matched for the (i-1)-th piece of dance motion data in the motion database of the corresponding type, and one of them is selected, in a set order or randomly, as the i-th dance motion.
Second matching mode
Based on the matching conditions, the candidate dance motion data are polled; as soon as a piece of dance motion data meeting the matching conditions for the (i-1)-th piece is matched, the matching operation of the current round is stopped immediately and that piece is taken as the i-th piece of dance motion data. The next round of matching is then started.
N segments of basic dance are obtained by the first or second matching mode, and the action data packet (which may be a dance action data packet) comprises the 1st to N-th basic dance segments in sequential order. The dance action data packet contains the position arrays of the 1st to N-th basic dance segments, and each piece of position information in these position arrays is adaptively adjusted in time; specifically, the starting time of the i-th dance segment is the ending time of the (i-1)-th dance segment.
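A sketch of how the N segments might be chained under the second matching mode, reusing the segments_match check sketched above; the library layout (a list of (position_arrays, features) pairs per music type) is an assumed representation, not the patent's data structure.

```python
# Sketch of the second matching mode: poll the remaining basic dances and
# take the first one whose feature values match the previous segment.
def build_action_packet(library, initial_index, n_segments):
    """library: list of (position_arrays, features) for one music type."""
    packet = [initial_index]
    for _ in range(n_segments - 1):
        prev_feat = library[packet[-1]][1]
        for idx, (_, feat) in enumerate(library):
            if idx in packet:
                continue            # already-matched segments are never reused
            if segments_match(prev_feat, feat):
                packet.append(idx)
                break
        else:
            break                   # no further matching segment found
    return packet                   # indices of the chained dance segments
```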
Step 703: obtaining rhythm point data of input audio information, and fusing the rhythm point data and an action data packet to obtain an action data packet to be executed.
The input audio information may be the music loaded after the user's selection. The acquired rhythm point data of the loaded music includes time information of a plurality of rhythm points; for example, the rhythm point data of a certain piece of music is shown in fig. 5, and the rhythm points of that music are 2.31 s, 3.44 s, ..., 184.87 s, 186.26 s. Furthermore, the position information of each steering engine of the robot at the corresponding moments in the position arrays of the dance action data packet can be obtained according to the time information of the plurality of rhythm points in the rhythm point data.
Further, the running speed of the corresponding steering engine between all adjacent rhythm points is acquired according to the time information of the adjacent rhythm points among the plurality of rhythm points and the position information of each steering engine at those adjacent rhythm points.
For example, the time information of two adjacent rhythm points for steering engine 1 of the robot's left arm is 15.75 s and 17.30 s respectively; the position of left arm steering engine 1 in the dance action data packet corresponding to 15.75 s is 0x812, and the position corresponding to 17.30 s is 0x818, that is, left arm steering engine 1 changes from 0x812 to 0x818 within this interval.
Further, the speed of the steering engine 1 of the left arm of the robot is matched from 0x812 to 0x818 according to the following formula:
(0x818-0x812)/(17.3-15.75)*Wn;
wherein Wn is a weight value corresponding to the music type of the currently loaded music.
After the running speed of left arm steering engine 1 from 0x812 to 0x818 is obtained in this manner, the movement of left arm steering engine 1 from 0x812 to 0x818 is speed-matched according to that running speed.
After the position information corresponding to each pair of adjacent rhythm points in the dance action data packet is speed-matched in the above manner, the dance data to be executed are obtained. The dance data to be executed can then be sent to the robot for execution, so that the robot's dance rhythm matches the music rhythm.
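A minimal sketch of this fusion step for one steering engine, assuming the rhythm-point times and the corresponding packet positions have already been looked up; the Wn scaling follows the (0x818 - 0x812)/(17.3 - 15.75)*Wn example above.

```python
# Sketch: for each pair of adjacent rhythm points, set the servo running
# speed from the position change over that interval, scaled by the genre
# weight Wn (mirrors the 0x812 -> 0x818 example in the text).
def match_speeds(rhythm_times, positions_at_rhythm, wn):
    """positions_at_rhythm[i] is the servo's packet position at rhythm_times[i]."""
    speeds = []
    for (t0, t1), (p0, p1) in zip(zip(rhythm_times, rhythm_times[1:]),
                                  zip(positions_at_rhythm, positions_at_rhythm[1:])):
        speeds.append((p1 - p0) / (t1 - t0) * wn)
    return speeds
```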
In this embodiment, after the dance data to be executed are acquired, their position data may be filtered according to the robot's actual limit range, so as to filter out position data that could produce motions damaging to the robot, and corresponding adaptive adjustments are then made after filtering, for example adaptive re-planning of the running speed.
Fig. 8 is a schematic diagram of a robot interaction process according to another embodiment of the present application, and as shown in fig. 8, the robot interaction process includes the following steps:
step 801: responding to the user operation, and triggering to enter a dance mode;
step 802: generating dance data to be executed of a corresponding type according to the music type selected by the user;
step 803: playing music and dancing along with music rhythm;
step 804: and after the music is played, the dance stops, and the robot acts and returns.
Regarding steps 801-804, it should be noted that the operation flow in step 802 for generating the dance data to be executed of the corresponding type according to the music type selected by the user may be the dance action generation flow of the embodiment shown in steps 701-703.
Fig. 9 is a schematic structural diagram of a robot motion generating apparatus according to another embodiment of the present application, and as shown in fig. 9, the apparatus may include a processor 901 and a memory 902, where the memory 902 is used to store at least one instruction, and the instruction is loaded by the processor 901 and executed to implement the robot motion generating method according to the embodiment shown in fig. 7.
Another embodiment of the present application provides a robot including a robot body and a robot motion generating apparatus according to the embodiment shown in fig. 9.
Another embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the robot motion generation method provided in the embodiment shown in fig. 7.
It should be understood that the application may be an application program (native app) installed on the terminal, or may also be a web page program (webApp) of a browser on the terminal, which is not limited in this embodiment of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a Processor (Processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (13)

1. A robot motion generation method, characterized in that the method comprises:
acquiring input audio information;
acquiring first action data from an action database matched with the audio type of the audio information;
generating an action data packet according to the first action data, wherein the action data packet comprises at least two segments of action data which are adjacent in sequence in execution order, the at least two segments of action data are action data in an action database matched with the audio type of the audio information, the matching degree of the adjacent two segments of action data in the at least two segments of action data meets the matching condition, and the at least two segments of action data comprise the first action data;
and fusing the rhythm point data of the audio information and the action data packet to obtain an action data packet to be executed.
2. The method of claim 1, wherein any two pieces of motion data in at least two pieces of motion data are different.
3. The method of claim 1, wherein the obtaining first action data from an action database that matches an audio type of the audio information comprises:
randomly acquiring a piece of motion data from a motion database matched with the audio type of the audio information.
4. The method according to claim 1, wherein the matching degree of the two adjacent segments of motion data meets the matching condition comprises:
matching degree of characteristic values of two adjacent sections of the action data meets matching conditions;
the characteristic values comprise starting position points, ending position points and/or activity frequencies of each steering engine of the robot.
5. The method of claim 4, wherein generating an action data packet from the first action data comprises:
acquiring a starting position point, an ending position point and an activity frequency of each steering engine of each section of action data in at least two sections of action data;
the determining step of the starting position point includes:
after a position array of a steering engine in the current action data is obtained, first average position information of the first set number of pieces of position information in the position array is obtained, and the first average position information is used as the starting position point;
the determining of the end position point includes:
after the position array of the steering engine in the current action data is obtained, second average position information of the last set number of pieces of position information in the position array is obtained, and the second average position information is used as the ending position point;
the determining of the activity frequency comprises:
after a position array of a steering engine in current action data is obtained, absolute position differences between every two adjacent positions in the position array are obtained, and a first absolute position difference quantity and a second absolute position difference quantity in each absolute position difference are obtained, wherein the first absolute position difference is larger than a set difference value, and the second absolute position difference is smaller than or equal to the set difference value;
and determining the activity frequency of the target steering engine when executing the current action data according to the first absolute position difference quantity, the second absolute position difference quantity and the group number of the adjacent positions in the position array.
6. The method of claim 5, wherein the activity frequency is determined by the following calculation:
f_sn = Abs(N1 - N2) / N3
where n represents the steering engine ID, f_sn represents the activity frequency of steering engine s_n when executing the current motion data, Abs represents the absolute value function, N1 represents the number of first absolute position differences, N2 represents the number of second absolute position differences, and N3 represents the number of groups of adjacent positions in the position array.
7. The method of claim 4, wherein prior to obtaining the first action data from the action database that matches the audio type of the audio information, further comprising:
and acquiring the audio time length of the audio information, and determining the quantity information of at least two sections of the action data according to the audio time length of the audio information.
8. The method according to claim 5, wherein the matching degree of the two adjacent segments of motion data meets the matching condition comprises:
in the two adjacent sections of action data, the absolute value of the sum of the position differences between the starting position point of each steering engine in the characteristic value of the latter section of action data and the ending position point of the corresponding steering engine in the characteristic value of the former section of action data is a first parameter;
in the two adjacent sections of action data, the absolute value of the sum of the frequency differences between the activity frequency of each steering engine in the characteristic value of the latter section of action data and the activity frequency of the corresponding steering engine in the characteristic value of the former section of action data is a second parameter;
and if the first parameter is smaller than a first set value and the second parameter is smaller than a second set value, determining that the matching degree of the two adjacent sections of action data meets the matching condition.
9. The method of claim 1, further comprising, prior to said obtaining first action data from an action database that matches an audio type of the audio information:
generating rhythm point data of the audio information, the generating rhythm point data of the audio information comprising:
acquiring an original rhythm moment list of audio data of the audio information;
filtering the original rhythm moment list according to a response performance threshold of the robot to obtain a first rhythm moment list;
obtaining a weighted value corresponding to the audio type of the audio information, wherein, starting from the first rhythm moment in the first rhythm moment list, one target rhythm moment is extracted after every number of rhythm moments corresponding to the weighted value, and the rhythm point data comprises the first rhythm moment and the extracted target rhythm moments.
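The following sketch illustrates one reading of claim 9: the raw rhythm moments are first thinned by a minimum interval the robot's steering engines can actually follow (the response performance threshold), then thinned again using the audio-type weighted value. The interval value, the weight table, and the "skip `weighted_value` moments, then take one" interpretation are all assumptions for illustration.

def generate_rhythm_points(original_moments_ms, min_interval_ms, weighted_value):
    # Step 1: drop rhythm moments closer together than the robot can respond to.
    first_list = []
    for t in sorted(original_moments_ms):
        if not first_list or t - first_list[-1] >= min_interval_ms:
            first_list.append(t)
    # Step 2: keep the first rhythm moment, then take one target moment after every
    # `weighted_value` moments of the filtered list.
    return first_list[::weighted_value + 1]

AUDIO_TYPE_WEIGHTS = {"pop": 1, "classical": 3}  # assumed mapping of audio type -> weighted value
moments = [0, 180, 250, 500, 620, 750, 1000, 1200]
print(generate_rhythm_points(moments, 200, AUDIO_TYPE_WEIGHTS["pop"]))  # -> [0, 500, 1000]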
10. The method according to any one of claims 1-9, wherein the rhythm point data includes time information of a plurality of rhythm points;
the fusing of the rhythm point data of the audio information with the action data packet to obtain the action data packet to be executed comprises:
acquiring the position information of each steering engine of the robot at the corresponding moment in the action data packet according to the moment information of the plurality of rhythm points;
acquiring the running speed of the corresponding steering engine between all adjacent rhythm points according to the moment information of the adjacent rhythm points among the plurality of rhythm points and the position information of each steering engine at those adjacent rhythm points;
and matching the speed of the position information corresponding to each pair of adjacent rhythm points in the action data packet based on the running speeds between all adjacent rhythm points, so as to obtain the action data packet to be executed.
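A sketch of the claim 10 fusion step under assumed data structures: the action data packet is modelled as a list of timed frames of steering engine positions, and the position look-up simply holds the most recent frame, which is an assumption rather than the patent's own interpolation.

def fuse_rhythm_points(action_packet, rhythm_points_ms):
    # action_packet: list of frames, each {"t": time in ms, "positions": {engine_id: position}}.
    # rhythm_points_ms: sorted moment information of the rhythm points, in ms.
    def position_at(t_ms, engine_id):
        frames = [f for f in action_packet if f["t"] <= t_ms]
        return (frames[-1] if frames else action_packet[0])["positions"][engine_id]

    engines = list(action_packet[0]["positions"])
    fused = [dict(frame, speeds={}) for frame in action_packet]

    for t0, t1 in zip(rhythm_points_ms, rhythm_points_ms[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        # Running speed each steering engine needs between this pair of adjacent rhythm points.
        speeds = {e: abs(position_at(t1, e) - position_at(t0, e)) / dt for e in engines}
        # Attach that speed to every frame falling between the two rhythm points.
        for frame in fused:
            if t0 <= frame["t"] < t1:
                frame["speeds"] = speeds
    return fused

packet = [{"t": 0, "positions": {1: 500}}, {"t": 400, "positions": {1: 700}}, {"t": 800, "positions": {1: 650}}]
print(fuse_rhythm_points(packet, [0, 500, 1000]))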
11. A robot motion generating apparatus, characterized in that the apparatus comprises:
a processor and a memory for storing at least one instruction, which is loaded and executed by the processor to implement the robot motion generation method according to any one of claims 1-10.
12. A robot comprising a robot body and the robot motion generation device according to claim 11.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, is adapted to carry out a robot motion generation method according to any one of claims 1-10.
CN202110202762.XA 2021-02-23 2021-02-23 Robot motion generation method and device, robot, and storage medium Pending CN112894855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110202762.XA CN112894855A (en) 2021-02-23 2021-02-23 Robot motion generation method and device, robot, and storage medium

Publications (1)

Publication Number Publication Date
CN112894855A (en) 2021-06-04

Family

ID=76124447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110202762.XA Pending CN112894855A (en) 2021-02-23 2021-02-23 Robot motion generation method and device, robot, and storage medium

Country Status (1)

Country Link
CN (1) CN112894855A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080075275A (en) * 2007-02-12 2008-08-18 박진규 Robot for dancing by music
CN107009371A (en) * 2017-06-14 2017-08-04 上海思依暄机器人科技股份有限公司 A kind of method and device for automatically adjusting machine people's dance movement
CN108527376A (en) * 2018-02-27 2018-09-14 深圳狗尾草智能科技有限公司 Control method, apparatus, equipment and the medium of robot dance movement
CN109176541A (en) * 2018-09-06 2019-01-11 南京阿凡达机器人科技有限公司 A kind of method, equipment and storage medium realizing robot and dancing
CN111086007A (en) * 2018-10-23 2020-05-01 广州奥睿智能科技有限公司 Robot dance rhythm self-adaptation system and robot
CN110955786A (en) * 2019-11-29 2020-04-03 网易(杭州)网络有限公司 Dance action data generation method and device

Similar Documents

Publication Publication Date Title
US10286321B2 (en) Time-shifted multiplayer game
US7333090B2 (en) Method and apparatus for analysing gestures produced in free space, e.g. for commanding apparatus by gesture recognition
CN110955786A (en) Dance action data generation method and device
US10828784B2 (en) Method and apparatus for controlling dancing of service robot
CN109876444A (en) Method for exhibiting data and device, storage medium and electronic device
CN111127598B (en) Animation playing speed adjusting method and device, electronic equipment and medium
CN107479872B (en) Android animation set playing method, storage medium, electronic device and system
JP2014522528A (en) Method and apparatus for automatically reproducing facial expressions with virtual images
WO2014051584A1 (en) Character model animation using stored recordings of player movement interface data
US10421017B2 (en) Balancing multiple team based games
CN112894855A (en) Robot motion generation method and device, robot, and storage medium
CN110333839A (en) A kind of audio data processing method, device and medium
CN108415773A (en) A kind of efficient Method for HW/SW partitioning based on blending algorithm
CN112975963B (en) Robot action generation method and device and robot
JP6198375B2 (en) Game program and game system
JP4068087B2 (en) Robot, robot action plan execution device, and action plan execution program
WO2007046613A1 (en) Method of representing personality of mobile robot based on navigation logs and mobile robot apparatus therefor
CN111061366A (en) Method for robot to autonomously decide current behavior decision
JP2019220187A5 (en) Service providing system and program
JP7060829B1 (en) Information processing equipment, information processing methods, and programs
CN105549846B (en) A kind of method for splitting and mobile terminal of voice box group
JP5954288B2 (en) Information processing apparatus and program
WO2023037507A1 (en) Gameplay control learning device
CN113218399B (en) Maze navigation method and device based on multi-agent layered reinforcement learning
JP5954287B2 (en) Information processing apparatus and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
  Effective date of registration: 20210728
  Address after: No. 5055, Tianhe Road, Guangzhou
  Applicant after: Guangdong Youbisheng Technology Co.,Ltd.
  Address before: B2, No.1 Bochuang Road, Beijiao Town, Shunde District, Foshan City, Guangdong Province
  Applicant before: Guangdong Zhiyuan Robot Technology Co.,Ltd.