CN110216681B - Intelligent robot analysis control system based on big data

Intelligent robot analysis control system based on big data

Info

Publication number
CN110216681B
CN110216681B (application CN201910667479A)
Authority
CN
China
Prior art keywords
action
data
module
signal
information
Prior art date
Legal status
Active
Application number
CN201910667479.7A
Other languages
Chinese (zh)
Other versions
CN110216681A (en)
Inventor
陈立光 (Chen Liguang)
Current Assignee
Guangdong Xinjiaqi Technology Co.,Ltd.
Original Assignee
Guangdong Jaki Technology And Education Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Jaki Technology And Education Co ltd
Priority to CN201910667479.7A
Publication of CN110216681A
Application granted
Publication of CN110216681B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion

Abstract

The invention discloses an intelligent robot analysis control system based on big data, comprising a data acquisition module, a signal transfer module, a controller, an action execution module, a database, an image acquisition module, an analysis processing module, a touch sensing module and a search updating module. The data acquisition module acquires the user's human body sensing information and voice element information in real time and transmits them to the signal transfer module; the human body sensing information comprises head movement data, hand movement data and foot movement data, and the voice element information comprises pitch data, tone intensity data, tone length data and tone color data. The invention associates the touch points a user favors on the intelligent robot with action and music combinations of different priorities and continuously adapts this content according to data analysis, thereby improving the robot's degree of intelligence and its fit with the user's preferences.

Description

Intelligent robot analysis control system based on big data
Technical Field
The invention relates to the technical field of intelligent robots, in particular to an intelligent robot analysis control system based on big data.
Background
In general, an intelligent robot is provided with various internal and external information sensors, such as visual, auditory, tactile or olfactory sensors, together with effectors that move its hands, feet, head, neck, joints and so on; an intelligent robot should therefore have at least sensory elements, reaction elements and thinking elements.
The document published as CN104882143A effectively achieves preliminary barrier-free human-machine interaction using only cloud speech recognition and cloud data processing: it answers the user's questions and supplies knowledge by drawing on the vast resources of the internet, and it strengthens the cloud server's database through joint feedback from a huge number of terminal products, making the cloud server more intelligent. Combined with existing big-data-based intelligent robot analysis control systems, however, it remains difficult to accurately judge the actions and music a user favors from the combination of gestures, voice and images and to distribute and combine them in graded sets so as to improve the rationality of their matching and the user's satisfaction; it is likewise difficult to associate the touch points a user favors on the intelligent robot with action and music combinations of different priorities and to continuously and adaptively update the robot's content according to data analysis, so as to improve the robot's degree of intelligence and its fit with the user's preferences.
To remedy these drawbacks, the following technical solution is provided.
Disclosure of Invention
The invention aims to provide an intelligent robot analysis control system based on big data that prioritizes the human body sensing information and identifies the user from the voice element information; once the voice is judged to belong to that person, various music tracks are called from a database for the intelligent robot to play at random, and the corresponding turning, walking and dance action-type gestures are called from the database for the intelligent robot to execute with different priorities;
the system records the intelligent robot's playing execution information and applies calibration, assignment and weighted analysis to the user's dynamic image information to obtain a favorite signal or a flat signal; when a flat signal is obtained, no information is transmitted and the action-type gestures and types of music tracks are replaced at random; when a favorite signal is obtained, the playing execution information is calibrated and integrated to obtain a first priority signal, a second priority signal, a third priority signal and a fourth priority signal, i.e. combinations of the action-type gestures and music tracks the user favors are transmitted, combinations the user does not favor are replaced and reconfigured at random, and the favored gestures and tracks, once distinguished, are distributed and combined in graded sets, improving the rationality of their matching and the user's satisfaction;
the invention calibrates and regionalizes the user's touch sensing information on the intelligent robot to obtain a first level domain, a second level domain, a third level domain and a fourth level domain, and configures the first, second, third and fourth priority signals against their respective level domains; when the recorded total number of music and action switches for a part grows too large, content relating to that part's music and action names is searched anew, the search results are fed back and imported into the part, and the part's original music and actions are deleted; the touch points the user favors on the intelligent robot are thus associated with action and music combinations of different priorities, and the robot's degree of intelligence and its fit with the user's preferences improve with the continuous content updates.
The technical problems to be solved by the invention are as follows:
(1) how to accurately judge the actions and music a user favors from the combination of gestures, voice and images, and how to distribute and combine the favored actions and music in graded sets;
(2) how to associate the touch points a user favors on the intelligent robot with action and music combinations of different priorities, and how to continuously and adaptively update the robot's content according to data analysis.
The purpose of the invention can be realized by the following technical scheme:
an intelligent robot analysis control system based on big data comprises a data acquisition module, a signal transfer module, a controller, an action execution module, a database, an image acquisition module, an analysis processing module, a touch sensing module and a search updating module;
the data acquisition module is used for acquiring the user's human body sensing information and voice element information in real time and transmitting them to the signal transfer module; the human body sensing information comprises head movement data, hand movement data and foot movement data, measured by common means such as distance sensors or laser sensors, and the voice element information comprises pitch data, tone intensity data, tone length data and tone color data, since different speakers are different sounding bodies and each sounding body's voice differs in pitch, intensity, duration and timbre;
after receiving the real-time human body sensing information and voice element information, the signal transfer module performs the signal generation operation to obtain turning action signals, walking action signals and dance action signals with different priorities, together with a conduct-playing signal or an interrupt-playing signal, and transmits them to the action execution module through the controller;
when the action execution module receives a real-time interrupt-playing signal, it interrupts the reception and transmission of data and signals; when it receives a real-time conduct-playing signal, it calls various music tracks from the database for the intelligent robot to play at random, the tracks including quiet, lyrical, sad, sweet and other types;
when the action execution module receives real-time turning, walking and dance action signals, it calls the corresponding turning, walking and dance action-type gestures from the database for the intelligent robot to execute with different priorities; various action-type gestures and various types of music tracks are pre-recorded in the database;
the action execution module is also used for recording the intelligent robot's playing execution information in real time, the playing execution information comprising the duration for which each action-type gesture is matched with each type of music track, the number of occurrences of each action-type gesture and the number of occurrences of each type of music track; the action execution module is also used for extracting dynamic image information from the image acquisition module, which collects the user's dynamic image information in real time, the dynamic image information comprising sound decibel data, arm movement data and turn-around count data, on which the preference analysis operation is performed; when a favorite signal is obtained, the action execution module transmits the playing execution information corresponding to a first time period to the analysis processing module, and when a flat signal is obtained, it transmits no information and replaces the action-type gestures and music tracks at random, i.e. combinations of action-type gestures and music tracks the user favors are transmitted, while combinations the user does not favor are replaced and reconfigured at random;
the analysis processing module is used for performing the combined allocation operation on the playing execution information received in real time for the first time period and transmitting the resulting first, second, third and fourth priority signals to the touch sensing module;
the touch sensing module is used for collecting the user's touch sensing information on the intelligent robot in real time; touching different parts of the robot causes the touch-sensitive elements in those parts to indicate different music and actions, and the music and actions can be switched; the touch sensing information comprises touch counts and touch interval durations, on which the grading processing operation is performed to obtain a first level domain, a second level domain, a third level domain and a fourth level domain;
the touch sensing module is also used for configuring the first, second, third and fourth priority signals received in real time against their corresponding level domains and recording the total number of music and action switches for each part within a third time period, set at twenty minutes; when a part's total switch count exceeds a preset value, that part's music and action names are transmitted to the search updating module, and otherwise nothing is transmitted;
the search updating module is used for searching according to the part's music and action names, feeding the search results back to the touch sensing module, importing the feedback into the part and deleting the part's original music and actions; the touch points the user favors on the intelligent robot are thus associated with action and music combinations of different priorities, and the robot's degree of intelligence and its fit with the user's preferences improve with the continuous content updates.
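As an illustration of the switch-count trigger just described, the following is a minimal Python sketch; the counter structure, the preset limit of 15 switches and the search_update() hook are illustrative assumptions, as the patent leaves them unspecified.

    from collections import defaultdict

    THIRD_PERIOD_S = 20 * 60          # "third time period ... twenty minutes";
                                      # a scheduler would call end_of_period() at this interval
    SWITCH_LIMIT = 15                 # preset value (not fixed by the patent text)

    switch_counts = defaultdict(int)  # body part -> music/action switch count

    def record_switch(part: str) -> None:
        """Record one music/action switch on a touch-sensitive part."""
        switch_counts[part] += 1

    def end_of_period(search_update) -> None:
        """At the end of each twenty-minute window, re-source content for any
        part whose total switch count exceeds the preset value, then reset.
        search_update(part) stands in for: search by the part's music and
        action names, import the results, delete the old content."""
        for part, count in switch_counts.items():
            if count > SWITCH_LIMIT:
                search_update(part)
        switch_counts.clear()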
Further, the specific steps of the signal generating operation are as follows:
Step one: acquire the real-time human body sensing information and mark the head movement data, hand movement data and foot movement data as A, B and C respectively; arrange the differences between A, B, C and their respective preset values within the same time period from large to small, and generate the turning action signal corresponding to A, the walking action signal corresponding to B and the dance action signal corresponding to C with priorities following that order;
Step two: acquire the real-time voice element information and mark the pitch data, tone intensity data, tone length data and tone color data as a, b, c and d respectively; compare a, b, c and d within the same time period against their respective preset ranges, generate a conduct-playing signal when all four lie within their ranges, and generate an interrupt-playing signal otherwise; the same time period denotes any ten-second interval, and the preset values and preset ranges are pre-recorded.
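A minimal Python sketch of the two steps of this signal generation operation follows; the preset values and preset ranges are placeholders, since the patent only states that they are pre-recorded.

    PRESETS = {"A": 5.0, "B": 5.0, "C": 5.0}            # head/hand/foot presets (assumed)
    VOICE_RANGES = {"a": (80, 400), "b": (40, 80),       # pitch, intensity,
                    "c": (0.1, 2.0), "d": (0.0, 1.0)}    # duration, timbre (assumed)

    def action_signals(A: float, B: float, C: float):
        """Step one: rank |value - preset| from large to small; that order
        becomes the priority of the turning/walking/dance action signals."""
        diffs = {"turning": abs(A - PRESETS["A"]),
                 "walking": abs(B - PRESETS["B"]),
                 "dance":   abs(C - PRESETS["C"])}
        return sorted(diffs, key=diffs.get, reverse=True)  # highest priority first

    def play_signal(a, b, c, d) -> str:
        """Step two: conduct playing only when every voice element lies in
        its preset range; otherwise interrupt playing."""
        values = {"a": a, "b": b, "c": c, "d": d}
        in_range = all(lo <= values[k] <= hi for k, (lo, hi) in VOICE_RANGES.items())
        return "conduct" if in_range else "interrupt"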
Further, the preference analysis operation comprises the following specific steps:
Step one: first acquire the dynamic image information within a first time period, then calibrate the sound coefficient Qa according to the total change in sound decibel data, the total change being the difference between the maximum and minimum sound decibel data; divide the total change into a first change level, a second change level and a third change level, and assign the sound coefficients Qa of the three levels the values Q1, Q2 and Q3 in turn, with Q1 > Q2 > Q3;
Step two: first acquire the dynamic image information within the first time period, then calibrate the movement coefficient Wb according to the total amount of arm movement data in the dynamic image information; divide the total into a first total grade, a second total grade and a third total grade, and assign the movement coefficients Wb of the three grades the values W1, W2 and W3 in turn, with W1 > W2 > W3;
Step three: acquire the dynamic image information within the first time period and calibrate the turning coefficient Ec according to the total turn-around count; divide the total into a first grade, a second grade and a third grade, and assign the turning coefficients Ec of the three grades the values E1, E2 and E3 in turn, with E1 > E2 > E3;
Step four: first assign the weight values q, w and e to the sound coefficient Qa, the movement coefficient Wb and the turning coefficient Ec in turn, with q > w > e and q + w + e = 1; then obtain the preference degree coefficient within the first time period according to the formula R = Qa × q + Wb × w + Ec × e; when the preference degree coefficient R is greater than a preset value r, generate a favorite signal, and otherwise generate a flat signal; the first time period denotes a thirty-minute interval;
the first, second and third change levels correspond respectively to more than 70 decibels, 30 to 70 decibels inclusive, and less than 30 decibels; the first, second and third total grades correspond respectively to more than 10 meters, 4 to 10 meters inclusive, and less than 4 meters; and the first, second and third grades correspond respectively to more than 20 times, 10 to 20 times inclusive, and less than 10 times.
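The preference analysis operation, with the thresholds just listed, can be sketched in Python as follows; the coefficient values Q1..Q3, W1..W3, E1..E3, the weights q, w, e and the preset value r are illustrative assumptions, the patent fixing only their ordering and the constraint q + w + e = 1.

    Q1, Q2, Q3 = 3.0, 2.0, 1.0      # sound coefficient Qa by change level
    W1, W2, W3 = 3.0, 2.0, 1.0      # movement coefficient Wb by total grade
    E1, E2, E3 = 3.0, 2.0, 1.0      # turning coefficient Ec by grade
    q, w, e = 0.5, 0.3, 0.2         # weights: q > w > e and q + w + e = 1
    r_preset = 2.0                  # preset value r (assumed)

    def preference(db_range: float, arm_metres: float, turns: int) -> str:
        """Calibrate Qa, Wb, Ec from the thirty-minute dynamic image data,
        then compute R = Qa*q + Wb*w + Ec*e and compare it with r."""
        Qa = Q1 if db_range > 70 else Q2 if db_range >= 30 else Q3
        Wb = W1 if arm_metres > 10 else W2 if arm_metres >= 4 else W3
        Ec = E1 if turns > 20 else E2 if turns >= 10 else E3
        R = Qa * q + Wb * w + Ec * e          # preference degree coefficient
        return "favorite" if R > r_preset else "flat"

    # e.g. a 75 dB swing, 12 m of arm movement and 25 turns in thirty minutes
    print(preference(75, 12, 25))             # -> favorite (R = 3.0 > 2.0)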
Further, the specific steps of the combined allocation operation are as follows:
Step one: acquire the playing execution information corresponding to the first time period; mark the duration for which each action-type gesture is matched with each type of music track as Tsj, s = 1..3, j = 1..m, and mark the number of occurrences of each action-type gesture as Us, s = 1..3, per the turning, walking and dance action-type gestures mentioned above; that is, s takes only the values 1 to 3, and T1j and U1, for s = 1, denote respectively the duration for which the first action-type gesture is matched with each type of music track and the number of occurrences of the first action-type gesture; mark the number of occurrences of each type of music track as Pj, j = 1..m; Us and Pj correspond one-to-one with Tsj, and Ts1 and P1, for j = 1, denote respectively the duration for which each action-type gesture is matched with the first type of music track and the number of occurrences of the first type of music track;
Step two: compare Us and Pj with the preset values u and p respectively; when Us is greater than u, place the action-type gestures corresponding to Us in set X, and when Us is less than or equal to u, place them in set Y; when Pj is greater than p, place the types of music tracks corresponding to Pj in set X, and when Pj is less than or equal to p, place them in set Z;
Step three: combine each action-type gesture in set X with each type of music track in set X in turn; when the corresponding Tsj is greater than a preset value t, the combination generates a first priority signal, and otherwise a second priority signal; combine each action-type gesture in set X with each type of music track in set Z in turn, and each type of music track in set X with each action-type gesture in set Y in turn; when the corresponding Tsj is greater than the preset value t, the combination generates a second priority signal, and otherwise a third priority signal; finally, combine each action-type gesture in set Y with each type of music track in set Z in turn; when the corresponding Tsj is greater than the preset value t, the combination generates a third priority signal, and otherwise a fourth priority signal. That is, after the action-type gestures and types of music tracks the user favors are distinguished, they are distributed and combined in graded sets to improve the rationality of their matching and the user's satisfaction.
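A compact Python sketch of this combined allocation operation follows, under the reading that set X gestures pair first with set X tracks; the preset values u, p and t are illustrative assumptions.

    u, p, t = 5, 5, 60.0   # occurrence presets and duration preset (assumed, seconds)

    def allocate(U: dict, P: dict, T: dict) -> dict:
        """U[s] = occurrences of gesture s (s = 1..3); P[j] = occurrences of
        track type j (j = 1..m); T[(s, j)] = matching duration Tsj.
        Returns a priority 1..4 for every (gesture, track) pair."""
        Xg = {s for s in U if U[s] > u}           # favored gestures
        Yg = set(U) - Xg
        Xt = {j for j in P if P[j] > p}           # favored track types
        Zt = set(P) - Xt
        priority = {}
        for s in U:
            for j in P:
                if s in Xg and j in Xt:           # both favored
                    base = 1
                elif s in Xg or j in Xt:          # exactly one favored
                    base = 2
                else:                             # neither favored
                    base = 3
                # a long matching duration keeps the base rank;
                # a short one demotes the pair by one priority level
                priority[(s, j)] = base if T[(s, j)] > t else base + 1
        return priority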
Further, the specific steps of the classification processing operation are as follows:
Step one: acquire the touch sensing information within a second time period, then mark the touch count of each part as Gi, i = 1..n, and the average touch interval duration of each part as Hi, i = 1..n; Gi and Hi correspond one-to-one;
Step two: compare Gi with the preset value g; when Gi is greater than g, place the parts corresponding to Gi in region A, and otherwise place them in region B; compare Hi with the preset range h; when Hi is greater than the maximum of h, place the parts corresponding to Hi in region C, when Hi lies within h, place them in region D, and when Hi is less than the minimum of h, place them in region E;
Step three: compare each part in region A with the parts in regions C, D and E; matching parts generate the first, second and third level domains in turn; compare each part in region B with the parts in regions C, D and E; matching parts generate the second, third and fourth level domains in turn; the second time period denotes one hour.
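This grading processing operation reduces to a small lookup, sketched below in Python; the preset value g and preset range h are illustrative assumptions.

    g = 10                 # preset touch-count value (assumed)
    h = (2.0, 10.0)        # preset touch-interval range in seconds (assumed)

    def level_domain(Gi: int, Hi: float) -> int:
        """Return the level domain (1 = highest) of one touch part from its
        touch count Gi and average touch interval duration Hi."""
        ab = "A" if Gi > g else "B"
        cde = "C" if Hi > h[1] else "D" if Hi >= h[0] else "E"
        # region A parts map C/D/E to levels 1/2/3; region B parts to 2/3/4
        table = {("A", "C"): 1, ("A", "D"): 2, ("A", "E"): 3,
                 ("B", "C"): 2, ("B", "D"): 3, ("B", "E"): 4}
        return table[(ab, cde)]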
The invention has the beneficial effects that:
1. the data acquisition module transmits the user's human body sensing information and voice element information, collected in real time, to the signal transfer module, which performs the signal generation operation on them to obtain turning, walking and dance action signals with different priorities, together with a conduct-playing signal or an interrupt-playing signal, and transmits them to the action execution module through the controller;
when the action execution module receives a real-time interrupt-playing signal, it interrupts the reception and transmission of data and signals; when it receives a real-time conduct-playing signal, it calls various music tracks from the database for the intelligent robot to play at random; when it receives real-time turning, walking and dance action signals, it calls the corresponding turning, walking and dance action-type gestures from the database for the intelligent robot to execute with different priorities;
the action execution module also records the intelligent robot's playing execution information in real time, extracts dynamic image information from the image acquisition module and performs the preference analysis operation on it, obtaining the preference degree coefficient R after calibration, assignment and weight distribution and comparing it with the preset value r to generate a favorite signal or a flat signal; when the action execution module obtains a favorite signal, it transmits the playing execution information corresponding to the first time period to the analysis processing module, and when it obtains a flat signal, it transmits no information and replaces the action-type gestures and music tracks at random, i.e. combinations of action-type gestures and music tracks the user favors are transmitted, while combinations the user does not favor are replaced and reconfigured at random;
the analysis processing module performs the combined allocation operation on the playing execution information received in real time for the first time period to obtain the first, second, third and fourth priority signals through calibration and set-graded processing; after the action-type gestures and music tracks the user favors are distinguished, they are distributed and combined in graded sets, improving the rationality of their matching and the user's satisfaction;
2. the touch sensing module performs the grading processing operation on the user's touch sensing information on the intelligent robot, collected in real time, obtaining the first, second, third and fourth level domains through calibration and regionalization, and configures the first, second, third and fourth priority signals against their corresponding level domains; when the recorded total number of music and action switches for a part exceeds a preset value, that part's music and action names are transmitted to the search updating module; the search updating module searches according to the part's music and action names, feeds the search results back to the touch sensing module, imports the feedback into the part and deletes the part's original music and actions; the touch points the user favors on the intelligent robot are thus associated with action and music combinations of different priorities, and the robot's degree of intelligence and its fit with the user's preferences improve with the continuous content updates.
Drawings
In order to facilitate understanding for those skilled in the art, the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is a block diagram of the system of the present invention;
Detailed Description
As shown in fig. 1, an intelligent robot analysis control system based on big data includes a data acquisition module, a signal transfer module, a controller, an action execution module, a database, an image acquisition module, an analysis processing module, a touch sensing module, and a search updating module;
the data acquisition module is used for acquiring the user's human body sensing information and voice element information in real time and transmitting them to the signal transfer module; the human body sensing information comprises head movement data, hand movement data and foot movement data, measured by common means such as distance sensors or laser sensors, and the voice element information comprises pitch data, tone intensity data, tone length data and tone color data, since different speakers are different sounding bodies and each sounding body's voice differs in pitch, intensity, duration and timbre;
after receiving the real-time human body sensing information and voice element information, the signal transfer module performs the signal generation operation to obtain turning action signals, walking action signals and dance action signals with different priorities, together with a conduct-playing signal or an interrupt-playing signal, and transmits them to the action execution module through the controller;
when the action execution module receives a real-time interrupt-playing signal, it interrupts the reception and transmission of data and signals; when it receives a real-time conduct-playing signal, it calls various music tracks from the database for the intelligent robot to play at random, the tracks including quiet, lyrical, sad, sweet and other types;
when the action execution module receives real-time turning, walking and dance action signals, it calls the corresponding turning, walking and dance action-type gestures from the database for the intelligent robot to execute with different priorities; various action-type gestures and various types of music tracks are pre-recorded in the database;
the action execution module is also used for recording the intelligent robot's playing execution information in real time, the playing execution information comprising the duration for which each action-type gesture is matched with each type of music track, the number of occurrences of each action-type gesture and the number of occurrences of each type of music track; the action execution module is also used for extracting dynamic image information from the image acquisition module, which collects the user's dynamic image information in real time, the dynamic image information comprising sound decibel data, arm movement data and turn-around count data, on which the preference analysis operation is performed; when a favorite signal is obtained, the action execution module transmits the playing execution information corresponding to a first time period to the analysis processing module, and when a flat signal is obtained, it transmits no information and replaces the action-type gestures and music tracks at random, i.e. combinations of action-type gestures and music tracks the user favors are transmitted, while combinations the user does not favor are replaced and reconfigured at random;
the analysis processing module is used for performing the combined allocation operation on the playing execution information received in real time for the first time period and transmitting the resulting first, second, third and fourth priority signals to the touch sensing module;
the touch sensing module is used for collecting the user's touch sensing information on the intelligent robot in real time; touching different parts of the robot causes the touch-sensitive elements in those parts to indicate different music and actions, and the music and actions can be switched; the touch sensing information comprises touch counts and touch interval durations, on which the grading processing operation is performed to obtain a first level domain, a second level domain, a third level domain and a fourth level domain;
the touch sensing module is also used for configuring the first, second, third and fourth priority signals received in real time against their corresponding level domains and recording the total number of music and action switches for each part within a third time period, set at twenty minutes; when a part's total switch count exceeds a preset value, that part's music and action names are transmitted to the search updating module, and otherwise nothing is transmitted;
the search updating module is used for searching according to the part's music and action names, feeding the search results back to the touch sensing module, importing the feedback into the part and deleting the part's original music and actions; the touch points the user favors on the intelligent robot are thus associated with action and music combinations of different priorities, and the robot's degree of intelligence and its fit with the user's preferences improve with the continuous content updates.
And the specific steps of the signal generation operation are as follows:
Step one: acquire the real-time human body sensing information and mark the head movement data, hand movement data and foot movement data as A, B and C respectively; arrange the differences between A, B, C and their respective preset values within the same time period from large to small, and generate the turning action signal corresponding to A, the walking action signal corresponding to B and the dance action signal corresponding to C with priorities following that order;
Step two: acquire the real-time voice element information and mark the pitch data, tone intensity data, tone length data and tone color data as a, b, c and d respectively; compare a, b, c and d within the same time period against their respective preset ranges, generate a conduct-playing signal when all four lie within their ranges, and generate an interrupt-playing signal otherwise; the same time period denotes any ten-second interval, and the preset values and preset ranges are pre-recorded.
And the specific steps of the preference analysis operation are as follows:
Step one: first acquire the dynamic image information within a first time period, then calibrate the sound coefficient Qa according to the total change in sound decibel data, the total change being the difference between the maximum and minimum sound decibel data; divide the total change into a first change level, a second change level and a third change level, and assign the sound coefficients Qa of the three levels the values Q1, Q2 and Q3 in turn, with Q1 > Q2 > Q3;
Step two: first acquire the dynamic image information within the first time period, then calibrate the movement coefficient Wb according to the total amount of arm movement data in the dynamic image information; divide the total into a first total grade, a second total grade and a third total grade, and assign the movement coefficients Wb of the three grades the values W1, W2 and W3 in turn, with W1 > W2 > W3;
Step three: acquire the dynamic image information within the first time period and calibrate the turning coefficient Ec according to the total turn-around count; divide the total into a first grade, a second grade and a third grade, and assign the turning coefficients Ec of the three grades the values E1, E2 and E3 in turn, with E1 > E2 > E3;
Step four: first assign the weight values q, w and e to the sound coefficient Qa, the movement coefficient Wb and the turning coefficient Ec in turn, with q > w > e and q + w + e = 1; then obtain the preference degree coefficient within the first time period according to the formula R = Qa × q + Wb × w + Ec × e; when the preference degree coefficient R is greater than a preset value r, generate a favorite signal, and otherwise generate a flat signal; the first time period denotes a thirty-minute interval;
the first, second and third change levels correspond respectively to more than 70 decibels, 30 to 70 decibels inclusive, and less than 30 decibels; the first, second and third total grades correspond respectively to more than 10 meters, 4 to 10 meters inclusive, and less than 4 meters; and the first, second and third grades correspond respectively to more than 20 times, 10 to 20 times inclusive, and less than 10 times.
And the specific steps of the combined allocation operation are as follows:
Step one: acquire the playing execution information corresponding to the first time period; mark the duration for which each action-type gesture is matched with each type of music track as Tsj, s = 1..3, j = 1..m, and mark the number of occurrences of each action-type gesture as Us, s = 1..3, per the turning, walking and dance action-type gestures mentioned above; that is, s takes only the values 1 to 3, and T1j and U1, for s = 1, denote respectively the duration for which the first action-type gesture is matched with each type of music track and the number of occurrences of the first action-type gesture; mark the number of occurrences of each type of music track as Pj, j = 1..m; Us and Pj correspond one-to-one with Tsj, and Ts1 and P1, for j = 1, denote respectively the duration for which each action-type gesture is matched with the first type of music track and the number of occurrences of the first type of music track;
Step two: compare Us and Pj with the preset values u and p respectively; when Us is greater than u, place the action-type gestures corresponding to Us in set X, and when Us is less than or equal to u, place them in set Y; when Pj is greater than p, place the types of music tracks corresponding to Pj in set X, and when Pj is less than or equal to p, place them in set Z;
Step three: combine each action-type gesture in set X with each type of music track in set X in turn; when the corresponding Tsj is greater than a preset value t, the combination generates a first priority signal, and otherwise a second priority signal; combine each action-type gesture in set X with each type of music track in set Z in turn, and each type of music track in set X with each action-type gesture in set Y in turn; when the corresponding Tsj is greater than the preset value t, the combination generates a second priority signal, and otherwise a third priority signal; finally, combine each action-type gesture in set Y with each type of music track in set Z in turn; when the corresponding Tsj is greater than the preset value t, the combination generates a third priority signal, and otherwise a fourth priority signal. That is, after the action-type gestures and types of music tracks the user favors are distinguished, they are distributed and combined in graded sets to improve the rationality of their matching and the user's satisfaction.
And the specific steps of the classification processing operation are as follows:
Step one: acquire the touch sensing information within a second time period, then mark the touch count of each part as Gi, i = 1..n, and the average touch interval duration of each part as Hi, i = 1..n; Gi and Hi correspond one-to-one;
Step two: compare Gi with the preset value g; when Gi is greater than g, place the parts corresponding to Gi in region A, and otherwise place them in region B; compare Hi with the preset range h; when Hi is greater than the maximum of h, place the parts corresponding to Hi in region C, when Hi lies within h, place them in region D, and when Hi is less than the minimum of h, place them in region E;
Step three: compare each part in region A with the parts in regions C, D and E; matching parts generate the first, second and third level domains in turn; compare each part in region B with the parts in regions C, D and E; matching parts generate the second, third and fourth level domains in turn; the second time period denotes one hour.
In operation, the data acquisition module transmits the user's human body sensing information and voice element information, collected in real time, to the signal transfer module, which performs the signal generation operation on them: the head movement data, hand movement data and foot movement data in the human body sensing information are marked A, B and C respectively, the differences between A, B, C and their respective preset values within the same time period are arranged from large to small, and the turning action signal corresponding to A, the walking action signal corresponding to B and the dance action signal corresponding to C are generated with priorities following that order; the pitch data, tone intensity data, tone length data and tone color data in the voice element information are marked a, b, c and d respectively and compared within the same time period against their respective preset ranges, a conduct-playing signal being generated when all four lie within their ranges and an interrupt-playing signal otherwise; the signals are transmitted to the action execution module through the controller;
when the action execution module receives a real-time interrupt-playing signal, it interrupts the reception and transmission of data and signals; when it receives a real-time conduct-playing signal, it calls various music tracks from the database for the intelligent robot to play at random; when it receives real-time turning, walking and dance action signals, it calls the corresponding turning, walking and dance action-type gestures from the database for the intelligent robot to execute with different priorities;
the action execution module also records the intelligent robot's playing execution information in real time, extracts dynamic image information from the image acquisition module and performs the preference analysis operation on it: the total change in sound decibel data, the total arm movement and the total turn-around count in the dynamic image information are calibrated in turn as the sound coefficient Qa, the movement coefficient Wb and the turning coefficient Ec, each of which is divided into three grades and assigned values and weights; the preference degree coefficient within the first time period is then obtained from the formula R = Qa × q + Wb × w + Ec × e, a favorite signal being generated when R is greater than the preset value r and a flat signal otherwise; when the action execution module obtains a favorite signal, it transmits the playing execution information corresponding to the first time period to the analysis processing module, and when it obtains a flat signal, it transmits no information and replaces the action-type gestures and music tracks at random, i.e. combinations of action-type gestures and music tracks the user favors are transmitted, while combinations the user does not favor are replaced and reconfigured at random;
the analysis processing module performs the combined allocation operation on the playing execution information received in real time for the first time period: the matching duration of each action-type gesture with each type of music track, the number of occurrences of each action-type gesture and the number of occurrences of each type of music track are marked Tsj, Us and Pj in turn; when Us is greater than u, the corresponding action-type gestures are placed in set X, and otherwise in set Y; when Pj is greater than p, the corresponding types of music tracks are placed in set X, and otherwise in set Z; each action-type gesture in set X is combined in turn with each type of music track in set X, the combination generating a first priority signal when the corresponding Tsj is greater than the preset value t and a second priority signal otherwise; each action-type gesture in set X is combined in turn with each type of music track in set Z, and each type of music track in set X with each action-type gesture in set Y, the combination generating a second priority signal when the corresponding Tsj is greater than t and a third priority signal otherwise; each action-type gesture in set Y is combined in turn with each type of music track in set Z, the combination generating a third priority signal when the corresponding Tsj is greater than t and a fourth priority signal otherwise; after the action-type gestures and music tracks the user favors are distinguished, they are thus distributed and combined in graded sets to improve the rationality of their matching and the user's satisfaction, and the resulting first, second, third and fourth priority signals are transmitted together to the touch sensing module;
the touch sensing module performs the grading processing operation on the user's touch sensing information on the intelligent robot, collected in real time: the touch count of each part and the average touch interval duration of each part are marked Gi and Hi in turn; when Gi is greater than the preset value g, the corresponding parts are placed in region A, and otherwise in region B; when Hi is greater than the maximum of the preset range h, the corresponding parts are placed in region C, when Hi lies within h, in region D, and when Hi is less than the minimum of h, in region E; the parts of region A that match parts of regions C, D and E generate the first, second and third level domains in turn, and the parts of region B that match parts of regions C, D and E generate the second, third and fourth level domains in turn; the first, second, third and fourth priority signals are then configured and combined with their corresponding level domains;
when the recorded total number of music and action switches for a part exceeds a preset value, that part's music and action names are transmitted to the search updating module; the search updating module searches according to the part's music and action names, feeds the search results back to the touch sensing module, imports the feedback into the part and deletes the part's original music and actions; the touch points the user favors on the intelligent robot are thus associated with action and music combinations of different priorities, and the robot's degree of intelligence and its fit with the user's preferences improve with the continuous content updates.
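Tying the sketches above together, a hypothetical control cycle might look like the following; the sensor values are fabricated placeholders and the function names come from the earlier sketches, not from the patent.

    # one hypothetical cycle: rank actions, gate playback, analyse preference,
    # then rank gesture/track pairs
    order = action_signals(A=7.2, B=5.8, C=6.5)        # -> ['turning', 'dance', 'walking']
    if play_signal(a=220, b=60, c=0.8, d=0.5) == "conduct":
        mood = preference(db_range=75, arm_metres=12, turns=25)
        if mood == "favorite":
            ranks = allocate(U={1: 8, 2: 3, 3: 6},
                             P={1: 9, 2: 2},
                             T={(s, j): 90.0 for s in (1, 2, 3) for j in (1, 2)})
            print(order, ranks[(1, 1)])                # pair (gesture 1, track 1) -> priority 1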
The foregoing is merely exemplary and illustrative of the present invention, and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.

Claims (6)

1. An intelligent robot analysis control system based on big data is characterized by comprising a data acquisition module, a signal transfer module, a controller, an action execution module, a database, an image acquisition module, an analysis processing module, a touch sensing module and a search updating module;
the data acquisition module is used for acquiring human body induction information and voice element information of a user in real time and transmitting the human body induction information and the voice element information to the signal transfer module, wherein the human body induction information comprises head movement data, hand movement data and foot movement data, and the voice element information comprises pitch data, tone intensity data, tone length data and tone color data;
the signal transfer module performs the signal generation operation after receiving the real-time human body sensing information and voice element information to obtain turning action signals, walking action signals and dance action signals with different priorities, together with a conduct-playing signal or an interrupt-playing signal, which are transmitted to the action execution module through the controller;
when the action execution module receives a real-time interrupt-playing signal, it interrupts the reception and transmission of data and signals; when the action execution module receives a real-time conduct-playing signal, various music tracks are called from the database for the intelligent robot to play at random;
when receiving real-time turning, walking and dance action signals, the action execution module calls the corresponding turning, walking and dance action-type gestures from the database for the intelligent robot to execute with different priorities; various action-type gestures and various types of music tracks are pre-recorded in the database;
the action execution module is also used for recording the intelligent robot's playing execution information in real time, the playing execution information comprising the duration for which each action-type gesture is matched with each type of music track, the number of occurrences of each action-type gesture and the number of occurrences of each type of music track; the action execution module is also used for extracting dynamic image information from the image acquisition module, which collects the user's dynamic image information in real time, the dynamic image information comprising sound decibel data, arm movement data and turn-around count data, on which the preference analysis operation is performed; when a favorite signal is obtained, the action execution module transmits the playing execution information corresponding to a first time period to the analysis processing module, and when a flat signal is obtained, it transmits no information and replaces the action-type gestures and music tracks at random;
the analysis processing module is used for performing the combined allocation operation on the playing execution information received in real time for the first time period and transmitting the resulting first, second, third and fourth priority signals to the touch sensing module;
the touch sensing module is used for collecting the user's touch sensing information on the intelligent robot in real time, the touch sensing information comprising touch counts and touch interval durations, on which the grading processing operation is performed to obtain a first level domain, a second level domain, a third level domain and a fourth level domain;
the touch sensing module is also used for configuring the first, second, third and fourth priority signals received in real time against their corresponding level domains, recording the total number of music and action switches for each part within a third time period, and transmitting a part's music and action names to the search updating module when that part's total switch count exceeds a preset value;
the search updating module is used for searching according to the part's music and action names, feeding the search results back to the touch sensing module, importing the feedback into the part, and deleting the part's original music and actions.
2. The intelligent robot analysis and control system based on big data as claimed in claim 1, wherein the specific steps of the signal generation operation are as follows:
Step one: acquire the real-time human body sensing information and mark the head movement data, hand movement data and foot movement data as A, B and C respectively; arrange the differences between A, B, C and their respective preset values within the same time period from large to small, and generate the turning action signal corresponding to A, the walking action signal corresponding to B and the dance action signal corresponding to C with priorities following that order;
Step two: acquire the real-time voice element information and mark the pitch data, tone intensity data, tone length data and tone color data as a, b, c and d respectively; compare a, b, c and d within the same time period against their respective preset ranges, generate a conduct-playing signal when all four lie within their ranges, and generate an interrupt-playing signal otherwise.
3. The intelligent big data-based robot analysis and control system according to claim 1, wherein the preference analysis operation comprises the following specific steps:
the method comprises the following steps: firstly, acquiring dynamic image information in a first time period, then calibrating a sound coefficient Qa according to the total sound decibel data change amount, dividing the sound coefficient Qa into three levels of a first change level, a second change level and a third change level, and sequentially assigning Q1, Q2 and Q3 to the sound coefficients Qa of the three levels, wherein Q1 is greater than Q2 and greater than Q3;
step two: from the same dynamic image information, calibrate a movement coefficient Wb according to the total amount of arm movement data and divide it into a first total magnitude, a second total magnitude and a third total magnitude, assigning the movement coefficient Wb the values W1, W2 and W3 for the three magnitudes in turn, where W1 > W2 > W3;
step three: likewise calibrate a turning coefficient Ec according to the total turn-around count and divide it into a first level, a second level and a third level, assigning the turning coefficient Ec the values E1, E2 and E3 for the three levels in turn, where E1 > E2 > E3;
step four: assign weight values q, w and e to the sound coefficient Qa, the movement coefficient Wb and the turning coefficient Ec in turn, where q > w > e and q + w + e = 1; then obtain the preference degree coefficient for the first time period according to the formula R = q×Qa + w×Wb + e×Ec; when R is greater than a preset value r, generate a favorite signal, otherwise generate a flat signal.
4. The intelligent robot analysis control system based on big data according to claim 3, wherein the first, second and third variation levels correspond to more than 70 dB, from 30 dB to 70 dB inclusive, and less than 30 dB, respectively; the first, second and third total magnitudes correspond to more than 10 meters, from 4 meters to 10 meters inclusive, and less than 4 meters, respectively; and the first, second and third levels correspond to more than 20 times, from 10 times to 20 times inclusive, and fewer than 10 times, respectively.
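By way of illustration only, the following Python sketch combines the steps of claims 3 and 4; the level values, the weights and the preset value r are assumptions, since the claims fix only their ordering, while the grading boundaries come from claim 4.

    # Illustrative sketch of the claims 3-4 preference analysis operation.
    # Level values, weights and the preset r are assumptions; only the
    # boundaries (70/30 dB, 10/4 m, 20/10 turns) come from claim 4.
    def grade(value, hi, lo, levels):
        # Above hi: first level; within [lo, hi]: second; below lo: third.
        if value > hi:
            return levels[0]
        if value >= lo:
            return levels[1]
        return levels[2]

    def preference_signal(sound_db_change, arm_movement_m, turn_count,
                          q=0.5, w=0.3, e=0.2, r=2.0):
        Qa = grade(sound_db_change, 70, 30, (3.0, 2.0, 1.0))  # Q1 > Q2 > Q3
        Wb = grade(arm_movement_m, 10, 4, (3.0, 2.0, 1.0))    # W1 > W2 > W3
        Ec = grade(turn_count, 20, 10, (3.0, 2.0, 1.0))       # E1 > E2 > E3
        R = q * Qa + w * Wb + e * Ec  # claim 3, step four
        return "favorite" if R > r else "flat"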
5. The intelligent robot analysis control system based on big data according to claim 1, wherein the combined allocation operation comprises the following specific steps:
step one: acquire the playing execution information corresponding to the first time period, and mark the matching duration data of each action-type gesture with each type of music track as Tsj, where s = 1…3 and j = 1…m; mark the occurrence frequency of each action-type gesture as Us, s = 1…3, and the occurrence frequency of each type of music track as Pj, j = 1…m, with Us and Pj corresponding one-to-one with Tsj;
step two: compare Us and Pj with preset values u and p respectively; when Us > u, place the action-type gestures corresponding to Us in an X set, and when Us ≤ u, place them in a Y set; when Pj > p, place the music tracks corresponding to Pj in the X set, and when Pj ≤ p, place them in a Z set;
step three: combine each action-type gesture in the X set with each music track in the X set in turn; when the corresponding Tsj is greater than a preset value t, the combination generates a first priority signal, otherwise a second priority signal; next, combine each action-type gesture in the X set with each music track in the Z set in turn, and each music track in the X set with each action-type gesture in the Y set in turn; when the corresponding Tsj is greater than the preset value t, the combination generates a second priority signal, otherwise a third priority signal; finally, combine each action-type gesture in the Y set with each music track in the Z set in turn; when the corresponding Tsj is greater than the preset value t, the combination generates a third priority signal, otherwise a fourth priority signal.
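By way of illustration only, the following Python sketch implements the three steps above for arbitrary gesture and track counts; the preset values u, p and t are assumptions.

    # Illustrative sketch of the claim-5 combined allocation operation.
    # Presets u, p and t are assumptions; the set logic follows steps two and three.
    def combined_allocation(T, U, P, u=5, p=5, t=60.0):
        # T[(s, j)]: matched duration; U[s]: gesture count; P[j]: track count.
        X_gestures = {s for s, n in U.items() if n > u}  # remaining gestures form the Y set
        X_tracks = {j for j, n in P.items() if n > p}    # remaining tracks form the Z set

        priority = {}
        for s in U:
            for j in P:
                long_match = T.get((s, j), 0.0) > t
                if s in X_gestures and j in X_tracks:
                    priority[(s, j)] = 1 if long_match else 2
                elif s in X_gestures or j in X_tracks:
                    # X-set gesture with Z-set track, or X-set track with Y-set gesture
                    priority[(s, j)] = 2 if long_match else 3
                else:
                    # Y-set gesture with Z-set track
                    priority[(s, j)] = 3 if long_match else 4
        return priority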
6. The intelligent robot analysis control system based on big data according to claim 1, wherein the hierarchical processing operation comprises the following specific steps:
step one: acquire the touch sensing information within a second time period, then mark the touch count of each part as Gi, i = 1…n, and the average touch interval duration of each part as Hi, i = 1…n, with Gi and Hi corresponding one-to-one;
step two: compare Gi with a preset value g; when Gi > g, place the part corresponding to Gi in region A, otherwise place it in region B; compare Hi with a preset range h; when Hi is greater than the maximum of the preset range h, place the part corresponding to Hi in region C, when Hi lies within the preset range h, place it in region D, and when Hi is less than the minimum of the preset range h, place it in region E;
step three: compare the parts in region A with the parts in regions C, D and E, and from the matching parts generate the first, second and third level domains in turn; then compare the parts in region B with the parts in regions C, D and E, and from the matching parts generate the second, third and fourth level domains in turn.
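By way of illustration only, the following Python sketch carries out the grading above; the preset g and the range (h_min, h_max) are assumptions.

    # Illustrative sketch of the claim-6 hierarchical processing operation.
    # The preset g and the range (h_min, h_max) are assumptions.
    def hierarchical_processing(G, H, g=10, h_min=2.0, h_max=8.0):
        # G[i]: touch count of part i; H[i]: average touch interval of part i (s).
        level_of = {}
        for part in G:
            row = "A" if G[part] > g else "B"  # step two, touch-count regions
            if H[part] > h_max:
                col = "C"
            elif H[part] >= h_min:
                col = "D"
            else:
                col = "E"
            # Step three: row A maps C/D/E to levels 1/2/3; row B maps them to 2/3/4.
            level_of[part] = (1 if row == "A" else 2) + {"C": 0, "D": 1, "E": 2}[col]
        return level_of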
CN201910667479.7A 2019-07-23 2019-07-23 Intelligent robot analysis control system based on big data Active CN110216681B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910667479.7A CN110216681B (en) 2019-07-23 2019-07-23 Intelligent robot analysis control system based on big data

Publications (2)

Publication Number Publication Date
CN110216681A CN110216681A (en) 2019-09-10
CN110216681B CN110216681B (en) 2020-08-14

Family

ID=67813937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910667479.7A Active CN110216681B (en) 2019-07-23 2019-07-23 Intelligent robot analysis control system based on big data

Country Status (1)

Country Link
CN (1) CN110216681B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110806757B (en) * 2019-11-08 2023-01-03 合肥佳讯科技有限公司 Unmanned aerial vehicle system based on 5G network remote control
CN111077806B (en) * 2019-11-25 2020-08-18 广西科技师范学院 Electric quantity management system for mobile robot
CN111802963B (en) * 2020-07-10 2022-01-11 小狗电器互联网科技(北京)股份有限公司 Cleaning equipment and interesting information playing method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9623561B2 (en) * 2012-10-10 2017-04-18 Kenneth Dean Stephens, Jr. Real time approximation for robotic space exploration
CN102981622A (en) * 2012-11-29 2013-03-20 广东欧珀移动通信有限公司 External control method and system of mobile terminal
CN108197115B (en) * 2018-01-26 2022-04-22 上海智臻智能网络科技股份有限公司 Intelligent interaction method and device, computer equipment and computer readable storage medium
CN208744840U (en) * 2018-09-27 2019-04-16 安徽昱康智能科技有限公司 Robot instruction's motion control system
CN109814436B (en) * 2018-12-26 2020-01-21 重庆青年职业技术学院 Agricultural planting management system based on Internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230316

Address after: 515000 Floors 1 to 8, General Workshop A, Building 6, Linghai Small and Micro Enterprise Entrepreneurship Park, East Side of Zhongyang Avenue, South Side of Qingyi Road, Linghai Industrial Park, Fengxiang Street, Chenghai District, Shantou City, Guangdong Province

Patentee after: Shantou Xinjiaqi Technology Co.,Ltd.

Address before: 515000 Xingda Industrial Zone, Fengxiang street, Chenghai District, Shantou City, Guangdong Province

Patentee before: GUANGDONG JAKI TECHNOLOGY AND EDUCATION Co.,Ltd.

CP01 Change in the name or title of a patent holder

Address after: 515000 Floors 1 to 8, General Workshop A, Building 6, Linghai Small and Micro Enterprise Entrepreneurship Park, East Side of Zhongyang Avenue, South Side of Qingyi Road, Linghai Industrial Park, Fengxiang Street, Chenghai District, Shantou City, Guangdong Province

Patentee after: Guangdong Xinjiaqi Technology Co.,Ltd.

Address before: 515000 Floors 1 to 8, General Workshop A, Building 6, Linghai Small and Micro Enterprise Entrepreneurship Park, East Side of Zhongyang Avenue, South Side of Qingyi Road, Linghai Industrial Park, Fengxiang Street, Chenghai District, Shantou City, Guangdong Province

Patentee before: Shantou Xinjiaqi Technology Co.,Ltd.