CN108345251A - Processing method, system, device, and medium for robot perception data - Google Patents
- Publication number: CN108345251A
- Application number: CN201810247150.0A
- Authority
- CN
- China
- Prior art keywords
- data
- information
- robot
- sensor
- perception data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
- G05B19/0423—Input/output
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/25—Pc structure of the system
- G05B2219/25257—Microcontroller
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Manipulator (AREA)
Abstract
The present invention provides a processing method, system, device, and medium for robot perception data. The method includes: receiving external perception data; determining the category of the sensor that received the external perception data; converting the external perception data into formatted data according to that category; and processing the formatted data and storing the processing results in a buffer by area, so that the robot's action-decision unit can make action decisions based on the processing results in the buffer. The processing method, system, device, and medium for robot perception data provided by the invention can improve a robot's perceptual intelligence and cognitive intelligence, promoting the development of robots with human-like intelligence.
Description
Technical field
The present invention relates to the field of robot control, and more particularly to a processing method, system, device, and medium for robot perception data.
Background art
The basic functions generally recognized by the industry as belonging to an intelligent robot fall into three categories: perception, cognition, and action. Existing intelligent robots, however, often bypass the robot's perceptual and cognitive abilities and control the robot's actions directly through rules. In other words, most existing intelligent robots merely "appear" intelligent; they do not actually possess human-like perceptual and cognitive intelligence. In the prior art, the reasons for this lack of human-like intelligence include the following:

First, with the rapid spread of the word "intelligent" and the enormous economic and social benefits it has brought, many enterprises, while steering the core of their research or products toward "intelligent machines", have pursued speed at the expense of investing enough effort and time to build a solid foundation of "intelligence".

Second, the algorithms that would help improve robot intelligence are not yet mature, so controlling robot actions with hard-coded rules has become a common choice.

Third, because enterprises generally pay insufficient attention to robots' perceptual intelligence and cognitive intelligence, robot products are limited by the sensing capabilities of the outside world and of their own hardware. Lacking adequate data sources, robots find it difficult to attain human-like perceptual and cognitive intelligence.

In summary, there is a need to improve robots' perceptual intelligence and cognitive intelligence, so as to promote the development of robots with human-like intelligence.
Summary of the invention
The technical problem to be solved by the present invention is to provide a processing method, system, device, and medium for robot perception data, so as to improve a robot's perceptual intelligence and cognitive intelligence and promote the development of robots with human-like intelligence.

To solve the above technical problem, the technical solution provided by the present invention is as follows:
In a first aspect, an embodiment of the present invention provides a processing method for robot perception data, the method including:

receiving external perception data;

determining the category of the sensor that received the external perception data;

converting the external perception data into formatted data according to the category;

processing the formatted data, and storing the processing results in a buffer by area, so that the robot's action-decision unit can make action decisions based on the processing results in the buffer.
Further, the formatted data is communication protocol data, and the header of the data portion of the communication protocol data includes: robot ID, protocol version, information type, and default bits.

The data field of the data portion of the communication protocol data includes: the external perception data transmitted in a certain order, and index identifiers for the external perception data.

Further, the information type includes the robot's daily information and interaction information; the default bits include the robot's sensor category identifiers; and the index identifiers include the number and size of the external perception data items.
Further, the sensor categories include: a first category of sensors that process the collected information before outputting it, and a second category of sensors that output the collected information directly; wherein

the first category of sensors includes gyroscopes, ambient temperature sensors, and ambient humidity sensors;

the second category of sensors includes cameras and microphones.
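The two-way split above can be sketched as a simple lookup. This is an illustrative sketch, not the patent's implementation; the sensor names and the fallback to raw handling are assumptions.

```python
from enum import Enum

class SensorClass(Enum):
    PREPROCESSED = 1  # processes readings before output (gyroscope, temperature, humidity)
    RAW = 2           # outputs raw streams needing separate handling (camera, microphone)

# Hypothetical name-to-class table; the patent lists these sensors only as examples.
SENSOR_CLASSES = {
    "gyroscope": SensorClass.PREPROCESSED,
    "ambient_temperature": SensorClass.PREPROCESSED,
    "ambient_humidity": SensorClass.PREPROCESSED,
    "camera": SensorClass.RAW,
    "microphone": SensorClass.RAW,
}

def classify_sensor(name: str) -> SensorClass:
    """Return a sensor's category; unknown sensors fall back to RAW handling."""
    return SENSOR_CLASSES.get(name, SensorClass.RAW)
```

Defaulting unknown sensors to RAW is one possible design choice: raw data can always be processed later, whereas treating an unknown stream as pre-processed could silently skip needed extraction.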
Further, processing the formatted data includes:

when the data indicated by the formatted data is image information, extracting the object information from the image information, and obtaining the emotion information corresponding to the image information using an emotion classifier;

when the data indicated by the formatted data is video information, extracting the object information, action information, and object-change information from the video information, and obtaining the emotion information corresponding to the video information using an emotion classifier;

when the data indicated by the formatted data is audio information, converting the audio information into text, converting the text into a syntactic parse tree, and obtaining the emotion information corresponding to the audio information using an emotion classifier.

Further, the object information includes scenes, objects, and persons.
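The per-modality branching above amounts to a dispatch on the data's kind. The following is a minimal sketch under stated assumptions: the handler bodies are stubs standing in for the real object extractors, ASR, and emotion classifier, and all names are invented.

```python
def handle_image(data):
    # Stand-ins for the object extractor and the emotion classifier
    return {"objects": [], "emotion": "neutral"}

def handle_video(data):
    # Video additionally yields action and object-change information across frames
    return {"objects": [], "actions": [], "object_changes": [], "emotion": "neutral"}

def handle_audio(data):
    text = ""  # a real system would run ASR here, then build a parse tree
    return {"text": text, "parse_tree": None, "emotion": "neutral"}

DISPATCH = {"image": handle_image, "video": handle_video, "audio": handle_audio}

def process_formatted(kind, data):
    """Route a formatted-data payload to its modality-specific handler."""
    if kind not in DISPATCH:
        raise ValueError(f"unsupported modality: {kind!r}")
    return DISPATCH[kind](data)
```

A table-driven dispatch keeps each modality's extraction logic isolated, which matches the description's point that each unstructured category needs its own rules.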
In a second aspect, an embodiment of the present invention provides a processing system for robot perception data, the system including:

a sensor unit for receiving external perception data;

a category-determination unit for determining the category of the sensor that received the external perception data;

a format-conversion unit for converting the external perception data into formatted data according to the category;

a data-processing unit for processing the formatted data and storing the processing results in a buffer by area, so that the robot's action-decision unit can make action decisions based on the processing results in the buffer.

Further, the data-processing unit is a local data-processing device or a cloud server.
In a third aspect, an embodiment of the present invention provides a computer device, including: at least one processor, at least one memory, and computer program instructions stored in the memory, which, when executed by the processor, implement the method of the first aspect of the above embodiments.

In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which computer program instructions are stored, which, when executed by a processor, implement the method of the first aspect of the above embodiments.
The processing method, system, device, and medium for robot perception data provided by embodiments of the present invention classify the processing of collected data by sensor, convert the external perception data output by the sensors into formatted data before processing it, and store the processing results in a buffer by area, so that the robot's action-decision unit can make action decisions based on the processing results in the buffer, thereby improving the robot's perceptual intelligence and cognitive intelligence.
Description of the drawings
Fig. 1 is a flowchart of the processing method for robot perception data provided by an embodiment of the present invention;

Fig. 2 is a diagram of the processing system for robot perception data provided by an embodiment of the present invention;

Fig. 3 is a hardware architecture diagram of the computer device provided by an embodiment of the present invention.
Detailed description of the embodiments
The present invention is further illustrated below through specific embodiments. It should be understood, however, that these embodiments are provided only for more detailed description and are not to be construed as limiting the present invention in any way.
Embodiment one
With reference to Fig. 1, the processing method for robot perception data provided in this embodiment includes:

Step S1: receiving external perception data;

Step S2: determining the category of the sensor that received the external perception data;

Step S3: converting the external perception data into formatted data according to the category;

Step S4: processing the formatted data, and storing the processing results in a buffer by area, so that the robot's action-decision unit can make action decisions based on the processing results in the buffer.
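Steps S1 through S4 can be sketched end to end as one small pipeline. All helper names and the partitioning key are illustrative assumptions, not the patent's terminology.

```python
def classify(sensor_name):
    # S2: determine the sensor's category (names here are invented examples)
    return "raw_stream" if sensor_name in ("camera", "microphone") else "numeric"

def to_formatted(raw, kind):
    # S3: wrap the raw reading in a uniform record
    return {"kind": kind, "payload": raw}

def process(record):
    # S4a: stand-in for the real numeric/unstructured processing
    return {"kind": record["kind"], "result": record["payload"]}

def perception_pipeline(raw, sensor_name, buffer):
    """S1-S4 end to end: the result is stored by area (here keyed by 'kind')
    so the action-decision unit can later read it from the buffer."""
    kind = classify(sensor_name)                 # S2
    record = to_formatted(raw, kind)             # S3
    result = process(record)                     # S4a
    buffer.setdefault(kind, []).append(result)   # S4b: store by area
    return result
```

The buffer here is a plain dict keyed by area; the decision unit reads from it asynchronously, which is why storage by area rather than a single queue is assumed.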
The processing method for robot perception data provided by this embodiment of the present invention classifies the processing of collected data by sensor, converts the external perception data output by the sensors into formatted data before processing it, and stores the processing results in a buffer by area, so that the robot's action-decision unit can make action decisions based on the processing results in the buffer, thereby improving the robot's perceptual intelligence and cognitive intelligence.
Preferably, the formatted data is communication protocol data, and the header of the data portion of the communication protocol data includes: robot ID, protocol version, information type, and default bits.

The data field of the data portion of the communication protocol data includes: the external perception data transmitted in a certain order, and index identifiers for the external perception data.

It should be noted that the communication protocol data follows the format of the communication protocol used; for example, when communicating over TCP, the header of the protocol is the TCP header, and the data portion of the communication protocol data includes the data header and the data field.

Specifically, the information type includes the robot's daily information and interaction information; the default bits include the robot's sensor category identifiers; and the index identifiers include the number and size of the external perception data items.
In this embodiment, the robot may use a cloud server to process the formatted data. The message structure of the formatted data the robot sends to the cloud server is described as follows:

1) The communication mode and communication protocol used between the robot and the cloud server are not limited;

2) The header of the data portion contains the robot's basic information: robot ID, protocol version, information type (daily information or interaction information), and default bits (identifying the sensors the robot has);

3) The data field of the data portion carries the numeric information output by the sensors, transmitted in a certain order, together with index identifiers for the unstructured information (including the number and size of the unstructured items);

4) The unstructured information in the data portion (for example, images, audio, and video) is transmitted using currently common audio/video transmission modes and protocols; for example, after the cloud server receives the audio or video, it uses the index identifiers in the data field to extract the unstructured information contained in the data.
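Items 1) through 4) can be sketched as a message builder. This is a hedged illustration: the patent fixes only the roles of the fields, so all field names, the JSON encoding, and the function signature below are assumptions.

```python
import json

def build_message(robot_id, protocol_version, info_type, sensor_flags, readings, blobs):
    """Assemble one upload: a header with the robot's identity and sensor
    capability flags, and a data field with ordered numeric readings plus an
    index entry (sequence number and size) for each unstructured blob that is
    sent out of band over a separate audio/video channel."""
    header = {
        "robot_id": robot_id,
        "protocol_version": protocol_version,
        "info_type": info_type,        # "daily" or "interactive"
        "sensor_flags": sensor_flags,  # default bits: which sensors the robot has
    }
    data_field = {
        "readings": readings,          # numeric sensor outputs in transmission order
        "blob_index": [{"seq": i, "size": len(b)} for i, b in enumerate(blobs)],
    }
    return json.dumps({"header": header, "data": data_field})
```

Sending only an index for large blobs, while the blobs themselves travel over a streaming protocol, matches item 4): the server matches received audio/video back to the message via the index identifiers.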
More preferably, the sensor categories include: a first category of sensors that process the collected information before outputting it, and a second category of sensors that output the collected information directly; wherein

the first category of sensors includes, but is not limited to, gyroscopes, ambient temperature sensors, and ambient humidity sensors;

the second category of sensors includes, but is not limited to, cameras and microphones.

In this embodiment, a first-category sensor, after receiving relevant external information, can process the received information and then output the processed data. Examples include sensors that sense changes in the external environment: ambient temperature sensors, humidity sensors, infrared sensors, velocity sensors, acceleration sensors, gyroscopes, and so on.

A second-category sensor outputs the collected information directly, and that information needs to be processed separately. Examples include cameras and microphones.

When the robot does not have a given sensor, a default setting may be used: the information uploaded to the cloud server for that sensor is null.
It should be noted that the information collected by different sensors influences the robot's action decisions to different degrees; that is, the weights differ, and determining the degree of influence requires a good deal of accumulated experience. The information collected by the camera and microphone is the most important source for the robot's behavioral decisions and carries the highest weight. The information collected by sensors that sense the external environment (in practice, the data those sensors output) is a secondary source for the robot's behavioral decisions and carries a somewhat lower weight. The robot's own status information, which on the one hand can be used to analyze whether the robot's current state is normal and on the other hand can serve as an auxiliary source for behavioral decisions, carries the lowest weight.
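The three-tier weighting described above can be sketched as a weighted combination of per-source scores. The numeric weights below are illustrative placeholders; as the text notes, real weights would come from accumulated experience.

```python
# Illustrative weights encoding the stated priority: audio/visual input
# highest, external-environment readings next, the robot's own state lowest.
SOURCE_WEIGHTS = {"audio_visual": 0.6, "environment": 0.3, "self_state": 0.1}

def fuse_scores(scores):
    """Weighted combination of per-source decision scores (each in [0, 1]).
    Missing sources simply contribute nothing."""
    return sum(SOURCE_WEIGHTS[src] * s for src, s in scores.items())
```

Because the weights sum to 1, the fused score stays in [0, 1], which keeps it comparable across decisions regardless of which sources reported.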
More preferably, processing the formatted data includes:

when the data indicated by the formatted data is image information, extracting the object information from the image information, and obtaining the emotion information corresponding to the image information using an emotion classifier;

when the data indicated by the formatted data is video information, extracting the object information, action information, and object-change information from the video information, and obtaining the emotion information corresponding to the video information using an emotion classifier;

when the data indicated by the formatted data is audio information, converting the audio information into text, converting the text into a syntactic parse tree, and obtaining the emotion information corresponding to the audio information using an emotion classifier.

Further, the object information includes scenes, objects, and persons.
In this embodiment, first-category sensors can output numeric values directly, so no complex processing is needed for them; mostly basic numeric transformations suffice. The unstructured data collected by second-category sensors (images, video, audio, and the like) must be processed according to set rules before decision-usable information can be extracted.

In this embodiment, specifically, unstructured information is processed according to its category:

1) For image and video information, the objects contained in the image or video (including scenes, objects, persons, etc.) are first extracted from the image or video content; in particular, for video information, action information and object-change information must be obtained from successive frames;

2) For audio information, text is obtained through ASR (automatic speech recognition, which transcribes audio into the corresponding text), and the text is then converted into a syntactic parse tree;

3) For sentiment analysis of unstructured information, the emotions of humans and animals are first categorized according to psychological research; a sentiment classifier is then trained by machine learning, and the classifier is used to analyze the emotion contained in a specific item of unstructured information.
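To make step 3) concrete, here is a toy lexicon-based classifier standing in for the machine-learned emotion classifier the description calls for. The word lists and category names are invented examples; a real system would train a model on labeled emotion data.

```python
# Invented example lexicons; a trained classifier would replace this lookup.
POSITIVE = {"happy", "glad", "great"}
NEGATIVE = {"sad", "angry", "terrible"}

def classify_emotion(tokens):
    """Label a token sequence with a coarse emotion category by counting
    hits in each lexicon; ties and no hits yield 'neutral'."""
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

The same interface (tokens in, emotion label out) would apply whether the input tokens come from ASR output, image captions, or video annotations, which is presumably why a single classifier is reused across modalities.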
Embodiment two
With reference to Fig. 2, the processing system for robot perception data provided by an embodiment of the present invention includes:

a sensor unit 10 for receiving external perception data;

a category-determination unit 20 for determining the category of the sensor that received the external perception data;

a format-conversion unit 30 for converting the external perception data into formatted data according to the category;

a data-processing unit 40 for processing the formatted data and storing the processing results in a buffer by area, so that the robot's action-decision unit can make action decisions based on the processing results in the buffer.

The processing system for robot perception data provided by this embodiment of the present invention classifies the processing of collected data by sensor, converts the external perception data output by the sensors into formatted data before processing it, and stores the processing results in a buffer by area, so that the robot's action-decision unit can make action decisions based on the processing results in the buffer, thereby improving the robot's perceptual intelligence and cognitive intelligence.
Preferably, the data-processing unit 40 is a local data-processing device or a cloud server; the formatted data is communication protocol data, and the header of the data portion of the communication protocol data includes: robot ID, protocol version, information type, and default bits.

The data field of the data portion of the communication protocol data includes: the external perception data transmitted in a certain order, and index identifiers for the external perception data.

It should be noted that the communication protocol data follows the format of the communication protocol used; for example, when communicating over TCP, the header of the protocol is the TCP header, and the data portion of the communication protocol data includes the data header and the data field.

Specifically, the information type includes the robot's daily information and interaction information; the default bits include the robot's sensor category identifiers; and the index identifiers include the number and size of the external perception data items.
In this embodiment, when the data-processing unit 40 is a cloud server, the message structure of the formatted data the robot sends to the cloud server is described as follows:

1) The communication mode and communication protocol used between the robot and the cloud server are not limited;

2) The header of the data portion contains the robot's basic information: robot ID, protocol version, information type (daily information or interaction information), and default bits (identifying the sensors the robot has);

3) The data field of the data portion carries the numeric information output by the sensors, transmitted in a certain order, together with index identifiers for the unstructured information (including the number and size of the unstructured items);

4) The unstructured information in the data portion (for example, images, audio, and video) is transmitted using currently common audio/video transmission modes and protocols; for example, after the cloud server receives the audio or video, it uses the index identifiers in the data field to extract the unstructured information contained in the data.
More preferably, the sensor categories include: a first category of sensors that process the collected information before outputting it, and a second category of sensors that output the collected information directly; wherein

the first category of sensors includes, but is not limited to, gyroscopes, ambient temperature sensors, and ambient humidity sensors;

the second category of sensors includes, but is not limited to, cameras and microphones.

In this embodiment, a first-category sensor, after receiving relevant external information, can process the received information and then output the processed data. Examples include sensors that sense changes in the external environment: ambient temperature sensors, humidity sensors, infrared sensors, velocity sensors, acceleration sensors, gyroscopes, and so on.

A second-category sensor outputs the collected information directly, and that information needs to be processed separately. Examples include cameras and microphones.

When the robot does not have a given sensor, a default setting may be used: the information uploaded to the cloud server for that sensor is null.

It should be noted that the information collected by different sensors influences the robot's action decisions to different degrees; that is, the weights differ, and determining the degree of influence requires a good deal of accumulated experience. The information collected by the camera and microphone is the most important source for the robot's behavioral decisions and carries the highest weight. The information collected by sensors that sense the external environment (in practice, the data those sensors output) is a secondary source for the robot's behavioral decisions and carries a somewhat lower weight. The robot's own status information, which on the one hand can be used to analyze whether the robot's current state is normal and on the other hand can serve as an auxiliary source for behavioral decisions, carries the lowest weight.
More preferably, processing the formatted data includes:

when the data indicated by the formatted data is image information, extracting the object information from the image information, and obtaining the emotion information corresponding to the image information using an emotion classifier;

when the data indicated by the formatted data is video information, extracting the object information, action information, and object-change information from the video information, and obtaining the emotion information corresponding to the video information using an emotion classifier;

when the data indicated by the formatted data is audio information, converting the audio information into text, converting the text into a syntactic parse tree, and obtaining the emotion information corresponding to the audio information using an emotion classifier.

Further, the object information includes scenes, objects, and persons.
In this embodiment, first-category sensors can output numeric values directly, so the data-processing unit 40 need not perform complex processing on them; mostly basic numeric transformations suffice. The unstructured data collected by second-category sensors (images, video, audio, and the like) must be processed by the data-processing unit 40 (for example, a cloud server) according to set rules before decision-usable information can be extracted.

In this embodiment, specifically, unstructured information is processed according to its category:

1) For image and video information, the objects contained in the image or video (including scenes, objects, persons, etc.) are first extracted from the image or video content; in particular, for video information, action information and object-change information must be obtained from successive frames;

2) For audio information, text is obtained through ASR (automatic speech recognition, which transcribes audio into the corresponding text), and the text is then converted into a syntactic parse tree;

3) For sentiment analysis of unstructured information, the emotions of humans and animals are first categorized according to psychological research; a sentiment classifier is then trained by machine learning, and the classifier is used to analyze the emotion contained in a specific item of unstructured information.
In addition, the data-processing unit 40 stores processed results in the cache by area. After the results in a given partition have been used, they may be deleted; they may be stored in the robot's long-term memory because the information is important; or, when the information is insufficient, the raw information may need to be extracted and analyzed again as required by the decision module. Furthermore, it should be noted that the result data obtained by the data-processing unit 40 is intended for the robot's action-decision unit, and there is an interface between the action-decision unit and the data-processing unit 40. Through this interface the action-decision unit fetches data from the data-processing unit 40 and sends instructions and other operation sequences, including instructing the data-processing unit 40 to delete the used data in a given partition, to store it, or to extract and analyze the raw information again.
Embodiment three
With reference to Fig. 3, the processing method for robot perception data of an embodiment of the present invention can be implemented by a computer device. Fig. 3 shows the hardware architecture diagram of the computer device provided by an embodiment of the present invention.

The computer device implementing the processing method for robot perception data may include a processor 401 and a memory 402 storing computer program instructions.
Specifically, the processor 401 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.

The memory 402 may include mass storage for data or instructions. By way of example and not limitation, the memory 402 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of the above. Where appropriate, the memory 402 may include removable or non-removable (or fixed) media. Where appropriate, the memory 402 may be internal or external to the data-processing device. In a particular embodiment, the memory 402 is non-volatile solid-state memory. In a particular embodiment, the memory 402 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), flash memory, or a combination of two or more of the above.
The processor 401 implements any of the processing methods for robot perception data in the above embodiments by reading and executing the computer program instructions stored in the memory 402.

In one example, the computer device may also include a communication interface 403 and a bus 410. As shown in Fig. 3, the processor 401, the memory 402, and the communication interface 403 are connected via the bus 410 and communicate with one another through it.

The communication interface 403 is mainly used to implement communication among the units, apparatuses, and/or devices in embodiments of the present invention.
The bus 410 includes hardware, software, or both, and couples the components of the computer device to one another. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a VESA Local Bus (VLB), another suitable bus, or a combination of two or more of the above. Where appropriate, the bus 410 may include one or more buses. Although specific buses are described and illustrated in embodiments of the present invention, the present invention contemplates any suitable bus or interconnect.
Embodiment four
In addition, in combination with the processing method for robot perception data in the above embodiments, an embodiment of the present invention can provide a computer-readable storage medium for its implementation. Computer program instructions are stored on the computer-readable storage medium; when executed by a processor, these instructions implement any of the processing methods for robot perception data in the above embodiments.
It should be clear that the present invention is not limited to the specific configurations and processes described above and shown in the figures. For brevity, detailed descriptions of known methods are omitted here. In the above embodiments, several specific steps are described and illustrated as examples, but the method processes of the present invention are not limited to those specific steps; those skilled in the art, having understood the spirit of the present invention, can make various changes, modifications, and additions, or change the order of steps.
The functional blocks shown in the structural block diagrams described above can be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), appropriate firmware, plug-ins, function cards, and so on. When implemented in software, the elements of the present invention are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium, or transmitted over a transmission medium or communication link via a data signal carried in a carrier wave. A "machine-readable medium" may include any medium capable of storing or transmitting information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical discs, hard disks, fiber-optic media, radio-frequency (RF) links, and so on. Code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in the present invention describe certain methods or systems on the basis of a series of steps or devices. However, the present invention is not limited to the order of the above steps; that is, the steps may be performed in the order mentioned in the embodiments, in an order different from that in the embodiments, or with several steps performed simultaneously.
The above description covers only specific embodiments of the invention. It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here. It should be understood that the protection scope of the invention is not limited thereto; any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the invention, and such modifications or substitutions shall fall within the protection scope of the invention.
Although the invention has been described to a certain degree, it is apparent that appropriate variations of the various conditions may be made without departing from the spirit and scope of the invention. It is to be understood that the invention is not limited to the embodiments, but is accorded the scope of the claims, including equivalent substitutions of each element.
Claims (10)
1. A method for processing robot perception data, characterized in that the method comprises:
receiving external perception data;
determining the category of the sensor that receives the external perception data;
converting the external perception data into formatted data according to the category;
processing the formatted data, and storing processing result data into a buffer by area, for an action decision unit of the robot to make an action decision according to the processing result data in the buffer.
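The four steps recited in claim 1 can be sketched as a minimal, hypothetical pipeline. All names, field widths, and data structures below are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class FormattedData:
    sensor_category: str  # category of the receiving sensor (claim 1, step 2)
    payload: bytes        # the external perception data

def classify_sensor(sensor_name: str) -> str:
    # Claim 4 distinguishes sensors that pre-process their readings
    # (gyroscope, temperature, humidity) from those that output raw data.
    first_category = {"gyroscope", "temperature", "humidity"}
    return "first" if sensor_name in first_category else "second"

def process(data: FormattedData) -> dict:
    # Stand-in for the modality-specific processing of claim 5.
    return {"category": data.sensor_category, "size": len(data.payload)}

def handle_perception(sensor_name: str, raw: bytes, buffer: dict) -> None:
    category = classify_sensor(sensor_name)         # determine sensor category
    formatted = FormattedData(category, raw)        # convert to formatted data
    result = process(formatted)                     # process the formatted data
    buffer.setdefault(category, []).append(result)  # store result by area
```

An action decision unit would then read `buffer` by area; here `buffer` is modeled as a plain dict keyed by sensor category.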
2. The method for processing robot perception data according to claim 1, characterized in that the formatted data is communication protocol data, and the data header of the data portion of the communication protocol data comprises: a robot ID, a protocol version, an information type, and default bits;
the data field of the data portion of the communication protocol data comprises: the external perception data transmitted in a certain order, and an index identifier of the external perception data.
3. The method for processing robot perception data according to claim 2, characterized in that the information type comprises daily robot information and interaction information;
the default bits comprise a robot sensor category identifier;
the index identifier comprises the number and size of the external perception data.
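Claims 2 and 3 describe a header/data-field layout but fix neither byte widths nor encodings. A speculative packing, with field sizes chosen arbitrarily for illustration, might look like:

```python
import struct

# Hypothetical byte layout; the patent names the fields but not their widths.
HEADER_FMT = ">IHBB"  # robot ID, protocol version, information type, default bits
INDEX_FMT = ">HI"     # index identifier: number and size of each payload

def pack_packet(robot_id: int, version: int, info_type: int,
                sensor_category: int, payloads: list) -> bytes:
    # Default bits carry the sensor category identifier (claim 3).
    header = struct.pack(HEADER_FMT, robot_id, version, info_type, sensor_category)
    body = b""
    # Payloads are appended in order, each prefixed with its index identifier.
    for number, payload in enumerate(payloads):
        body += struct.pack(INDEX_FMT, number, len(payload)) + payload
    return header + body

def unpack_header(packet: bytes):
    return struct.unpack(HEADER_FMT, packet[:struct.calcsize(HEADER_FMT)])
```

With these assumed widths, the header is 8 bytes and each index prefix 6 bytes; a real implementation would pick widths to match the robot's transport.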
4. The method for processing robot perception data according to claim 1, characterized in that the category of the sensor comprises: a first category of sensors that process acquired information before outputting it, and a second category of sensors that output acquired information directly; wherein
the first category of sensors comprises a gyroscope, an ambient temperature sensor, and an ambient humidity sensor;
the second category of sensors comprises a camera and a microphone.
5. The method for processing robot perception data according to claim 1, characterized in that processing the formatted data comprises:
when the data pointed to by the formatted data is image information, extracting object information from the image information, and obtaining emotion information corresponding to the image information by using an emotion classifier;
when the data pointed to by the formatted data is video information, extracting object information, action information, and object change information from the video information, and obtaining emotion information corresponding to the video information by using an emotion classifier;
when the data pointed to by the formatted data is audio information, converting the audio information into text information, converting the text information into a syntactic analysis tree, and obtaining emotion information corresponding to the audio information by using an emotion classifier.
6. The method for processing robot perception data according to claim 5, characterized in that the object information comprises scenes, objects, and persons.
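Claims 5 and 6 amount to a dispatch on data modality. The sketch below uses stub functions in place of the patent's (unspecified) extraction models and emotion classifier; every function name here is an assumption:

```python
# All functions are illustrative stubs; the patent does not specify the models.
def classify_emotion(features) -> str:
    return "neutral"  # stand-in for the emotion classifier

def extract_objects(data) -> dict:
    # Claim 6: object information covers scenes, objects, and persons.
    return {"scene": None, "objects": [], "persons": []}

def speech_to_text(audio) -> str:
    return str(audio)  # stand-in for speech recognition

def parse_syntax(text: str):
    return ("S", text)  # stand-in for a syntactic analysis tree

def process_formatted(kind: str, data) -> dict:
    if kind == "image":
        return {"objects": extract_objects(data),
                "emotion": classify_emotion(data)}
    if kind == "video":
        # Video additionally yields action and object-change information.
        return {"objects": extract_objects(data), "actions": [], "changes": [],
                "emotion": classify_emotion(data)}
    if kind == "audio":
        text = speech_to_text(data)
        return {"text": text, "tree": parse_syntax(text),
                "emotion": classify_emotion(text)}
    raise ValueError(f"unknown data kind: {kind}")
```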
7. A system for processing robot perception data, characterized in that the system comprises:
a sensor unit for receiving external perception data;
a category determination unit for determining the category of the sensor that receives the external perception data;
a format conversion unit for converting the external perception data into formatted data according to the category;
a data processing unit for processing the formatted data and storing processing result data into a buffer by area, for an action decision unit of the robot to make an action decision according to the processing result data in the buffer.
8. The system for processing robot perception data according to claim 7, characterized in that the data processing unit is a local data processing device or a cloud server.
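Claim 7's area-partitioned buffer and action decision unit, with the local-or-cloud processing of claim 8 abstracted away, might be modeled as follows. The class names and the toy decision policy are assumptions, not the patent's design:

```python
from collections import defaultdict, deque

class ZonedBuffer:
    """Buffer partitioned by area, as in claim 7 (results stored 'by area')."""
    def __init__(self, maxlen: int = 100):
        # Each area keeps a bounded history of processing results.
        self._areas = defaultdict(lambda: deque(maxlen=maxlen))

    def store(self, area: str, result: dict) -> None:
        self._areas[area].append(result)

    def latest(self, area: str):
        zone = self._areas[area]
        return zone[-1] if zone else None

class ActionDecisionUnit:
    """Reads processing results from the buffer and picks an action."""
    def decide(self, buffer: ZonedBuffer) -> str:
        # Toy policy: approach when the vision area holds a fresh result.
        return "approach" if buffer.latest("vision") else "idle"
```

Whether `store` runs on a local processing device or a cloud server (claim 8) does not change this interface; only the transport behind it differs.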
9. A computer device, characterized in that it comprises: at least one processor, at least one memory, and computer program instructions stored in the memory; when the computer program instructions are executed by the processor, the method according to any one of claims 1-6 is implemented.
10. A computer readable storage medium having computer program instructions stored thereon, characterized in that when the computer program instructions are executed by a processor, the method according to any one of claims 1-6 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810247150.0A CN108345251B (en) | 2018-03-23 | 2018-03-23 | Method, system, device and medium for processing robot sensing data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108345251A true CN108345251A (en) | 2018-07-31 |
CN108345251B CN108345251B (en) | 2020-10-13 |
Family
ID=62956787
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810247150.0A Active CN108345251B (en) | 2018-03-23 | 2018-03-23 | Method, system, device and medium for processing robot sensing data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108345251B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109275048A (en) * | 2018-11-21 | 2019-01-25 | 北京猎户星空科技有限公司 | Data processing method, apparatus, device and medium applied to a robot |
CN111267096A (en) * | 2020-01-19 | 2020-06-12 | 广东工业大学 | Robot translation skill training method and device, electronic equipment and storage medium |
CN112287181A (en) * | 2020-10-29 | 2021-01-29 | 上海对外经贸大学 | Method for representing information and information storage relation and control system based on method |
CN113409449A (en) * | 2021-06-22 | 2021-09-17 | 杭州群核信息技术有限公司 | Method and device for generating robot simulation scene based on three-dimensional scene data and storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1380846A (en) * | 2000-03-31 | 2002-11-20 | 索尼公司 | Robot device, robot device action control method, external force detecting device and method |
WO2014151875A1 (en) * | 2013-03-14 | 2014-09-25 | Ebay Inc. | Utilizing an intra-body area network |
CN104360633A (en) * | 2014-10-10 | 2015-02-18 | 南开大学 | Human-computer interaction system for service robot |
CN105116785A (en) * | 2015-06-26 | 2015-12-02 | 北京航空航天大学 | Multi-platform remote robot general control system |
CN106325113A (en) * | 2015-06-26 | 2017-01-11 | 北京贝虎机器人技术有限公司 | Robot control engine and system |
CN106325228A (en) * | 2015-06-26 | 2017-01-11 | 北京贝虎机器人技术有限公司 | Method and device for generating control data of robot |
US20170019317A1 (en) * | 2012-02-09 | 2017-01-19 | Rockwell Automation Technologies, Inc. | Cloud-based operator interface for industrial automation |
CN106808482A (en) * | 2015-12-02 | 2017-06-09 | 中国科学院沈阳自动化研究所 | Multi-sensor system and inspection method for an inspection robot |
CN107053134A (en) * | 2017-03-15 | 2017-08-18 | 谢立波 | Intelligent emergency rescue robot and intelligent control method thereof |
CN107272607A (en) * | 2017-05-11 | 2017-10-20 | 上海斐讯数据通信技术有限公司 | A kind of intelligent home control system and method |
Also Published As
Publication number | Publication date |
---|---|
CN108345251B (en) | 2020-10-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108345251A (en) | Processing method, system, equipment and the medium of robot perception data | |
US10565983B2 (en) | Artificial intelligence-based acoustic model training method and apparatus, device and storage medium | |
US20190171904A1 (en) | Method and apparatus for training fine-grained image recognition model, fine-grained image recognition method and apparatus, and storage mediums | |
CN107818077A (en) | A kind of sensitive content recognition methods and device | |
CN107690659A (en) | A kind of image identification system and image-recognizing method | |
CN109473119B (en) | Acoustic target event monitoring method | |
CN108280542A (en) | A kind of optimization method, medium and the equipment of user's portrait model | |
CN109978034B (en) | Sound scene identification method based on data enhancement | |
CN108172213A (en) | Tender asthma audio identification methods, device, equipment and computer-readable medium | |
CN108229481B (en) | Screen content analysis method and device, computing equipment and storage medium | |
CN110620760A (en) | FlexRay bus fusion intrusion detection method and detection device for SVM (support vector machine) and Bayesian network | |
CN112434178A (en) | Image classification method and device, electronic equipment and storage medium | |
CN110168527B (en) | Information processing device, information processing method, and information processing program | |
CN107910006A (en) | Audio recognition method, device and multiple source speech differentiation identifying system | |
CN111385659A (en) | Video recommendation method, device, equipment and storage medium | |
CN115631482B (en) | Driving perception information acquisition method and device, electronic equipment and readable medium | |
CN110889717A (en) | Method and device for filtering advertisement content in text, electronic equipment and storage medium | |
CN113688953B (en) | Industrial control signal classification method, device and medium based on multilayer GAN network | |
CN115622787A (en) | Abnormal flow detection method and device, electronic equipment and storage medium | |
CN110380801A (en) | The method that collaborative sensing algorithm and more USRP based on LSTM are realized | |
CN113347637B (en) | Radio Frequency (RF) fingerprint identification method and device based on embedded wireless equipment | |
CN107283429A (en) | Control method, device, system and terminal based on artificial intelligence | |
CN112541542B (en) | Method and device for processing multi-classification sample data and computer readable storage medium | |
CN112464857A (en) | Video classification model training and video classification method, device, medium and equipment | |
CN115632801A (en) | Method and device for detecting malicious traffic and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: Room 301, Building 39, 239 Renmin Road, Gusu District, Suzhou City, Jiangsu Province, 215000 Applicant after: Suzhou Dogweed Intelligent Technology Co., Ltd. Address before: 518000 Dongfang Science and Technology Building 1307-09, 16 Keyuan Road, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province Applicant before: Shenzhen green bristlegrass intelligence Science and Technology Ltd. |
GR01 | Patent grant | ||
PP01 | Preservation of patent right | ||
Effective date of registration: 20220228 Granted publication date: 20201013 |