WO2017215297A1 - Cloud interactive system, multicognitive intelligent robot of same, and cognitive interaction method therefor - Google Patents


Info

Publication number
WO2017215297A1
Authority
WO
WIPO (PCT)
Prior art keywords
recognition
signal
cloud
pressure
intelligent robot
Prior art date
Application number
PCT/CN2017/076274
Other languages
French (fr)
Chinese (zh)
Inventor
刘若鹏 (Liu Ruopeng)
舒良轩 (Shu Liangxuan)
Original Assignee
深圳光启合众科技有限公司 (Shenzhen Kuang-Chi Hezhong Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 深圳光启合众科技有限公司 (Shenzhen Kuang-Chi Hezhong Technology Co., Ltd.)
Publication of WO2017215297A1 publication Critical patent/WO2017215297A1/en


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means

Definitions

  • the invention relates to a robot, and in particular to a cloud interactive system, a multi-sense intelligent robot thereof, and a perceptual interaction method therefor.
  • the traditional intelligent electronic pet is a virtual pet running on a smart platform. This type of electronic pet lacks realism, and the user can only perceive its presence in the virtual world through a display device. There is also a class of toy electronic pets with interactive functions; although they are more lifelike, they are constrained by processing capability and other factors, so the intelligence they display is often limited.
  • family companion robots can interact with humans in simple interactions, such as imitation of movements and imitation of sounds, but their interaction behavior is far from that of real pets.
  • the ability of the robot to perceive complex external parameters and to interact and process accordingly is relatively weak.
  • the technical problem to be solved by the present invention is to provide a cloud interactive system and a multi-perceptive intelligent robot and a sensing interaction method thereof, which have multiple sensing capabilities and have stronger processing and interaction capabilities.
  • the invention provides a multi-sense intelligent robot with cloud interaction function, which cooperates with an external cloud server.
  • the intelligent robot includes: a password recognition processing unit for performing local password recognition on an externally input voice signal and generating a password recognition processing result; a local image recognition processing unit for performing local image recognition on an externally input scene image and generating a local image recognition result; a pressure signal recognition processing unit for identifying and processing an external pressure signal and generating a pressure-aware emotion signal; a cloud recognition unit for sending the voice signal to the cloud server, where the cloud server performs at least one of cloud speech recognition and cloud semantic understanding, and receiving the cloud speech recognition processing result sent by the cloud server, and for sending the scene image to the cloud server, where the cloud server performs face recognition, and receiving the cloud face recognition result sent by the cloud server; and a controller for making an interaction decision of the intelligent robot according to at least one of the password recognition processing result and the cloud speech recognition processing result, at least one of the local image recognition result and the cloud face recognition result, and/or the pressure-aware emotion signal, and for controlling execution of the interaction decision.
  • the multi-sense intelligent robot further includes an identification selecting unit configured to judge the externally input voice signal, thereby selecting whether to transmit the externally input voice signal to the password recognition processing unit or to the cloud recognition unit, and/or to judge the externally input scene image, thereby selecting whether to transmit the externally input scene image to the local image recognition processing unit or to the cloud recognition unit.
  • the multi-sense intelligent robot further includes a voice collection unit for obtaining an externally input voice signal.
  • the voice collection unit is a microphone, and the number of the microphones is two, which are respectively installed at the left and right ears of the intelligent robot.
  • the multi-sense intelligent robot further includes a preset password storage unit, where the preset password storage unit is configured to store preset password data; and the password recognition processing unit is configured to The password data is set to perform local password recognition on the voice signal and generate a password recognition processing result.
  • the multi-sense intelligent robot further includes a voiceprint recognition unit, configured to perform identity verification according to the pre-stored voiceprint data before performing the recognition process on the voice signal.
  • the multi-sense intelligent robot further includes an image acquisition unit for capturing more than one scene image of the external input.
  • the multi-sense intelligent robot further includes a face image acquiring unit configured to acquire a face image having the recognized feature points from the externally input scene image; the local image recognition processing unit is configured to perform local image recognition on the face image having the recognized feature points and generate the local image recognition result; and the cloud recognition unit is configured to send the face image having the recognized feature points to the cloud server for cloud face recognition.
  • the face image acquiring unit is further configured to: after acquiring a face image having the recognized feature point from the externally input scene image, excluding the face image not having the recognized feature point.
  • the multi-sense intelligent robot further includes a preset image storage unit configured to store preset image data; the local image recognition processing unit is configured to use the preset image data to The externally input scene image performs local image recognition and generates a local image recognition result.
  • the multi-sense intelligent robot further includes a pressure signal acquisition unit for acquiring an external pressure signal.
  • the pressure signal acquisition unit is a resistive pressure sensor.
  • the pressure signal acquisition unit includes a pressure sensing chip array distributed on the surface of the intelligent robot and an analog-to-digital conversion circuit connected to the pressure sensing chip array; the pressure sensing chip array senses pressure changes on the surface of the intelligent robot and converts them into a pressure analog signal, and the analog-to-digital conversion circuit converts the pressure analog signal into a pressure digital signal.
  • the pressure signal recognition processing unit includes: a pressure type determining unit configured to calculate a pressure change rate of the external pressure signal and determine a type of the external pressure signal by comparing the pressure change rate with a preset change threshold; a pressure position determining unit configured to determine a pressure generation position based on the external pressure signal; and a pressure-aware emotion signal generating unit configured to compare the pressure generation position and the type of the external pressure signal with a preset mapping list and generate a pressure-aware emotion signal corresponding to the pressure generation position and the type of the external pressure signal.
  • the pressure signal recognition processing unit further includes a data storage unit, connected respectively to the pressure type determining unit and the pressure-aware emotion signal generating unit, for storing the preset change threshold and the preset mapping list.
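The mapping described above, from a (pressure generation position, pressure type) pair to a pressure-aware emotion signal, can be sketched as a simple lookup table. The positions, types, and emotion labels below are illustrative assumptions, not values from the patent:

```python
# Hypothetical preset mapping list: (pressure position, pressure type) -> emotion.
# All keys and values here are illustrative assumptions, not the patent's data.
PRESET_MAPPING = {
    ("head", "stroke"): "content",
    ("head", "light_tap"): "curious",
    ("torso", "hard_tap"): "startled",
}

def emotion_signal(position, pressure_type, mapping=PRESET_MAPPING):
    """Generate the pressure-aware emotion signal by comparing the pressure
    generation position and pressure type against the preset mapping list."""
    return mapping.get((position, pressure_type), "neutral")
```

Unmapped combinations fall through to a neutral default, one plausible way to handle touches the mapping list does not cover.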
  • the multi-sense intelligent robot further includes a motion sensing unit coupled to the controller, configured to sense a motion state of the smart robot to generate a motion state parameter.
  • the motion sensing unit is a gravity acceleration sensor, a gyroscope or a tilt sensor mounted on the torso of the smart robot.
  • the multi-sense intelligent robot further includes a network determining unit, configured to determine a connection state of the smart robot and the cloud server, and generate a connection state according to the connection state. The result of the network judgment.
  • the intelligent robot and the cloud server are connected through a wireless network interface.
  • the controller is configured with an impact model; the password recognition processing result, the cloud speech recognition processing result, the local image recognition result, the cloud face recognition result, the pressure-aware emotion signal, and the motion state parameter are input parameters of the impact model, and the impact model outputs the interaction decision according to these input parameters.
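As a toy sketch of how such an impact model might combine its input parameters into one interaction decision (the precedence rules and labels here are assumptions for illustration, not the patent's actual model):

```python
def impact_model(inputs):
    """Combine recognition results and sensed state into an interaction
    decision. `inputs` maps input-parameter names to recognition outputs;
    the priority ordering below is purely illustrative."""
    if inputs.get("pressure_emotion") == "startled":
        return "back_away"                # tactile input takes priority
    if inputs.get("cloud_face") == "owner" or inputs.get("local_image") == "owner":
        return "greet_owner"              # then visual recognition
    if inputs.get("password") or inputs.get("cloud_speech"):
        return "respond_to_command"       # then voice
    return "idle"                         # no salient input
```

A real impact model would likely weight and blend these inputs rather than apply a strict priority chain, but the interface is the same: many sensed parameters in, one decision out.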
  • the controller activates the intelligent robot in response to a start command.
  • the startup instruction is included in a voice signal
  • the password recognition processing unit or the cloud recognition unit is further configured to identify a startup instruction in the voice signal
  • the startup instruction is included in an external pressure signal
  • the pressure signal recognition processing unit is further configured to identify a start command in the external pressure signal
  • the start command is included in a wireless signal
  • the smart robot further includes a wireless communication unit and a wireless signal identification unit; the wireless communication unit is configured to receive an externally transmitted wireless signal, and the wireless signal identification unit is configured to identify a startup instruction in the wireless signal.
  • the invention also provides a perceptual interaction method for a multi-sense intelligent robot with cloud interaction function, which includes: performing local password recognition on an externally input voice signal and generating a password recognition processing result, or sending the voice signal to the cloud server for at least one of cloud speech recognition and cloud semantic understanding by the cloud server and receiving the cloud speech recognition processing result sent by the cloud server; performing local image recognition on an externally input scene image and generating a local image recognition result, or sending the scene image to the cloud server for face recognition and receiving the cloud face recognition result returned by the cloud server; identifying and processing an external pressure signal and generating a pressure-aware emotion signal; and making an interaction decision of the intelligent robot according to at least one of the password recognition processing result and the cloud speech recognition processing result, at least one of the local image recognition result and the cloud face recognition result, and/or the pressure-aware emotion signal, and controlling execution of the interaction decision.
  • the method for perceptual interaction further includes: judging the externally input voice signal to select whether to perform local password recognition on it or to transmit it to the cloud server, and/or judging the externally input scene image to select whether to perform local image recognition on it or to transmit it to the cloud server.
  • the method for perceptual interaction further includes obtaining an externally input voice signal.
  • the method for perceptual interaction further includes storing preset password data; the step of performing local password recognition on the externally input voice signal and generating a password recognition processing result is to perform local password recognition on the voice signal according to the preset password data and generate the password recognition processing result.
  • before performing local password recognition on the externally input voice signal and generating the password recognition processing result, or sending the voice signal to the cloud server, the method further includes authenticating the voice signal according to pre-stored voiceprint data.
  • the method for perceptual interaction further includes capturing more than one scene image of an external input.
  • the method for perceptual interaction further includes acquiring a face image having the recognized feature points from the more than one scene image; the step of performing local image recognition on the externally input scene image and generating a local image recognition result is to perform local image recognition on the face image having the recognized feature points and generate the local image recognition result; and the step of sending the scene image to the cloud server for face recognition is to send the face image having the recognized feature points to the cloud server for cloud face recognition.
  • the step of acquiring the face image having the recognized feature point from the externally input scene image further includes excluding the face image not having the recognized feature point.
  • the method for perceptual interaction further includes storing preset image data; the step of performing local image recognition on the externally input scene image and generating a local image recognition result is to perform local image recognition on the face image having the recognized feature points according to the preset image data and generate the local image recognition result.
  • before sending the voice signal or the scene image to the cloud server, the method further includes determining whether the network status is normal, and sending the voice signal or the scene image to the cloud server only when the network is normal.
  • the method for perceptual interaction further includes acquiring an external pressure signal.
  • the step of identifying and processing the external pressure signal and generating the pressure-aware emotion signal includes: calculating a pressure change rate of the external pressure signal and determining a type of the external pressure signal by comparing the pressure change rate with a preset change threshold; determining a pressure generation position based on the external pressure signal; and comparing the pressure generation position and the type of the external pressure signal with a preset mapping list and generating a pressure-aware emotion signal corresponding to the pressure generation position and the type of the external pressure signal.
  • the method for perceptual interaction further includes storing a preset change threshold and a preset mapping list.
  • if the pressure change rate is greater than a preset first change threshold, the type of the external pressure signal is determined to be a tap; otherwise, the type of the external pressure signal is determined to be a stroke.
  • determining the type of the external pressure signal to be a tap includes: if the pressure change rate is greater than the first change threshold and less than or equal to a second change threshold, determining the type of the pressure signal to be a light tap; and if the pressure change rate is greater than the second change threshold, determining the type of the pressure signal to be a hard tap.
  • calculating the pressure change rate of the external pressure signal is: calculating a duration value of the external pressure signal, selecting the digital signal corresponding to a preset time period within that duration, and calculating the pressure change rate according to the preset time period and the digital signal corresponding to the preset time period.
  • the preset time period is 0.5-1.5 seconds.
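The change-rate computation and two-threshold classification described above might be sketched as follows; the threshold values and the simple first-to-last difference over the window are assumptions for illustration:

```python
def pressure_change_rate(samples, period_s=1.0):
    """Approximate the pressure change rate over a preset time period
    (0.5-1.5 s per the text) as the net change in the sampled digital
    signal divided by the period length."""
    return abs(samples[-1] - samples[0]) / period_s

def classify_pressure(rate, first_threshold=5.0, second_threshold=20.0):
    """Stroke if the rate is at or below the first change threshold;
    otherwise a light or hard tap depending on the second threshold.
    Both threshold values are illustrative assumptions."""
    if rate <= first_threshold:
        return "stroke"
    if rate <= second_threshold:
        return "light_tap"
    return "hard_tap"
```

A slow press yields a small rate and classifies as a stroke; a sharp knock yields a large rate and classifies as a tap, split further by the second threshold.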
  • the method for perceptual interaction further includes sensing a motion state of the smart robot to generate a motion state parameter.
  • the interactive decision includes an emotional expression location and an emotional expression instruction.
  • the emotion expression part includes an upper limb, a lower limb, a trunk, a head, a face, and/or a mouth of the intelligent robot;
  • the emotion expression instruction includes executing a corresponding action instruction, playing a corresponding prompt voice, and/or displaying corresponding prompt information.
  • the action instruction comprises a mechanical action command and/or a facial expression command.
  • the mechanical action command includes action type information, action amplitude information, action frequency information, and/or action duration information corresponding to the emotion expression portion.
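The interaction decision described in these claims could be modeled as a small data structure; the field names and example values below are assumptions for illustration, not the patent's actual format:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MechanicalAction:
    """One mechanical action command for an emotion expression part."""
    action_type: str            # action type information, e.g. "wave"
    amplitude: float = 1.0      # action amplitude information
    frequency_hz: float = 1.0   # action frequency information
    duration_s: float = 1.0     # action duration information

@dataclass
class InteractionDecision:
    """An emotion expression part plus the instructions to execute on it."""
    body_part: str                                 # e.g. "upper_limb", "face"
    actions: List[MechanicalAction] = field(default_factory=list)
    prompt_voice: Optional[str] = None             # prompt voice to play
    prompt_text: Optional[str] = None              # prompt information to display
```

For example, waving an upper limb while greeting might be `InteractionDecision("upper_limb", [MechanicalAction("wave", duration_s=2.0)], prompt_voice="hello")`.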
  • the method for perceptual interaction further includes activating the smart robot in response to a startup command; the startup command is included in a voice signal, in an external pressure signal, or in a wireless signal.
  • the invention also provides a cloud interaction system, comprising the above multi-sense intelligent robot with cloud interaction function and a cloud server, the intelligent robot communicating wirelessly with the cloud server.
  • the invention has the following significant advantages: by configuring a plurality of sensing devices, the robot comprehensively acquires environmental signals and makes interaction decisions, thereby improving its interaction capability.
  • the cloud recognition unit communicates with external processing resources, which improves the processing power of the robot and makes more complex interactive decisions possible.
  • FIG. 1 is a system block diagram of a multi-sense intelligent robot with cloud interaction function according to a first embodiment of the present invention.
  • FIG. 2 is a system block diagram of a multi-sense intelligent robot with cloud interaction function according to a second embodiment of the present invention.
  • FIG. 3 is a flow chart of a method for perceptual interaction of a multi-sense intelligent robot with cloud interaction function according to an embodiment of the invention.
  • FIG. 4 is a flowchart of a cloud voice recognition method according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a cloud speech recognition method according to another embodiment of the present invention.
  • FIG. 6 is a flow chart of a cloud speech recognition method according to another embodiment of the present invention.
  • Fig. 7 is a schematic view showing the use of a pressure sensor according to an embodiment of the present invention.
  • FIG. 8 is a flow chart of a haptic sensing method according to an embodiment of the present invention.
  • FIG. 9 is a flow chart of a haptic sensing method according to another embodiment of the present invention.
  • Figure 10 is a schematic diagram of an impact model in accordance with an embodiment of the present invention.
  • FIG. 11 is a flow chart of a face recognition method according to an embodiment of the present invention.
  • FIG. 12 is a flowchart of a face recognition method according to another embodiment of the present invention.
  • FIG. 13 is a flowchart of a face recognition method according to another embodiment of the present invention.
  • FIG. 14 is a block diagram showing the structure of a pressure signal processing unit of the multi-sense intelligent robot shown in FIG. 1.
  • Embodiments of the present invention describe a multi-sense intelligent robot with cloud interaction function and an interactive method thereof, and the method and system are particularly suitable for a home companion robot. It will of course be understood that the method and system are also applicable to other robots with high interaction requirements, such as commercial service robots.
  • the robot's processing and decision-making ability is improved by giving the robot multiple sensing functions and performing interactive decision-making and motion control based on these sensing functions.
  • the intelligent robot 100 of the present embodiment includes a voice collection unit 101, an image acquisition unit 102, a pressure signal acquisition unit 103, a motion sensing unit 104, a password recognition processing unit 105, a local image recognition processing unit 106, a pressure signal recognition processing unit, a controller 108, an identification selection unit 109, a cloud recognition unit 110, and a power management unit 111.
  • the various components can be connected to controller 108 as needed.
  • the cloud identification unit 110 is configured to communicate with the external cloud server 200.
  • the power management unit 111 is for supplying power to the entire smart robot 100.
  • the power management unit 111 provides a stable, matched power supply to each unit through DC-DC modules. The power management unit 111 can also be configured with an overload protection circuit to avoid overloading the motion actuators.
  • the cloud identification unit 110 can communicate with the cloud server 200 in a variety of ways.
  • the cloud server 200 can be a cluster of one server or multiple servers, and the manufacturer of the smart robot 100 can set up a cloud server or obtain a service interface provided by a network provider.
  • the cloud identification unit 110 can communicate with the cloud server through a wireless local area network that accesses the Internet. Alternatively, the cloud identification unit 110 can also communicate with the cloud server via the mobile internet.
  • the voice collection unit 101 is configured to collect voice signals from the environment.
  • An embodiment of the voice acquisition unit 101 is a microphone that can acquire voice signals.
  • the microphone can be mounted at the left and right ears of the head of the smart robot 100.
  • the two microphones of the two ears are used as a voice input source, and the collected sound information is converted into a voice signal in the form of an electrical signal.
  • the voice signal is audio information of a natural language, and needs to be subjected to noise reduction, filtering, and the like.
  • a microphone employing an intelligent digital array noise canceling pickup having two noise reduction modes reduces noise by up to 45 dB.
  • the microphones are respectively placed at the ears of the penguin-shaped intelligent robot, and the collected sound signals are acquired from dispersed positions to ensure the accuracy and integrity of the acquired audio signals.
  • the voice collection unit 101 may also have a voice pre-processing function.
  • the externally input voice signal may be affected by factors such as environment, scene, and relative position, so the audio information needs preprocessing such as modulation, demodulation, voice noise reduction, and audio amplification. Among these, voice noise reduction can use a DSP noise reduction algorithm, which can remove background noise, suppress external vocal interference, suppress echo, and suppress reverberation.
  • the DSP noise reduction algorithm has a strong ability to suppress both steady-state and non-steady-state noise as well as mechanical noise.
  • the combination of dual microphones and voice preprocessing removes nearly all noise while preserving the clarity and naturalness of normal speech, with no delay in the output.
  • the preprocessed speech signal is transmitted through the harness to the identification selection unit 109 located in the intelligent robot cavity for processing.
  • the voice signal contains various passwords of interest to the robot, for example a password calling the robot, or passwords instructing the robot to run, jump, and the like.
  • the identification selection unit 109 receives the speech signal and determines an appropriate speech recognition unit in accordance with a predetermined policy.
  • speech recognition refers to extracting text content through a series of sound algorithms according to the input sound signal.
  • the two voice recognition modes provided in the embodiment of the present invention include local identification and cloud recognition. After the identification selection unit 109 determines a specific voice recognition mode, the voice signal is sent to the corresponding identification unit, and the processing result is received.
  • local recognition sends the voice signal to the password recognition processing unit 105.
  • the cloud recognition is sent to the cloud server 200 by the cloud recognition unit 110 and at least one of cloud speech recognition and cloud semantic understanding is performed by the cloud server 200, and the cloud speech recognition processing result sent by the cloud server 200 is received.
  • the identification selection unit 109 can set a plurality of types of predetermined policies, for example, designating an identification unit in a voice signal, or performing local recognition by default, and then performing cloud recognition, or vice versa.
  • the choice of strategy can reduce the time of useless identification and improve the efficiency of intelligent robots. For example, in general, the processing efficiency of local recognition is higher than the processing efficiency of cloud recognition. Therefore, the voice signal is usually locally identified and then cloud-recognized.
  • the identification selection unit 109 determines whether to transmit a voice signal to the cloud server 200 for cloud recognition based on the password recognition processing result. Further, the identification selecting unit 109 determines whether the voice signal is successfully recognized by the local password according to the result of the password recognition processing, and if so, performs subsequent processing, for example, responding to the password; if not, transmitting the voice signal to the cloud server 200 for cloud identification. In another example, the identification selection unit 109 determines whether to perform local password recognition of the speech signal based on the result of the cloud speech recognition processing.
  • the identification selecting unit 109 determines whether the voice signal is successfully recognized by the cloud according to the cloud voice recognition processing result, and if so, performs subsequent processing, for example, responding to the password, and if not, the voice signal is locally password-recognized.
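A minimal sketch of this local-first policy with cloud fallback; the function names are stand-ins for the password recognition processing unit and the cloud recognition unit, and reporting "no match" as `None` is an assumption for illustration:

```python
def recognize(voice_signal, local_recognize, cloud_recognize):
    """Try local password recognition first; fall back to cloud recognition
    only when the local result reports no match (None here)."""
    result = local_recognize(voice_signal)
    if result is not None:
        return ("local", result)          # respond to the password locally
    return ("cloud", cloud_recognize(voice_signal))
```

Swapping the two arguments gives the reverse, cloud-first policy that the text also allows; the selection unit's "predetermined policy" is just the choice of ordering.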
  • the identification selection unit 109 can perform the above selection operation autonomously. In another embodiment, the identification selection unit 109 can perform the above selection operation under the control of the controller 108.
  • the password recognition processing unit 105 executes locally, reads the voice signal from the controller 108, compares the predefined password data with the voice signal, and executes an appropriate processing module based on the comparison result.
  • the password recognition processing unit 105 also returns the recognition processing result to the controller 108.
  • the predefined password data can be understood as a series of voice signals stored locally, and the processing modules of the voice signals are integrated in the password recognition processing unit 105.
  • these processing modules are implemented in software or in circuit form. For example, the greeting password "Hello" corresponds to a Q&A module that gives the answer "Hello." Of course, these processing modules can be integrated or implemented separately.
  • the exemplary description herein is not intended to limit the invention itself.
  • the intelligent robot may further include a preset password storage unit 115 for storing preset password data.
  • the password recognition processing unit 105 can perform local password recognition on the voice signal according to the preset password data and generate a password recognition processing result.
  • Cloud recognition can be one of cloud speech recognition and cloud semantic understanding or a combination of the two, and the cloud processing performs corresponding processing according to the extracted language information.
  • many Internet companies provide online cloud function services such as online speech recognition and semantic understanding; by accessing the APIs these companies provide, the corresponding services can be obtained. For example, if a voice signal "Beijing-Hankou flight inquiry" is sent to an online flight service provider, the provider performs speech recognition, speech analysis, semantic understanding, and so on to obtain the logical meaning of the voice signal, returns the current Beijing-Hankou flight information according to that meaning, and the cloud speech recognition processing result is returned to the controller 108.
  • the smart robot 100 can be normally in a standby or hibernation state, waiting for the user's activation (eg, a voice call).
  • the voice collection unit 101 collects a voice signal
  • the password recognition processing unit 105 or the cloud recognition unit 110 can recognize a startup command in the voice signal and transmit it to the controller 108, and the controller 108 responds to the startup command,
  • the intelligent robot 100 starts working.
  • the controller 108 can cause the intelligent robot 100 to start working under other conditions.
  • the controller 108 causes the smart robot 100 to start working in response to the user's on/off button.
  • the startup command can also be included in the wireless signal.
  • the intelligent robot includes a wireless communication unit for receiving an externally transmitted wireless signal and a wireless signal recognition unit (not shown) for identifying a startup command in the wireless signal.
  • FIG. 2 is a system block diagram of an intelligent robot with cloud interactive function according to a second embodiment of the present invention.
  • compared with the intelligent robot structure shown in FIG. 1, the smart robot shown in FIG. 2 adds a voiceprint recognition unit 113 and a network determining unit 114.
  • the voiceprint recognition unit 113 can be connected to the voice collection unit 101 and the controller 108, and is configured to verify the identity of the person producing the voice signal according to pre-stored voiceprint data, wherein the voiceprint data can be stored locally or in the cloud (e.g., on the cloud server 200).
  • the voiceprint recognition allows the intelligent robot to respond only to the sound signals of specific people, thereby increasing the safety of the intelligent robot.
• the network judging unit 114, connected between the controller 108 and the cloud recognition unit 110, determines the connection state between the smart robot 100 and the cloud server 200 and generates a network judgment result based on that state. Accordingly, before a voice signal is sent to the cloud server 200 for cloud recognition, the current network state is obtained first, and the voice signal is sent to the cloud server 200 for recognition only when the judgment result indicates that the network is normal.
• existing network connections may be wireless or wired. Because an intelligent robot needs to move, the preferred method is a wireless connection to the Internet, for example through Wi-Fi or Bluetooth.
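The network judgment step can be sketched as a simple reachability probe before routing a signal to cloud or local recognition. The host name, port, and recognizer callables below are illustrative placeholders, not values from the patent.

```python
import socket

def network_ok(host="cloud.example.com", port=443, timeout=2.0):
    """Return True when a TCP connection to the (placeholder) cloud server succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def route_speech(signal, cloud_recognize, local_recognize, check=network_ok):
    # Send the voice signal to the cloud only when the network judgment
    # result is "normal"; otherwise fall back to local recognition.
    if check():
        return cloud_recognize(signal)
    return local_recognize(signal)

# Simulated network states (a real deployment would call network_ok() itself):
assert route_speech("hi", lambda s: "cloud", lambda s: "local", check=lambda: True) == "cloud"
assert route_speech("hi", lambda s: "cloud", lambda s: "local", check=lambda: False) == "local"
```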
  • the controller 108 determines whether to call another recognition processing unit based on the recognition processing result of the current recognition processing unit.
  • the intelligent robot 100 integrates offline password recognition and cloud online recognition, and can determine an applicable identification unit and an execution sequence according to actual scenarios or other strategies, and expands the scope of use of the robot.
  • the cloud recognition processing function can be extended as needed to enhance the intelligence of the intelligent robot.
  • FIG. 4 shows a flow chart of one embodiment of a cloud speech recognition method.
  • the cloud speech recognition method includes steps 410-460.
  • an externally input speech signal is obtained.
  • an externally input sound signal is received through a microphone mounted on the body part of the intelligent robot.
  • a microphone employing an intelligent digital array noise canceling pickup having two noise reduction modes reduces noise by up to 45 dB.
• the microphones are placed at the ears of the penguin-shaped intelligent robot; distributing the pickup points in this way helps ensure the accuracy and integrity of the acquired audio signals.
  • the voice signal is sent to the cloud server to perform cloud recognition processing.
• cloud software services and cloud voice storage are used to realize cloud speech recognition and cloud semantic understanding, ensuring that voice signals are recognized to the maximum extent and that corresponding services or information are obtained according to the language information extracted from the voice signals.
• many Internet companies currently provide cloud software services such as online speech recognition and semantic understanding; by accessing the APIs these companies provide, the corresponding services can be obtained.
• in step 430, it is determined whether the voice signal can be processed by cloud recognition.
  • the cloud speech recognition result of step 420 is judged. If the recognition is successful, the password is responded to and executed in step 460. Otherwise, step 440 is performed to perform local password recognition processing.
  • a local password recognition process is performed.
• the local password recognition process supplements cloud recognition. After cloud recognition fails, the local password recognition process starts: the locally stored passwords are compared with the input password, the corresponding processing module is called, and a processing result is obtained.
• in step 450, it is determined whether the password can be recognized. In this step, if the password recognition processing is successful, the actuator is driven according to the processing result; if it fails, no action is taken.
  • the password is responded to.
  • an actuator that drives an intelligent robot performs mechanical actions or provides information.
  • the actuator can include a speaker, a display, and a moving component for playing voice prompt information, displaying text or graphics, and performing mechanical actions. For example, answer the user's greeting message, or answer the question according to the pre-edited question and answer list, or do some simple actions according to the user's request.
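The FIG. 4 flow (cloud recognition first, local password recognition as fallback, no action if both fail) can be sketched as follows. The password table and the recognizer callable are invented for illustration only.

```python
def cloud_then_local(voice, cloud_recognize, local_passwords):
    """Toy version of steps 420-460: cloud first, then local password fallback.

    `cloud_recognize` returns a result or None on failure;
    `local_passwords` maps known passwords to actions (hypothetical examples)."""
    # Steps 420/430: try cloud recognition first.
    result = cloud_recognize(voice)
    if result is not None:
        return ("respond", result)          # step 460: respond to the password
    # Steps 440/450: fall back to local password comparison.
    for password, action in local_passwords.items():
        if password in voice:
            return ("respond", action)
    return ("no_action", None)              # both failed: take no action

passwords = {"sit down": "sit", "hello": "greet"}   # illustrative only
assert cloud_then_local("hello there", lambda v: None, passwords) == ("respond", "greet")
assert cloud_then_local("gibberish", lambda v: None, passwords) == ("no_action", None)
assert cloud_then_local("anything", lambda v: "weather", passwords) == ("respond", "weather")
```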
  • FIG. 5 is a flow chart showing another embodiment of the cloud speech recognition method of the present invention. As shown in FIG. 5, the cloud speech recognition method includes steps 510-560.
  • the cloud speech recognition method shown in FIG. 5 and the cloud speech recognition method shown in FIG. 4 differ only in the execution order.
• in FIG. 5, local password recognition is performed first and then cloud recognition, the opposite of the order in FIG. 4. Only steps 520-550, which differ from FIG. 4, are described here.
  • step 520 a local password recognition process is performed.
  • the comparison is performed according to the pre-stored password and the entered password, and the corresponding processing module is called, and the password recognition processing result is obtained.
• in step 530, it is determined whether the speech signal can be recognized.
• the password recognition processing result of step 520 is judged. If the recognition is successful, the actuator is driven and the process proceeds to step 560; otherwise step 540 is performed.
  • the voice signal is sent to the cloud server for cloud identification.
  • the cloud software service and cloud voice storage function are used to realize cloud speech recognition and cloud semantic understanding, to ensure that the voice signal is recognized to the maximum extent and to obtain corresponding service or information according to the language information extracted from the voice signal.
• many Internet companies currently provide cloud software services such as online speech recognition and semantic understanding; by accessing the APIs these companies provide, the corresponding services can be obtained.
• in step 550, it is determined whether the voice signal can be processed by cloud recognition.
• the cloud speech recognition result of step 540 is judged. If the recognition is successful, the actuator is driven and the process proceeds to step 560. If the cloud recognition processing fails, no action is taken.
  • FIG. 6 is a flow chart showing another embodiment of the cloud speech recognition method of the present invention.
  • the cloud interaction method includes steps 610-670.
  • step 640 "determining the cloud network status" is added.
• when the cloud network is normal, the voice signal is submitted to the cloud server for recognition processing. This design improves the efficiency of cloud recognition and reduces network latency.
• the recognition execution priority may also be determined based on a predefined preference policy. For example, fuzzy matching can determine which voice signals are sent to the cloud server first and which are processed locally first. As another example, the processing priority may be determined by enumeration: the locally processed password set is relatively limited, and voice information outside that range is sent to the cloud server for processing.
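A minimal sketch of such a preference policy, combining an enumerated local password set with fuzzy matching; the vocabulary and similarity cutoff are illustrative assumptions, not values from the patent.

```python
import difflib

# Hypothetical enumerated local password set; everything else goes to the cloud.
LOCAL_PASSWORDS = {"stand up", "sit down", "come here"}

def choose_recognizer(utterance, cutoff=0.8):
    """Return "local" or "cloud" according to the preference policy sketch."""
    text = utterance.strip().lower()
    if text in LOCAL_PASSWORDS:
        return "local"                      # exact enumerated match
    # Fuzzy matching: a near-miss of a local password is still handled locally.
    if difflib.get_close_matches(text, list(LOCAL_PASSWORDS), n=1, cutoff=cutoff):
        return "local"
    return "cloud"                          # out-of-range speech goes to the cloud

assert choose_recognizer("sit down") == "local"
assert choose_recognizer("sit dowm") == "local"     # fuzzy near-miss
assert choose_recognizer("what's the weather in Beijing") == "cloud"
```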
• before the voice signal is sent to the server for cloud recognition, it may be pre-processed, including modulation, demodulation, voice noise reduction, and audio amplification.
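The pre-processing mentioned above can be illustrated with a toy noise gate plus amplifier over raw PCM samples. Real systems use proper DSP filters; the noise floor, gain, and 16-bit clipping limit here are arbitrary assumptions.

```python
def preprocess(samples, noise_floor=50, gain=2.0, limit=32767):
    """Toy pre-processing: gate low-level noise, amplify, clip to 16-bit range."""
    out = []
    for s in samples:
        if abs(s) < noise_floor:        # crude noise reduction: zero quiet samples
            s = 0
        s = int(s * gain)               # audio amplification
        s = max(-limit, min(limit, s))  # clip to the 16-bit PCM range
        out.append(s)
    return out

assert preprocess([10, 100, -200, 40000]) == [0, 200, -400, 32767]
```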
  • the person who sent the voice signal may also be authenticated according to the pre-stored voiceprint data.
  • the image acquisition unit 102 is configured to capture more than one scene image of an external input.
  • An example of image acquisition unit 102 is a camera.
  • the camera can be mounted on the eyes of the smart robot 100.
  • the image acquisition unit 102 may continuously acquire images, or may acquire one or several frames of images at regular intervals, depending on the specific occasion.
• the acquired scene image is transmitted through a wiring harness to the recognition selection unit 109 located in the cavity of the intelligent robot.
  • the recognition selection unit 109 receives the scene image and determines an appropriate image recognition unit according to a predetermined policy.
  • the two image recognition modes provided in the embodiment of the present invention include local recognition and cloud recognition.
  • the identification selection unit 109 determines a specific image recognition mode, and then sends the scene image to the corresponding identification unit, and receives the processing result.
  • the local recognition is to transmit the scene image to the ontology image recognition processing unit 106.
• for cloud recognition, the scene image is sent to the cloud server 200 through the cloud recognition unit 110, face recognition is performed by the cloud server 200, and the face recognition result returned by the cloud server 200 is received.
• the recognition selection unit 109 can be configured with several types of predetermined policies, for example specifying a recognition unit for the scene image, or by default first performing local recognition and then cloud recognition, or vice versa.
  • the choice of strategy can reduce the time of useless identification and improve the efficiency of intelligent robots.
  • the processing efficiency of local recognition is higher than the processing efficiency of cloud recognition. Therefore, the scene image is usually locally identified and then cloud-recognized.
  • the identification selection unit 109 determines whether to send the scene image to the cloud server 200 for cloud recognition based on the local image recognition result.
• the recognition selection unit 109 determines from the local image recognition result whether the scene image was successfully recognized locally; if so, subsequent processing is performed; if not, the scene image is sent to the cloud server 200 for cloud recognition.
  • the recognition selection unit 109 determines whether to perform local image recognition on the scene image based on the cloud face recognition processing result. Further, the identification selecting unit 109 determines whether the scene image is successfully recognized by the cloud according to the cloud face recognition result, and if so, performs subsequent processing, and if not, performs local image recognition on the scene image.
• the intelligent robot 100 further includes a face image acquisition unit 116, connected between the image acquisition unit 102 and the recognition selection unit 109, for acquiring face images from externally input scene images.
• the face image acquisition unit 116 may include an algorithm for making a preliminary selection: it keeps face images that have recognizable feature points and discards images that are unrecognizable or obscured. If no face image with recognizable feature points is acquired, the face image acquisition unit 116 notifies the image acquisition unit 102 to continue capturing scene images.
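The preliminary selection can be sketched as a filter over captured frames. `detect_feature_points` is a hypothetical stand-in for whatever landmark detector the unit actually uses, and the frame representation is invented for illustration.

```python
def acquire_face_images(scene_images, detect_feature_points):
    """Toy version of unit 116's pre-selection over a batch of frames.

    Returns (faces, keep_capturing): faces with detected feature points,
    plus a flag telling the caller to keep capturing when nothing usable
    was found."""
    faces, rejected = [], 0
    for img in scene_images:
        points = detect_feature_points(img)
        if points:                 # frame has recognizable feature points
            faces.append((img, points))
        else:
            rejected += 1          # blocked/unrecognizable frame is excluded
    keep_capturing = not faces     # nothing usable: capture more scene images
    return faces, keep_capturing

# Hypothetical detector: frames carry precomputed landmark lists.
detector = lambda img: img.get("landmarks", [])
frames = [{"id": 1}, {"id": 2, "landmarks": [(10, 20), (30, 20)]}]
faces, more = acquire_face_images(frames, detector)
assert len(faces) == 1 and faces[0][0]["id"] == 2
assert more is False
```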
  • the local image recognition processing unit 106 can perform local image recognition on the face image having the recognized feature point and generate a local image recognition result.
  • the cloud recognition unit 110 may also send the face image having the identification feature point to the cloud server 200 and perform face recognition by the cloud server 200 and receive the cloud face recognition result sent by the cloud server 200. This operation can save processing resources and transfer resources.
  • the intelligent robot 100 further includes a preset image storage unit 117 for storing preset image data.
  • the local image recognition processing unit 106 can perform local image recognition on the face image having the recognized feature point according to the preset image data and generate a local image recognition result.
  • a feature of this embodiment is that some complex operations and processing can be done without utilizing the internal resources of the intelligent robot, but rely on an external server.
• the processing the intelligent robot 100 performs on the captured scene image is preliminary: it selects a face image in which a face is present, and then sends the face image together with a face recognition request to the cloud server 200 to request face recognition.
  • the cloud server 200 is equipped with a program for executing a face recognition algorithm, which can respond to a face recognition request.
  • the feature points are analyzed on the image and compared with the face database to obtain face recognition information.
  • the face recognition algorithm in the cloud server 200 can use a known algorithm, and is not expanded in detail here.
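As a toy stand-in for comparing analyzed feature points against a face library, faces can be represented as feature vectors and matched by nearest distance. The vectors, names, and distance threshold below are invented; real systems use learned embeddings and known algorithms, as the text notes.

```python
import math

# Hypothetical pre-built face library of family members (feature vectors).
FACE_DB = {
    "alice": [0.1, 0.8, 0.3],
    "bob":   [0.9, 0.2, 0.5],
}

def identify(features, db=FACE_DB, threshold=0.5):
    """Return the nearest library member, or None for an unknown face."""
    best, best_d = None, float("inf")
    for name, ref in db.items():
        d = math.dist(features, ref)      # Euclidean distance to the reference
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= threshold else None

assert identify([0.12, 0.78, 0.31]) == "alice"
assert identify([5.0, 5.0, 5.0]) is None       # too far from everyone: unknown
```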
• in the face image acquisition unit 116, it is further determined whether a scene image includes a face image having recognizable feature points; if so, the face image is acquired, and if not, the image acquisition unit 102 is notified to continue capturing scene images.
  • the cloud identification unit 110 can transmit the face image to the cloud server 200 through the wireless local area network accessing the Internet.
  • the cloud server 200 can obtain and establish a face library of family members in advance for comparison identification.
  • the cloud recognition unit 110 may also transmit a face image to the cloud server via the mobile internet.
  • Commercial service robots typically use cloud server 200 to store a face library of sufficient capacity and to provide sufficiently powerful processing resources.
  • FIG. 11 is a flowchart of a face recognition method according to an embodiment of the present invention. As shown in FIG. 11, the face recognition method includes steps 1110-1170.
  • step 1110 more than one scene image of the external input is captured.
  • an external image is captured by the image acquisition unit 102 installed in the intelligent robot.
  • the cameras as the image acquisition unit 102 are respectively placed at the eyes of the intelligent robot in the form of a penguin.
  • a face image having the recognized feature point is acquired from the externally input scene image.
  • the face image recognition unit 106 acquires a face image having the recognition feature point from the scene image.
  • the face image having the identification feature point is transmitted to the cloud server 200.
  • Cloud-based software services and cloud face storage are used to realize cloud face recognition and ensure that face images are recognized to the maximum extent.
• many Internet companies currently provide cloud software services such as online face recognition; by accessing the APIs these companies provide, the corresponding services can be obtained.
• in step 1140, it is determined whether the face image can be processed by cloud recognition.
  • the cloud face recognition result of step 1130 is determined. If the recognition is successful, the process proceeds to step 1170, otherwise step 1150 is performed to perform local image recognition processing.
  • a local image recognition process is performed.
  • the local image recognition process is a supplement to the cloud recognition. After the cloud recognition fails, the local image recognition process is started, and the face image having the recognized feature points is transmitted to the local image recognition processing unit 106.
  • the local image recognition processing unit 106 performs local image recognition on the face image having the recognized feature point based on the preset image data and generates a local image recognition result.
• in step 1160, it is determined whether the face image can be recognized. In this step, if the image recognition processing is successful, the process proceeds to step 1170; if it fails, no operation is performed.
  • step 1170 the recognition result is saved.
  • the recognition results can be used by controller 108 along with other results.
  • FIG. 12 is a flowchart of a face recognition method according to another embodiment of the present invention. As shown in FIG. 12, the face recognition method includes steps 1210-1270.
  • the face recognition method shown in FIG. 12 and the face recognition method shown in FIG. 11 differ only in the execution order.
• in FIG. 12, local image recognition processing is performed first and then cloud face recognition processing, the opposite of the order in FIG. 11. Only steps 1230-1250, which differ from FIG. 11, are described here.
  • step 1230 local image recognition processing is performed.
  • the local image recognition processing is to transmit the face image having the recognition feature point to the local image recognition processing unit 106.
  • the local image recognition processing unit 106 performs local image recognition on the face image having the recognized feature point based on the preset image data and generates a local image recognition result.
  • step 1240 it is determined whether the face image can be recognized. In this step, if the image recognition processing is successful, then proceed to step 1270. If the image recognition processing fails, step 1250 is executed to perform cloud face recognition processing.
  • the face image with the identified feature points is transmitted to the cloud server 200.
  • Cloud-based software services and cloud face storage are used to realize cloud face recognition and ensure that face images are recognized to the maximum extent.
• many Internet companies currently provide cloud software services such as online face recognition; by accessing the APIs these companies provide, the corresponding services can be obtained.
• in step 1260, it is determined whether the face image can be processed by cloud recognition.
  • the cloud face recognition result of step 1250 is determined. If the recognition is successful, the process proceeds to step 1270, otherwise no operation is performed.
  • step 1270 the recognition result is saved.
  • the recognition results can be used by controller 108 along with other results.
  • FIG. 13 is a flowchart of a face recognition method according to another embodiment of the present invention.
  • the face recognition method includes steps 1310-1380.
• step 1350, "determining the cloud network status", is added.
• when the cloud network is normal, the face image is submitted to the cloud server for recognition processing. This design improves the efficiency of cloud recognition and reduces network latency.
  • the recognition execution priority may also be determined based on a predefined preference policy. For example, it is possible to determine which face images are first sent to the cloud server for processing and which must be processed locally by means of fuzzy matching. For another example, the processing priority may be determined by enumeration, and the locally processed image information is relatively limited, and the face images not in the range are sent to the cloud server for processing.
  • the pressure signal acquisition unit 103 is for sensing an external pressure signal of the surface of the intelligent robot.
• the pressure signal acquisition unit 103 typically includes a thin-film pressure sensor sheet array and an analog-to-digital (A/D) conversion circuit.
  • the membrane pressure sensor can be distributed in the area of the front chest, forelegs, head and back of the intelligent robot.
  • the film pressure sensor of this solution has adhesive on the back and is directly attached to a certain part of the body of the intelligent robot.
  • a long strip sensor can be mounted on the back, front chest, abdomen and/or forelegs of the intelligent robot to sense the force state in the strip area.
  • a square sensor is mounted on the head of the intelligent robot to sense the force state in the block area.
  • the film pressure sensor in the present embodiment is preferably a resistive pressure sensor.
  • the pressure sensor patch array is used to acquire an external pressure signal and transmit the pressure signal to an analog to digital conversion circuit.
  • the pressure sensor sheet array can use an ultra-thin resistive pressure sensor as an external force detecting device, and the sensor converts the pressure applied in the thin film region into a change in the resistance value, thereby obtaining a signal corresponding to the pressure information.
• the larger the external pressure, the lower the resistance value. The resistance change caused by the external pressure is converted into a voltage or current change by the sensor's internal circuit, and that voltage or current is output as an analog signal to the analog-to-digital conversion circuit.
  • the analog to digital conversion circuit converts the external pressure signal into a digital signal and transmits it to the controller 108.
  • the controller 108 can pass these signals to the pressure signal recognition processing unit 107 for processing.
• alternatively, the pressure signal acquisition unit 103 may be connected directly to the pressure signal recognition processing unit 107 and transmit its signal to it.
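A rough numeric model of the sensing chain just described: force lowers the film's resistance, a voltage divider turns that into a voltage, and the A/D converter produces a digital code. All component values (base resistance, divider resistor, supply voltage, 10-bit ADC) are illustrative assumptions.

```python
def sensor_resistance(force_newtons, r0=100_000.0, k=500.0):
    """Toy resistive-film model: resistance drops as force rises (with a floor)."""
    return max(1_000.0, r0 - k * force_newtons)

def adc_code(force_newtons, r_fixed=10_000.0, vcc=3.3, bits=10):
    """Force -> resistance -> divider voltage -> digital code (A/D conversion)."""
    r = sensor_resistance(force_newtons)
    v = vcc * r_fixed / (r_fixed + r)          # voltage divider output
    return round(v / vcc * (2**bits - 1))      # quantize to a 10-bit code

# Larger external pressure -> lower resistance -> larger digital code.
assert adc_code(0) < adc_code(50) < adc_code(150)
```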
  • the pressure signal recognition processing unit 107 is for acquiring an external pressure signal and processing it to generate a pressure-sensing emotion signal.
• FIG. 14 is a block diagram showing the structure of the pressure signal recognition processing unit of the multi-sense intelligent robot shown in FIG. 1.
• the pressure signal recognition processing unit 107 includes a pressure type determination unit 205, a pressure position determination unit 206, a pressure-sensing emotion signal generating unit 207, and a data storage unit 208.
  • the pressure type determining unit 205 is configured to calculate a time value and a pressure change rate of the external pressure signal, and determine the type of the external pressure signal according to the pressure change rate and the preset change threshold.
• the smart robot 100 can normally be in a standby or hibernation state, waiting for the user's touch to start it; instructions for starting the smart robot 100 can be included in the pressure signal.
  • the pressure signal recognition processing unit 107 is for identifying a start command in the pressure signal.
  • the pressure position determining unit 206 is configured to determine a pressure generating position based on an external pressure signal.
  • the pressure-aware emotion signal generating unit 207 is configured to compare the pressure generating position and the type of the external pressure signal with a preset mapping list, and generate a pressure-sensing emotion signal corresponding to the pressure generating position and the type of the external pressure signal.
  • the controller 108 compares the received pressure-aware emotion signal with a preset mapping list to generate an emotion expression part and an emotion expression instruction corresponding to the pressure-aware emotion signal.
  • the emotion expression instructions herein are used to control the execution of corresponding mechanical actions, play corresponding prompt voices, and/or display corresponding prompt information.
  • the intelligent robot 100 further includes a data storage unit 208 connected to the pressure type determining unit 205 and the pressure sensing type emotion signal generating unit 207, respectively, for storing a preset change threshold. And a list of preset mappings.
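The preset mapping list can be sketched as a dictionary keyed by (position, pressure type) pairs that yields an emotion expression part and instruction. The entries and the fallback below are invented examples, not the patent's actual table.

```python
# Hypothetical preset mapping list: (position, pressure type) -> (part, instruction).
MAPPING = {
    ("head", "stroke"): ("speaker", "play_happy_sound"),
    ("head", "tap"):    ("head",    "shake_head"),
    ("back", "tap"):    ("legs",    "step_away"),
}

def emotion_for(position, pressure_type, mapping=MAPPING):
    """Controller 108 sketch: look up the pressure-aware emotion signal in the
    mapping list; unknown combinations fall back to a neutral response."""
    return mapping.get((position, pressure_type), ("speaker", "play_neutral_sound"))

assert emotion_for("head", "stroke") == ("speaker", "play_happy_sound")
assert emotion_for("chest", "stroke") == ("speaker", "play_neutral_sound")
```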
  • FIG. 7 is a schematic view of the use of a pressure sensor.
  • the pressure sensor 700 includes a pressure sensitive layer 703 and an adhesive layer 702 through which the pressure sensor can be attached at any position of the smart robot housing 701.
  • the size and area of the pressure sensor can also be adjusted according to actual needs.
  • FIG. 8 is a flowchart of a haptic sensing method according to an embodiment of the present invention.
  • the haptic sensing method of the present embodiment includes steps 801-806.
  • step 801 an external pressure signal is acquired to convert the external pressure signal into a digital signal.
  • the sensing component is attached to each part of the body of the intelligent robot for acquiring the pressure signal at each part.
  • the acquired pressure signal is converted into a digital signal for subsequent processing.
  • a time value for the duration of the external pressure signal is calculated, and a rate of pressure change is calculated based on the time value and the digital signal.
• a preset time period of 0.5-1.5 seconds is selected, the change of the pressure signal (that is, the change in the applied external force) over that period is calculated, and the ratio of the difference to the time period is taken as the rate of change of pressure.
• a period of 0.5-1.5 seconds is sufficient for the sensor to capture an accurate change in the applied force, and hence in the digital signal.
• for example, if the applied external force at 1 second is 100 newtons and the applied area is 0.026 square meters, then 100/0.026 ≈ 3846 newtons per square meter, and 3846 newtons per square meter is the value characterizing the rate of change of pressure.
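The worked example above reduces to a single division:

```python
def pressure_per_area(force_n, area_m2):
    """Force over area, as used for the characterizing value in the example."""
    return force_n / area_m2

# 100 N over 0.026 m^2 gives approximately 3846 newtons per square meter.
value = pressure_per_area(100, 0.026)
assert abs(value - 3846) < 1
```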
  • step 803 the pressure change rate is compared to a preset first change threshold.
  • the pressure change rate is compared with a preset first change threshold, and the type of the external pressure signal is determined according to the comparison result.
• in step 804, the type of the external pressure signal is determined to be tapping.
• in step 805, the type of the external pressure signal is determined to be a stroke.
• the pressure change rate in the above example is 3,846 newtons per square meter; if the rate exceeds the preset change threshold, the signal can be determined to be a tap, otherwise a stroke.
• in step 806, the pressure generation position and the type of the external pressure signal are compared with a preset mapping list, and an emotion expression part and an emotion expression instruction corresponding to them are generated, thereby triggering the emotional expression.
  • the preset mapping list stores the mapping relationship between the pressure generating position, the type of the external pressure signal, and the robot feedback.
  • the mapping relationship is as shown in Table 1 below:
  • the emotion expression part and the emotion expression instruction are generated according to the pressure generation position and the type of the external pressure signal.
  • Emotional expression instructions are used to characterize robot feedback types, such as robot feedback in the above table.
  • the robot's executive mechanism can be triggered to perform certain actions and expressions, thereby expressing some anthropomorphic emotions such as happiness, anger, depression, and the like.
  • the actuator of the emotional expression may include various parts of the robot body, speakers mounted on the body of the robot, a display, and the like. For example, the action of dancing and dancing is performed by hands and feet, or the corresponding sounds are played by the sound synthesizing device and the speaker, or some emoticons, prompts, etc. are displayed through the display, or feedback is combined in several ways.
  • the intelligent robot can make different feedback according to different parts and the type of external force applied thereon, so that the intelligent robot is more anthropomorphic.
• FIG. 9 is a flow chart of a haptic sensing method according to another embodiment of the present invention.
  • the haptic sensing method includes steps 901-907. Steps 901-902 are the same as steps 801-802 of FIG. 8, and are not described herein again.
• in step 903, the pressure change rate is compared with preset first and second change thresholds. If the pressure change rate is greater than the second change threshold, step 904 is performed; if it is greater than the first change threshold and less than or equal to the second, step 905 is performed; otherwise step 906 is performed.
• in steps 904, 905, and 906, the type of the external pressure signal is determined to be force tapping, tapping, or touching, respectively.
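The two-threshold classification of FIG. 9 can be sketched directly; the threshold values below are illustrative, not from the patent.

```python
def classify(rate, first=2_000.0, second=5_000.0):
    """Steps 903-906 sketch: map the pressure change rate to a pressure type."""
    if rate > second:
        return "force_tapping"    # step 904: above the second change threshold
    if rate > first:
        return "tapping"          # step 905: between the two thresholds
    return "touching"             # step 906: at or below the first threshold

assert classify(6_000) == "force_tapping"
assert classify(3_846) == "tapping"
assert classify(500) == "touching"
```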
  • Table 2 below is a new mapping table.
  • step 907 the type of the pressure generating position and the external pressure signal are compared with the preset mapping list, and an emotion expression part and an emotion expression instruction corresponding to the type of the pressure generating position and the external pressure signal are generated.
• in FIG. 9, a second change threshold is added, so that tapping is divided into forceful tapping and light tapping; this increases the diversity of the intelligent robot's processing and feedback, making it more anthropomorphic.
• FIG. 8 and FIG. 9 are merely exemplary descriptions of the haptic sensing method of the present invention, and the pressure types should not be limited to the three mentioned above; any pressure type determined by comparing the signal change rate with a preset change threshold falls within the scope of the present invention.
• the present invention emphasizes that an emotion expression part and an emotion expression instruction are generated from the combination of a pressure type and a pressure position, where the emotion expression part and instruction trigger various forms of emotional expression; various mappings between pressure positions, pressure types, and control signals can be defined (as in the tables above), and all such definitions and implementations fall within the scope of the present invention.
  • a person skilled in the art can make some reasonable modifications in the spirit of the present invention, and such modifications are also included in the scope of the present invention.
• pressure applied to various parts of the intelligent robot body is sensed through the attached pressure sensing units; from each sensed touch, an emotion expression part and an emotion expression instruction are generated, and the resulting control signal drives the robot to make various emotional expressions.
• a plurality of actuators 112, such as motors, speakers, and displays, are mounted on parts of the robot body such as the hands, feet, chest, back, and head; these components are electrically connected to the controller 108 and make corresponding emotional expressions according to the received emotion expression part and emotion expression instruction.
  • the motion sensing unit 104 is configured to sense a motion state of the smart robot 100 to generate a motion state parameter.
  • Examples of the motion sensing unit 104 include a gravity acceleration sensor, a gyroscope, or a tilt sensor mounted on the robot's torso to measure data of acceleration and angular velocity during robot motion in real time.
  • the data of the motion sensing unit 104 is output to the controller 108.
  • the controller 108 acquires real-time data of the motion parameters through the motion sensing unit 104, and adjusts the motion by the adjustment algorithm.
• the controller 108 uses motion parameters, such as acceleration sensed by the gravity sensor, as feedback, or uses the gyroscope or tilt sensor mounted on the robot's torso to sense the robot's motion state as feedback.
  • the pattern recognition algorithm recognizes the current motion state and adjusts the motion through feedback to ensure the stability of the motion.
  • the controller 108 calculates the inclination of the robot through the motion sensing unit 104, and the simulation identifies whether it is in a state to fall; if it is close to the boundary of the fall, the joint is adjusted by feedback to avoid the occurrence of a fall.
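The fall-avoidance check above can be sketched minimally as follows. The tilt threshold, axis convention, and sample values are illustrative assumptions, not values prescribed by the patent:

```python
import math

# Hypothetical safety margin: treat the robot as "close to falling"
# when its torso inclination exceeds this angle (illustrative value).
FALL_THRESHOLD_DEG = 25.0

def inclination_deg(ax: float, ay: float, az: float) -> float:
    """Tilt of the torso from vertical, estimated from gravity-sensor
    acceleration components (ax, ay horizontal, az vertical)."""
    horizontal = math.hypot(ax, ay)
    return math.degrees(math.atan2(horizontal, az))

def fall_risk(ax: float, ay: float, az: float) -> bool:
    """True when the measured inclination crosses the fall boundary,
    signalling that joint angles should be adjusted via feedback."""
    return inclination_deg(ax, ay, az) > FALL_THRESHOLD_DEG

# Upright: gravity almost entirely on the vertical axis.
print(fall_risk(0.3, 0.2, 9.7))   # stable posture
# Strongly tilted: a large horizontal component.
print(fall_risk(6.0, 0.0, 7.0))   # close to the fall boundary
```

In a real controller this check would run inside the feedback loop, with the threshold tuned to the robot's geometry.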
  • The controller 108 is connected to the voice collection unit 101, the image acquisition unit 102, the pressure signal acquisition unit 103, the motion sensing unit 104, the password recognition processing unit 105, the local image recognition processing unit 106, the pressure signal recognition processing unit 107, the identification selection unit 109, the cloud identification unit 110, and the actuator 112.
  • the controller 108 can acquire a voice signal, a scene image or a face image, an external pressure signal, and a motion state parameter for controlling the overall operation of the robot.
  • The controller 108 can instruct the voice collection unit 101, the image acquisition unit 102, the pressure signal acquisition unit 103, and the motion sensing unit 104 to capture external information, or command the password recognition processing unit 105, the local image recognition processing unit 106, and the pressure signal recognition processing unit 107 to begin working, so as to obtain the desired password recognition result, local image recognition result, pressure-aware emotion signal, and the like.
  • The controller 108 instructs the cloud identification unit 110 to communicate with the cloud server 200 so as to transmit data that needs further processing to the cloud server 200, and obtains results such as the cloud speech recognition result and the cloud face recognition result from the cloud server 200.
  • Controller 108 can command, for example, actuator 112 to perform the corresponding action.
  • The controller 108 may make an interactive decision of the intelligent robot 100 according to a combination of at least one of the password recognition processing result and the cloud speech recognition processing result, at least one of the local image recognition result and the cloud face recognition result, and one or more of the pressure-aware emotion signals, and the controller 108 optionally adjusts the motion of the intelligent robot 100 based on the motion state parameters.
  • the smart robot 100 can be normally in a standby or hibernation state, waiting for the user's activation (eg, vocal call or tap waking).
  • the voice collection unit 101 collects the voice signal and transmits it to the controller 108.
  • When the controller 108 sends it to the password recognition processing unit 105, that unit can recognize the start command in the voice signal; in response to the start command, the robot is turned on and starts working.
  • The controller 108 may also respond to the pressure signal, thereby turning on the smart robot 100 to start working. It will of course be understood that the controller 108 can turn on the smart robot 100 under other conditions; for example, in response to the user pressing a switch button.
  • the real-time data about the motion, voice, and the like of the smart robot 100 obtained by each sensing device can also be transmitted to the cloud server 200 through the cloud recognition unit 110, thereby implementing the cloud server 200 to monitor the operation of the smart robot 100.
  • Through the connection between the cloud identification unit 110 and the cloud server 200, data is transmitted to the cloud for processing, which improves the real-time processing capability of the system.
  • the controller 108 is the core of the intelligent robot 100, and is mainly responsible for collecting signals and data of each sensing device, and analyzing and processing the signals and data, thereby performing interaction and motion decision.
  • the controller 108 can be internally configured with an impact model as shown in FIG. 10, and the input parameters are at least one of a password recognition processing result and a cloud speech recognition processing result, at least one of a local image recognition result and a cloud face recognition result, and a pressure-sensing emotion signal.
  • The impact model can make an interactive decision based on these inputs and command the actuator 112 to act accordingly, thereby achieving interaction with the outside world.
  • the impact model can be a training model built on an artificial intelligence algorithm.
  • According to the artificial intelligence algorithm, the algorithm parameters of the training model can be obtained by using the actual input parameters, together with the output parameters the developer desires for those inputs, as training samples.
  • The controller 108 can obtain the user's emotional information from one of the password recognition processing result and the cloud speech recognition processing result, one of the local image recognition result and the cloud face recognition result, the pressure-aware emotion signal, and the motion state parameter; determine the emotion type of the intelligent robot according to the user's emotional information; then determine, according to the mapping list pre-stored in the impact model, the emotion expression part and emotion expression instruction of the intelligent robot corresponding to that emotion type; and finally control the emotion expression part to execute the emotion expression instruction.
  • The facial expression of the user is determined based on the user's facial image, and the user's emotional information is determined according to that facial expression. For example, when the user's facial expression is a smile, the user's emotional information is happy, and the emotion type of the intelligent robot determined from it is joy.
  • the volume and sound frequency of the user are acquired by the voice collecting unit 101, and the emotion information of the user is determined according to the volume and sound frequency of the user. For example, when the volume of the user is less than the first preset value, and the voice frequency of the user is less than the second preset value, it is determined that the emotional information of the user is sad, and the emotional type of the intelligent robot determined according to the emotional information of the user is sad.
  • The user's emotion information may also be acquired by the pressure signal acquisition unit 103 and/or the motion sensing unit 104, and the emotion type of the intelligent robot is determined accordingly. For example, when the pressure signal acquisition unit 103 detects that the user embraces the smart robot, the emotion type of the smart robot is determined to be joy; and when the pressure signal acquisition unit 103 and the motion sensing unit 104 detect that the user shakes the smart robot 100 with force, the emotion type of the intelligent robot is determined to be anger.
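A hedged sketch of the volume/frequency rule described above. The two preset thresholds and the "neutral" fallback are illustrative assumptions; the patent only requires that such preset values exist:

```python
# Illustrative preset values; the patent leaves them unspecified.
VOLUME_THRESHOLD_DB = 40.0      # the "first preset value"
FREQUENCY_THRESHOLD_HZ = 150.0  # the "second preset value"

def user_emotion_from_voice(volume_db: float, frequency_hz: float) -> str:
    """Infer the user's emotional information from voice volume and
    sound frequency, per the rule in the description: quiet and
    low-pitched speech is classified as sad."""
    if volume_db < VOLUME_THRESHOLD_DB and frequency_hz < FREQUENCY_THRESHOLD_HZ:
        return "sad"
    return "neutral"  # assumption: default when the sad rule does not fire

print(user_emotion_from_voice(30.0, 120.0))   # quiet and low-pitched
print(user_emotion_from_voice(65.0, 220.0))   # loud and higher-pitched
```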
  • The emotion types include joy, anger, sadness, and/or happiness.
  • one type of emotion corresponds to at least one part of the emotional expression.
  • the emotion expression instruction corresponds to the emotion expression part; the emotion expression instruction is an action instruction and/or a facial expression instruction.
  • The emotion expression parts include the forelimbs, hind limbs, torso, head, and/or face; the hind limbs include the legs and feet.
  • When the emotion type is joy, the corresponding emotion expression part is the forelimbs, and the corresponding emotion expression instruction is swinging the forelimbs up and down; a facial expression, such as a joyful expression, may also be displayed at the same time.
  • When the emotion type is anger, the corresponding emotion expression parts are the forelimbs, torso, right leg, and right foot, and the corresponding emotion expression instruction is that the forelimbs are unfolded, the torso tilts slightly to the left, the right leg swings back and forth, and the right foot stomps; a facial expression showing anger may also be displayed at the same time.
  • When the emotion type is sadness, the corresponding emotion expression part is the head, and the corresponding emotion expression instruction is that the head turns toward the shoulder and is lowered; a sad facial expression may also be displayed at the same time.
  • When the emotion type is happiness, the corresponding emotion expression parts are the forelimbs and torso, and the corresponding emotion expression instruction is that the forelimbs swing up and down while the torso swings left and right; a happy facial expression may also be displayed at the same time.
  • the action command includes action type information, action amplitude information, action frequency information, and/or action duration information corresponding to the emotion expression portion.
  • For example, when the corresponding emotion expression part is the forelimbs and the corresponding action type information is swinging up and down, the action amplitude information refers to the amplitude of the up-and-down swing, the action frequency information refers to the frequency of the swing (for example, once per second), and the action duration information refers to the total length of time during which the forelimbs are controlled to swing up and down.
  • The emotion expression instruction may also be a sound: the audio information corresponding to joy is a joyful call, the audio information corresponding to anger is an angry call, the audio information corresponding to sadness is a sad call, and the audio information corresponding to happiness is a happy call.
  • The intelligent robot actively acquires the user's emotional information, determines its own emotion type from that information, and, according to the pre-stored mapping list, determines the emotion expression part and emotion expression instruction corresponding to that emotion type, then controls the emotion expression part to execute the instruction. In this way, the robot actively senses the emotional changes of the external user, expresses its own emotion through physical motion, and thereby improves the interaction between the intelligent robot and the user, enhances the robot's emotional expressiveness, adds interest, and improves the user experience.
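To make the pre-stored mapping list concrete, here is a minimal sketch assembled from the examples above. The dictionary structure and string names are illustrative assumptions; the patent specifies only that such a mapping from emotion type to expression parts and instructions exists:

```python
# Hypothetical pre-stored mapping list:
# emotion type -> (emotion expression parts, emotion expression instruction),
# following the four examples in the description.
EMOTION_MAP = {
    "joy":       (("forelimbs",), "swing forelimbs up and down"),
    "anger":     (("forelimbs", "torso", "right leg", "right foot"),
                  "unfold forelimbs, tilt torso left, swing right leg, stomp right foot"),
    "sadness":   (("head",), "turn head toward shoulder and lower it"),
    "happiness": (("forelimbs", "torso"),
                  "swing forelimbs up and down, swing torso left and right"),
}

def expression_for(emotion: str):
    """Look up the expression parts and instruction for an emotion type."""
    return EMOTION_MAP[emotion]

parts, instruction = expression_for("sadness")
print(parts)
print(instruction)
```

The controller would pass the resulting instruction to the actuators 112 mounted at the named parts.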
  • Another part of the interactive decision is performing actions based on passwords (voice commands).
  • For example, the intelligent robot will walk in the direction of the user, or, when the user instructs it to sit down, shake its head, and so on, the intelligent robot responds accordingly.
  • FIG. 3 is a flow chart showing a method for perceptual interaction according to an embodiment of the present invention. The method can be performed in the system shown in Figures 1 and 2, or in other systems.
  • a method for sensing interaction of an intelligent robot according to this embodiment includes the following steps:
  • In step 301, speech recognition is performed.
  • In step 302, face recognition is performed.
  • the externally input scene image is processed to generate a local image recognition result, or the externally input scene image is transmitted to the cloud server for face recognition and receives the cloud face recognition result returned by the cloud server.
  • In step 303, an external pressure signal is identified and a pressure-aware emotion signal is generated.
  • Determining the type and generating position of the external pressure signal includes: calculating the duration value and the pressure change rate of the external pressure signal; determining the type of the external pressure signal by comparing the pressure change rate with a preset change threshold; determining the pressure generating position according to the external pressure signal; and comparing the pressure generating position and the type of the external pressure signal against a preset mapping list to generate the pressure-aware emotion signal corresponding to that position and type.
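The pressure-recognition steps just listed can be sketched as follows. The change-rate threshold, position names, signal types, and mapping entries are all illustrative assumptions; the patent only requires that a preset threshold and a preset mapping list be stored:

```python
# Illustrative preset change threshold (pressure units per second)
# and mapping list; neither value is prescribed by the patent.
CHANGE_THRESHOLD = 5.0

# (pressure generating position, signal type) -> pressure-aware emotion signal
PRESSURE_EMOTION_MAP = {
    ("head", "stroke"): "comforted",
    ("head", "hit"):    "hurt",
    ("back", "stroke"): "pleased",
    ("back", "hit"):    "angry",
}

def classify_pressure(samples, dt: float):
    """Classify an external pressure signal from evenly spaced samples.

    Returns (duration_s, change_rate, signal_type): a change rate above
    the preset threshold is treated as a 'hit', otherwise a 'stroke'.
    """
    duration = (len(samples) - 1) * dt
    rate = abs(samples[-1] - samples[0]) / duration
    kind = "hit" if rate > CHANGE_THRESHOLD else "stroke"
    return duration, rate, kind

def pressure_emotion(position: str, samples, dt: float) -> str:
    """Map the pressure generating position and signal type to a
    pressure-aware emotion signal via the preset mapping list."""
    _, _, kind = classify_pressure(samples, dt)
    return PRESSURE_EMOTION_MAP[(position, kind)]

# A slow, gentle press on the head reads as a stroke...
print(pressure_emotion("head", [0.0, 0.5, 1.0, 1.5], 0.1))
# ...while a sharp spike on the head reads as a hit.
print(pressure_emotion("head", [0.0, 8.0], 0.1))
```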
  • In step 304, an interactive decision of the intelligent robot is made according to at least one of the password recognition processing result and the cloud speech recognition processing result, at least one of the local image recognition result and the cloud face recognition result, and the pressure-aware emotion signal, thereby triggering execution of the interactive decision.
  • the intelligent robot and the sensing interaction method thereof improve the interaction ability of the robot by configuring a plurality of sensing devices, comprehensively acquiring environmental signals and performing interactive decision making.
  • the cloud recognition unit communicates with external processing resources, which improves the processing power of the robot and makes more complex interactive decisions possible.
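As a closing sketch of how the controller might combine the recognition results into one interactive decision: the priority ordering, decision names, and input strings below are assumptions for illustration only; the patent realizes this step with a trained impact model rather than fixed rules:

```python
from typing import Optional

def interactive_decision(speech_result: Optional[str],
                         face_result: Optional[str],
                         pressure_emotion: Optional[str]) -> str:
    """Combine at least one speech result (local password or cloud),
    at least one image result (local or cloud face), and/or a
    pressure-aware emotion signal into a single decision."""
    # Assumed priority: explicit voice commands first, then touch, then faces.
    if speech_result == "come here":
        return "walk toward user"
    if pressure_emotion == "comforted":
        return "express joy"
    if face_result == "known user":
        return "greet user"
    return "idle"

print(interactive_decision("come here", None, None))
print(interactive_decision(None, "known user", "comforted"))
```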

Abstract

A cloud interactive system, a multicognitive intelligent robot (100) of same, and cognitive interaction method therefor. The intelligent robot (100) has various cognitive devices (101, 102, 103, and 104) for voice, image, stress, and motion sensing and is provided with capabilities for voice recognition, facial recognition, stress sensing, and emotion recognition. The intelligent robot makes interactive decisions by means of a combination of various recognition results so as to respond to human behaviors. The intelligent robot is also capable of communicating with an external processing resource via a cloud recognition unit (110), thus enhancing the processing capability of the robot, and making possible interactive decision-making of increased complexity.

Description

Cloud interactive system, multi-sense intelligent robot thereof, and sensing interaction method

Technical Field
The present invention relates to robots, and in particular to a cloud interactive system, a multi-sense intelligent robot thereof, and a sensing interaction method therefor.
Background Art
With the advancement of urbanization, the accelerating pace of work, the aging of the population, and the emergence of empty-nest elderly, the demand for pet companionship is growing. However, caring for a real pet is very time-consuming and can become a burden for the frail elderly. Moreover, real pets need a certain amount of activity space, which is a problem for families with small living areas.

Traditional intelligent electronic pets are virtual pets running on smart platforms. This type of electronic pet lacks realism: the user can only feel its presence in a virtual world through a display device. There is also a class of toy electronic pets with interactive functions; although they are more lifelike, they are constrained by limited processing capability and other factors, so the intelligence they exhibit is often limited.

In recent years, robots have been developing toward intelligence and are beginning to have functions such as sensing and perception. Family companion robots that can communicate with people are therefore gradually becoming possible.

At present, family companion robots can perform simple interactions with humans, such as imitating movements and sounds, but their interactive behavior is still far from that of real pets. In particular, the ability of robots to perceive complex external parameters and to interact and process information accordingly is still relatively weak.
Summary of the Invention
The technical problem to be solved by the present invention is to provide a cloud interactive system, a multi-sense intelligent robot thereof, and a sensing interaction method therefor, which have multiple sensing capabilities as well as stronger processing and interaction capabilities.
The present invention provides a multi-sense intelligent robot with a cloud interaction function, which cooperates with an external cloud server. The intelligent robot includes: a password recognition processing unit for performing local password recognition on an externally input voice signal and generating a password recognition processing result; a local image recognition processing unit for performing local image recognition on an externally input scene image and generating a local image recognition result; a pressure signal recognition processing unit for performing recognition processing on an external pressure signal and generating a pressure-aware emotion signal; a cloud recognition unit for sending the voice signal to the cloud server, where the cloud server performs at least one of cloud speech recognition and cloud semantic understanding, and receiving the cloud speech recognition processing result returned by the cloud server, and for sending the scene image to the cloud server for face recognition and receiving the cloud face recognition result returned by the cloud server; and a controller for making an interactive decision of the intelligent robot according to at least one of the password recognition processing result and the cloud speech recognition processing result, at least one of the local image recognition result and the cloud face recognition result, and/or the pressure-aware emotion signal, thereby triggering execution of the interactive decision.
In an embodiment of the present invention, the multi-sense intelligent robot further includes a recognition selection unit for judging an externally input voice signal so as to select whether the voice signal is transmitted to the password recognition processing unit or to the cloud recognition unit, and/or for judging an externally input scene image so as to select whether the scene image is transmitted to the local image recognition processing unit or to the cloud recognition unit.

In an embodiment of the present invention, the multi-sense intelligent robot further includes a voice collection unit for obtaining an externally input voice signal.

In an embodiment of the present invention, the voice collection unit comprises two microphones, which are respectively installed at the left and right ears of the intelligent robot.

In an embodiment of the present invention, the multi-sense intelligent robot further includes a preset password storage unit for storing preset password data; the password recognition processing unit performs local password recognition on the voice signal according to the preset password data and generates a password recognition processing result.

In an embodiment of the present invention, the multi-sense intelligent robot further includes a voiceprint recognition unit for performing identity verification according to pre-stored voiceprint data before the voice signal is recognized and processed.
In an embodiment of the present invention, the multi-sense intelligent robot further includes an image acquisition unit for capturing one or more externally input scene images.

In an embodiment of the present invention, the multi-sense intelligent robot further includes a face image acquisition unit for acquiring, from an externally input scene image, a face image having recognizable feature points; the local image recognition processing unit performs local image recognition on the face image having recognizable feature points and generates a local image recognition result; the cloud recognition unit sends the face image having recognizable feature points to the cloud server for cloud face recognition.

In an embodiment of the present invention, after acquiring face images having recognizable feature points from the externally input scene image, the face image acquisition unit further excludes face images that do not have recognizable feature points.

In an embodiment of the present invention, the multi-sense intelligent robot further includes a preset image storage unit for storing preset image data; the local image recognition processing unit performs local image recognition on the externally input scene image according to the preset image data and generates a local image recognition result.
In an embodiment of the present invention, the multi-sense intelligent robot further includes a pressure signal acquisition unit for acquiring an external pressure signal.

In an embodiment of the present invention, the pressure signal acquisition unit is a resistive pressure sensor.

In an embodiment of the present invention, the pressure signal acquisition unit includes a pressure sensing chip array distributed on the surface of the intelligent robot and an analog-to-digital conversion circuit connected to the pressure sensing chip array; the pressure sensing chip array senses pressure changes on the surface of the intelligent robot and converts them into a pressure analog signal, and the analog-to-digital conversion circuit converts the pressure analog signal into a pressure digital signal.

In an embodiment of the present invention, the pressure signal recognition processing unit includes: a pressure type judgment unit for calculating the pressure change rate of the external pressure signal and determining the type of the external pressure signal by comparing the pressure change rate with a preset change threshold; a pressure position judgment unit for determining the pressure generating position according to the external pressure signal; and a pressure-aware emotion signal generation unit for comparing the pressure generating position and the type of the external pressure signal against a preset mapping list and generating the pressure-aware emotion signal corresponding to that position and type.

In an embodiment of the present invention, the pressure signal recognition processing unit further includes a data storage unit, connected to the pressure type judgment unit and the pressure-aware emotion signal generation unit respectively, for storing the preset change threshold and the preset mapping list.
In an embodiment of the present invention, the multi-sense intelligent robot further includes a motion sensing unit connected to the controller for sensing the motion state of the intelligent robot to generate motion state parameters.

In an embodiment of the present invention, the motion sensing unit is a gravity acceleration sensor, a gyroscope, or a tilt sensor mounted on the torso of the intelligent robot.

In an embodiment of the present invention, the multi-sense intelligent robot further includes a network judgment unit for judging the connection state between the intelligent robot and the cloud server and generating a network judgment result according to the connection state.

In an embodiment of the present invention, the intelligent robot and the cloud server are connected through a wireless network interface.

In an embodiment of the present invention, the controller is configured with an impact model; the password recognition processing result, the cloud speech recognition processing result, the local image recognition result, the cloud face recognition result, the pressure-aware emotion signal, and the motion state parameters are input parameters of the impact model, and the impact model outputs the interactive decision according to the input parameters.
In an embodiment of the present invention, the controller starts the intelligent robot in response to a start instruction.

In an embodiment of the present invention, the start instruction is contained in a voice signal, and the password recognition processing unit or the cloud recognition unit is further used to recognize the start instruction in the voice signal; or the start instruction is contained in an external pressure signal, and the pressure signal recognition processing unit is further used to recognize the start instruction in the external pressure signal; or the start instruction is contained in a wireless signal, and the intelligent robot further includes a wireless communication unit for receiving an externally transmitted wireless signal and a wireless signal recognition unit for recognizing the start instruction in the wireless signal.
The present invention further provides a sensing interaction method applying the above multi-sense intelligent robot with a cloud interaction function, including: performing local password recognition on an externally input voice signal and generating a password recognition processing result, or sending the voice signal to the cloud server, where the cloud server performs at least one of cloud speech recognition and cloud semantic understanding, and receiving the cloud speech recognition processing result returned by the cloud server; performing local image recognition on an externally input scene image and generating a local image recognition result, or transmitting the scene image to the cloud server for face recognition and receiving the cloud face recognition result returned by the cloud server; performing recognition processing on an external pressure signal and generating a pressure-aware emotion signal; and making an interactive decision of the intelligent robot according to at least one of the password recognition processing result and the cloud speech recognition processing result, at least one of the local image recognition result and the cloud face recognition result, and/or the pressure-aware emotion signal, thereby triggering execution of the interactive decision.
In an embodiment of the present invention, the sensing interaction method further includes judging the externally input voice signal so as to select whether local password recognition is performed on it or it is transmitted to the cloud server, and/or judging the externally input scene image so as to select whether local image recognition is performed on it or it is transmitted to the cloud server.

In an embodiment of the present invention, the sensing interaction method further includes obtaining an externally input voice signal.

In an embodiment of the present invention, the sensing interaction method further includes storing preset password data, and the step of performing local password recognition on the externally input voice signal and generating a password recognition processing result is performed according to the preset password data.

In an embodiment of the present invention, before local password recognition is performed on the externally input voice signal and a password recognition processing result is generated, or before the voice signal is sent to the cloud server, the method further includes performing identity verification on the voice signal according to pre-stored voiceprint data.
在本发明的一实施例中,上述的感知互动方法还包括捕捉外部输入的一个以上的场景图像。In an embodiment of the invention, the method for perceptual interaction further includes capturing more than one scene image of an external input.
在本发明的一实施例中,上述的感知互动方法还包括从一个以上的场景图像中获取具备识别特征点的人脸图像;且对外部输入的场景图像进行本地图像识别并生成本地图像识别结果的步骤,是对所述具备识别特征点的人脸图像进行本地图像识别并生成本地图像识别结果;将所述场景图像传输至所述云端服务器进行人脸识别的步骤,是将具备识别特征点的人脸图像发送至所述云端服务器进行云端人脸识别。In an embodiment of the invention, the method for perceptual interaction further includes acquiring a face image having the identified feature point from more than one scene image; and performing local image recognition on the externally input scene image and generating a local image recognition result The step of performing local image recognition on the face image having the identification feature point and generating a local image recognition result; and the step of transmitting the scene image to the cloud server for face recognition is to have the identification feature point The face image is sent to the cloud server for cloud face recognition.
在本发明的一实施例中，从外部输入的场景图像中获取具备识别特征点的人脸图像的步骤之后还包括排除不具备识别特征点的人脸图像。In an embodiment of the present invention, after the step of acquiring face images having recognizable feature points from the externally input scene images, the method further includes excluding face images that do not have recognizable feature points.
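The acquisition-and-exclusion step above can be sketched as a simple filter. Here `detect_feature_points` is a hypothetical stand-in for a real facial-landmark detector (not part of this disclosure): it returns the feature points found in one image, or an empty list when none are found.

```python
def select_face_images(scene_images, detect_feature_points):
    """Keep only the face images that carry recognizable feature points;
    images without feature points are excluded, as the method requires.
    """
    return [img for img in scene_images if detect_feature_points(img)]


# Usage with a toy detector that only "finds" landmarks in one image.
detector = lambda img: [(10, 20), (30, 20)] if img == "face_a" else []
usable = select_face_images(["face_a", "blurry_scene"], detector)
```

Only the images that pass this filter would then be handed to local image recognition or sent to the cloud server.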
在本发明的一实施例中，上述的感知互动方法还包括存储预设的图像资料，且对外部输入的场景图像进行本地图像识别并生成本地图像识别结果的步骤，是根据预设的图像资料对所述具备识别特征点的人脸图像进行本地图像识别并生成本地图像识别结果。In an embodiment of the present invention, the above perceptual interaction method further includes storing preset image data, and the step of performing local image recognition on the externally input scene image and generating a local image recognition result is: performing local image recognition on the face image having recognizable feature points according to the preset image data and generating the local image recognition result.
在本发明的一实施例中，将所述语音信号或场景图像发送至所述云端服务器前，还包括判断网络状态是否正常，在网络正常时将所述语音信号或场景图像发送至所述云端服务器。In an embodiment of the present invention, before sending the voice signal or the scene image to the cloud server, the method further includes determining whether the network status is normal, and sending the voice signal or the scene image to the cloud server when the network is normal.
在本发明的一实施例中,上述的感知互动方法还包括获取外部压力信号。In an embodiment of the invention, the method for perceptual interaction further includes acquiring an external pressure signal.
在本发明的一实施例中，对外部压力信号进行识别处理并生成压力感知型情绪信号的步骤包括：计算所述外部压力信号的压力变化率，根据所述压力变化率和预设的变化阈值比对确定所述外部压力信号的类型；根据所述外部压力信号确定压力产生位置；以及根据所述压力产生位置及外部压力信号的类型与预设的映射列表进行比对，生成与所述压力产生位置及外部压力信号的类型相对应的压力感知型情绪信号。In an embodiment of the invention, the step of performing recognition processing on the external pressure signal and generating a pressure-sensing emotion signal includes: calculating a pressure change rate of the external pressure signal, and determining the type of the external pressure signal by comparing the pressure change rate with a preset change threshold; determining a pressure generation position according to the external pressure signal; and comparing the pressure generation position and the type of the external pressure signal with a preset mapping list to generate a pressure-sensing emotion signal corresponding to the pressure generation position and the type of the external pressure signal.
在本发明的一实施例中,上述的感知互动方法还包括存储预设的变化阈值和预设的映射列表。In an embodiment of the invention, the method for perceptual interaction further includes storing a preset change threshold and a preset mapping list.
在本发明的一实施例中,若所述压力变化率大于预设的第一变化阈值,则将所述外部压力信号的类型确定为拍打,否则,将所述外部压力信号的类型确定为抚摸。In an embodiment of the invention, if the pressure change rate is greater than a preset first change threshold, determining the type of the external pressure signal as tapping; otherwise, determining the type of the external pressure signal as a stroke .
在本发明的一实施例中，所述若所述压力变化率大于预设的第一变化阈值，则将所述外部压力信号的类型确定为拍打包括：如果所述压力变化率大于第一变化阈值而小于等于第二变化阈值，则将所述压力信号的类型确定为轻微拍打；以及如果所述压力变化率大于第二变化阈值，则将所述压力信号的类型确定为用力拍打。In an embodiment of the invention, if the pressure change rate is greater than the preset first change threshold, determining the type of the external pressure signal as tapping includes: if the pressure change rate is greater than the first change threshold and less than or equal to a second change threshold, determining the type of the pressure signal as light tapping; and if the pressure change rate is greater than the second change threshold, determining the type of the pressure signal as hard tapping.
在本发明的一实施例中，所述计算所述外部压力信号的压力变化率为：计算外部压力信号持续的时间值，在所述持续的时间值内根据预设时间段选取所述预设时间段对应的数字信号，根据所述预设时间段及所述预设时间段对应的数字信号计算出压力变化率。In an embodiment of the invention, calculating the pressure change rate of the external pressure signal is: calculating the duration of the external pressure signal, selecting, within that duration, the digital signals corresponding to a preset time period, and calculating the pressure change rate according to the preset time period and the digital signals corresponding to it.
在本发明的一实施例中,所述预设时间段为0.5-1.5秒。In an embodiment of the invention, the preset time period is 0.5-1.5 seconds.
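The recognition steps above (computing the change rate over a sampling period inside the signal's duration, classifying the type against two change thresholds, and looking up position plus type in the mapping list) can be sketched as follows. All numeric thresholds, the 1-second period, and the mapping entries are hypothetical illustrations, not values from this disclosure.

```python
FIRST_THRESHOLD = 5.0    # preset first change threshold (units/s), assumed
SECOND_THRESHOLD = 20.0  # preset second change threshold (units/s), assumed
SAMPLE_PERIOD = 1.0      # preset time period, chosen inside the 0.5-1.5 s range

# Preset mapping list: (pressure position, signal type) -> emotion signal.
EMOTION_MAP = {
    ("head", "stroke"): "content",
    ("head", "light_tap"): "curious",
    ("head", "hard_tap"): "hurt",
}

def classify_pressure(samples, position):
    """samples: digitized pressure values covering SAMPLE_PERIOD seconds."""
    rate = abs(samples[-1] - samples[0]) / SAMPLE_PERIOD  # pressure change rate
    if rate <= FIRST_THRESHOLD:
        signal_type = "stroke"          # not greater than the first threshold
    elif rate <= SECOND_THRESHOLD:
        signal_type = "light_tap"       # between the two thresholds
    else:
        signal_type = "hard_tap"        # greater than the second threshold
    emotion = EMOTION_MAP.get((position, signal_type), "neutral")
    return signal_type, emotion
```

A rising edge of 10 units over the 1-second window, measured at the head, would thus be classified as a light tap and mapped to a "curious" emotion signal.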
在本发明的一实施例中,上述的感知互动方法还包括感测所述智能机器人的运动状态以生成运动状态参数。In an embodiment of the invention, the method for perceptual interaction further includes sensing a motion state of the smart robot to generate a motion state parameter.
在本发明的一实施例中,所述互动决策包括情绪表达部位以及情绪表达指令。In an embodiment of the invention, the interactive decision includes an emotional expression location and an emotional expression instruction.
在本发明的一实施例中，所述情绪表达部位包括智能机器人的上肢、下肢、躯干、头部、面部和/或口部；所述情绪表达指令包括执行相应的动作指令、播放相应的提示语音和/或显示相应的提示信息。In an embodiment of the present invention, the emotion expression part includes the upper limbs, lower limbs, trunk, head, face and/or mouth of the intelligent robot; the emotion expression instruction includes executing a corresponding action instruction, playing a corresponding prompt voice and/or displaying corresponding prompt information.
在本发明的一实施例中,所述动作指令包括机械动作指令和/或面部表情指令。In an embodiment of the invention, the action instruction comprises a mechanical action command and/or a facial expression command.
在本发明的一实施例中,所述机械动作指令包括与所述情绪表达部位对应的动作类型信息、动作幅度信息、动作频率信息和/或动作时长信息。In an embodiment of the invention, the mechanical action command includes action type information, action amplitude information, action frequency information, and/or action duration information corresponding to the emotion expression portion.
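One way to picture such a mechanical action command is a small data structure bundling the expression part with the four information fields listed above; all field names and values here are illustrative assumptions, not part of this disclosure.

```python
from dataclasses import dataclass

@dataclass
class MechanicalActionCommand:
    body_part: str       # emotion expression part, e.g. "head", "upper_limb"
    action_type: str     # action type information, e.g. "nod", "wave"
    amplitude: float     # action amplitude information, here a 0.0-1.0 scale
    frequency_hz: float  # action frequency information
    duration_s: float    # action duration information

# A gentle nod of the head at 1 Hz for two seconds.
nod = MechanicalActionCommand("head", "nod", 0.5, 1.0, 2.0)
```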
在本发明的一实施例中，上述的感知互动方法还包括：响应于启动指令而启动所述智能机器人；所述启动指令包含在语音信号中、外部压力信号中或无线信号中。In an embodiment of the invention, the above perceptual interaction method further includes: activating the intelligent robot in response to a startup command; the startup command is contained in a voice signal, an external pressure signal or a wireless signal.
本发明还提出一种云端互动系统，包括上述的具有云端互动功能的多感知型智能机器人以及云端服务器，所述智能机器人与所述云端服务器进行无线通信。The present invention also provides a cloud interaction system, including the above multi-sense intelligent robot with cloud interaction function and a cloud server, the intelligent robot communicating wirelessly with the cloud server.
本发明由于采用以上技术方案，使之与现有技术相比，具有如下显著优点：通过配置多种感知设备，综合获取环境信号并进行互动决策，提升了机器人的互动能力。同时通过云端识别单元与外部处理资源进行通信，提升了机器人的处理能力，使得更为复杂的互动决策成为可能。By adopting the above technical solutions, the present invention has the following significant advantages over the prior art: by configuring multiple sensing devices, environmental signals are comprehensively acquired and interactive decisions are made, which improves the robot's interactive capability. Meanwhile, communication with external processing resources through the cloud recognition unit improves the robot's processing power and makes more complex interactive decisions possible.
附图说明 BRIEF DESCRIPTION OF THE DRAWINGS
为让本发明的上述目的、特征和优点能更明显易懂，以下结合附图对本发明的具体实施方式作详细说明，其中：To make the above objects, features and advantages of the present invention more apparent and comprehensible, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings, in which:
图1是本发明第一实施例的具有云端互动功能的多感知型智能机器人的系统框图。FIG. 1 is a system block diagram of a multi-sense intelligent robot with cloud interaction function according to a first embodiment of the present invention.
图2是本发明第二实施例的具有云端互动功能的多感知型智能机器人的系统框图。FIG. 2 is a system block diagram of a multi-sense intelligent robot with cloud interaction function according to a second embodiment of the present invention.
图3是本发明一实施例的具有云端互动功能的多感知型智能机器人的感知互动方法流程图。FIG. 3 is a flowchart of a perceptual interaction method of a multi-sense intelligent robot with cloud interaction function according to an embodiment of the present invention.
图4是本发明实施例的云端语音识别方法的流程图。FIG. 4 is a flowchart of a cloud speech recognition method according to an embodiment of the present invention.
图5是本发明另一个实施例的云端语音识别方法的流程图。FIG. 5 is a flowchart of a cloud speech recognition method according to another embodiment of the present invention.
图6是本发明另一个实施例的云端语音识别方法的流程图。FIG. 6 is a flowchart of a cloud speech recognition method according to another embodiment of the present invention.
图7是本发明实施例的压力传感器使用示意图。FIG. 7 is a schematic diagram of the use of a pressure sensor according to an embodiment of the present invention.
图8是本发明一实施例的触觉感知方法的流程图。FIG. 8 is a flowchart of a tactile sensing method according to an embodiment of the present invention.
图9是本发明另一实施例的触觉感知方法的流程图。FIG. 9 is a flowchart of a tactile sensing method according to another embodiment of the present invention.
图10是本发明一实施例的影响模型示意图。FIG. 10 is a schematic diagram of an influence model according to an embodiment of the present invention.
图11是本发明实施例的人脸识别方法的流程图。FIG. 11 is a flowchart of a face recognition method according to an embodiment of the present invention.
图12是本发明另一个实施例的人脸识别方法的流程图。FIG. 12 is a flowchart of a face recognition method according to another embodiment of the present invention.
图13是本发明另一个实施例的人脸识别方法的流程图。FIG. 13 is a flowchart of a face recognition method according to another embodiment of the present invention.
图14是图1所示多感知型智能机器人的压力信号处理单元的结构框图。FIG. 14 is a structural block diagram of the pressure signal processing unit of the multi-sense intelligent robot shown in FIG. 1.
具体实施方式 DETAILED DESCRIPTION
本发明的实施例描述具有云端互动功能的多感知型智能机器人及其互动方法,该方法和系统尤其适用于家庭陪伴型机器人。当然可以理解,该方法和系统也可适用于其它具有高互动需求的机器人,例如商业服务机器人。在本发明的实施例中,通过赋予机器人多重感知功能,并且根据这些感知功能来进行互动决策和运动控制,提升机器人的处理和决策能力。Embodiments of the present invention describe a multi-sense intelligent robot with cloud interaction function and an interactive method thereof, and the method and system are particularly suitable for a home companion robot. It will of course be understood that the method and system are also applicable to other robots with high interaction requirements, such as commercial service robots. In the embodiment of the present invention, the robot's processing and decision-making ability is improved by giving the robot multiple sensing functions and performing interactive decision-making and motion control based on these sensing functions.
图1是本发明一实施例的具有云端互动功能的多感知型智能机器人的系统框图。参考图1所示,本实施例的智能机器人100包括语音采集单元101、图像采集单元102、压力信号获取单元103、运动感测单元104、口令识别处理单元105、本地图像识别处理单元106、压力信号识别处理单元107、控制器108、识别选择单元109、云端识别单元110、电源管理单元111和执行机构112。1 is a system block diagram of a multi-sense intelligent robot with cloud interactive function according to an embodiment of the present invention. Referring to FIG. 1, the intelligent robot 100 of the present embodiment includes a voice collection unit 101, an image acquisition unit 102, a pressure signal acquisition unit 103, a motion sensing unit 104, a password recognition processing unit 105, a local image recognition processing unit 106, and a pressure. The signal recognition processing unit 107, the controller 108, the identification selection unit 109, the cloud recognition unit 110, the power management unit 111, and the actuator 112.
各个部件可根据需要连接到控制器108。云端识别单元110用于与外部的云端服务器200进行通信。电源管理单元111用于为整个智能机器人100供电。电源管理单元111通过DC-DC模块,为各单元提供稳定适配的电源。同时电源管理单元111可配置过载保护电路,避免运动执行元件的过载。The various components can be connected to controller 108 as needed. The cloud identification unit 110 is configured to communicate with the external cloud server 200. The power management unit 111 is for supplying power to the entire smart robot 100. The power management unit 111 provides a stably adapted power supply to each unit through the DC-DC module. At the same time, the power management unit 111 can configure the overload protection circuit to avoid overloading of the motion actuator.
云端识别单元110可以使用多种方式来与云端服务器200通信。云端服务器200可以是一台服务器或多台服务器组成的集群,可以由智能机器人100的厂商架设云端服务器或者获取网络提供商提供的服务接口。云端识别单元110可通过接入互联网的无线局域网来与云端服务器通信。作为替代,云端识别单元110还可通过移动互联网与云端服务器通信。The cloud identification unit 110 can communicate with the cloud server 200 in a variety of ways. The cloud server 200 can be a cluster of one server or multiple servers, and the manufacturer of the smart robot 100 can set up a cloud server or obtain a service interface provided by a network provider. The cloud identification unit 110 can communicate with the cloud server through a wireless local area network that accesses the Internet. Alternatively, the cloud identification unit 110 can also communicate with the cloud server via the mobile internet.
下面分别展开描述。The description is expanded below.
语音识别Speech Recognition
语音采集单元101用于从环境中采集语音信号。语音采集单元101的实施例是麦克风，其可以采集语音信号。麦克风可以安装在智能机器人100头部左右耳处。采用双耳的两个麦克风作为语音输入源，将采集到的声音信息转为电信号形式的语音信号。该语音信号是自然语言的音频信息，需要进行降噪、过滤等处理。在优选实施例中，采用了智能化数字阵列降噪拾音器的麦克风，其具有2种降噪模式，最大可降低45dB噪音。另外，智能机器人100优选为企鹅机器人时，麦克风分别置于企鹅的双耳处，通过分散采集声音信号保证获取的音频信号的准确性和完整性。语音采集单元101还可以具有语音预处理功能，外部输入的语音信号可能受环境、场景、相对位置等因素的影响，需要对音频信息进行调制解调、语音降噪、音频放大等多种方式的预处理。The voice collection unit 101 is configured to collect voice signals from the environment. An embodiment of the voice collection unit 101 is a microphone, which can collect voice signals. The microphones can be mounted at the left and right ears of the head of the intelligent robot 100. The two microphones at the two ears serve as voice input sources and convert the collected sound information into voice signals in the form of electrical signals. The voice signal is audio information in natural language and needs noise reduction, filtering and similar processing. In a preferred embodiment, a microphone with an intelligent digital-array noise-reducing pickup is used; it has two noise reduction modes and can reduce noise by up to 45 dB. In addition, when the intelligent robot 100 is preferably a penguin robot, the microphones are placed at the penguin's two ears, and collecting sound signals at separate points ensures the accuracy and integrity of the acquired audio signals. The voice collection unit 101 may also have a voice preprocessing function: the externally input voice signal may be affected by factors such as the environment, the scene and the relative position, so the audio information needs to be preprocessed in various ways such as modulation and demodulation, voice noise reduction and audio amplification.
Voice noise reduction can use a DSP noise reduction algorithm, which removes background noise and suppresses external vocal interference, echo and reverberation. The DSP noise reduction algorithm strongly suppresses steady-state, non-steady-state and mechanical noise. The combination of the dual microphones and voice preprocessing eliminates noise almost completely while preserving the clarity and naturalness of normal speech, with no output delay.
经过预处理的语音信号通过线束传输至位于智能机器人腔体中的识别选择单元109中进行处理。语音信号中包含了机器人感兴趣的各种口令。例如招呼机器人的口令,令机器人完成跑、跳等动作的口令。识别选择单元109接收语音信号,根据预定策略确定适宜的语音识别单元。在本文中,语音识别,指根据输入的声音信号经过一系列的声音算法提取出文本内容。本发明实施例中提供的两种语音识别方式包括本地识别和云端识别,识别选择单元109确定一个具体的语音识别方式后将语音信号发送给相应的识别单元,并接收处理结果。本地识别是将语音信号发送至口令识别处理单元105。云端识别是通过云端识别单元110发送至云端服务器200并由云端服务器200执行云端语音识别和云端语义理解至少之一,接收云端服务器200发来的云端语音识别处理结果。识别选择单元109可以设置多种类型的预定策略,例如,在语音信号中指定识别单元,或默认先执行本地识别,再执行云端识别,或者相反。策略的选择能够减少无用识别的时间,提高智能机器人的工作效率。例如,一般来说,本地识别的处理效率高于云端识别的处理效率,因此通常将语音信号先进行本地识别,再进行云端识别。在一个示例中,识别选择单元109根据口令识别处理结果,决定是否将语音信号发送至云端服务器200进行云端识别。进一步地,识别选择单元109根据口令识别处理结果来判定语音信号是否被本地口令识别成功,若是,则进行后续处理,例如响应口令;若否,则将语音信号发送至云端服务器200进行云端识别。在另一个示例中,识别选择单元109根据云端语音识别处理结果,决定是否将语音信号进行本地口令识别。进一步地,识别选择单元109根据云端语音识别处理结果来判定语音信号是否被云端识别成功,若是,则进行后续处理,例如响应口令,若否,则将语音信号进行本地口令识别。 The preprocessed speech signal is transmitted through the harness to the identification selection unit 109 located in the intelligent robot cavity for processing. The voice signal contains various passwords that the robot is interested in. For example, the password of the robot is called, and the robot is allowed to complete the passwords for running, jumping, and the like. The identification selection unit 109 receives the speech signal and determines an appropriate speech recognition unit in accordance with a predetermined policy. In this context, speech recognition refers to extracting text content through a series of sound algorithms according to the input sound signal. The two voice recognition modes provided in the embodiment of the present invention include local identification and cloud recognition. After the identification selection unit 109 determines a specific voice recognition mode, the voice signal is sent to the corresponding identification unit, and the processing result is received. Local identification is to send a voice signal to the password recognition processing unit 105. 
In cloud recognition, the voice signal is sent to the cloud server 200 through the cloud recognition unit 110, the cloud server 200 performs at least one of cloud speech recognition and cloud semantic understanding, and the cloud speech recognition processing result sent back by the cloud server 200 is received. The identification selection unit 109 can be configured with several types of predetermined policies, for example, designating a recognition unit in the voice signal, or by default performing local recognition first and then cloud recognition, or vice versa. The choice of policy reduces the time spent on useless recognition and improves the working efficiency of the intelligent robot. For example, local recognition is generally more efficient than cloud recognition, so the voice signal is usually recognized locally first and then in the cloud. In one example, the identification selection unit 109 decides, based on the password recognition processing result, whether to send the voice signal to the cloud server 200 for cloud recognition. Further, the identification selection unit 109 determines from the password recognition processing result whether the voice signal was successfully recognized by local password recognition; if so, subsequent processing is performed, for example responding to the password; if not, the voice signal is sent to the cloud server 200 for cloud recognition. In another example, the identification selection unit 109 decides, based on the cloud speech recognition processing result, whether to perform local password recognition on the voice signal. Further, the identification selection unit 109 determines from the cloud speech recognition processing result whether the voice signal was successfully recognized by the cloud; if so, subsequent processing is performed, for example responding to the password; if not, local password recognition is performed on the voice signal.
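The local-first branch of this selection policy can be sketched as follows. `local_recognize` and `cloud_recognize` are stand-ins for the password recognition processing unit 105 and the cloud path through unit 110; each is assumed to return a result on success or `None` on failure.

```python
def recognize(voice_signal, local_recognize, cloud_recognize):
    """Local-first policy: try local password recognition, then fall back
    to cloud recognition; report which recognizer produced the result.
    """
    result = local_recognize(voice_signal)
    if result is not None:
        return ("local", result)        # respond to the password directly
    result = cloud_recognize(voice_signal)
    if result is not None:
        return ("cloud", result)
    return (None, None)                 # both recognizers failed
```

Swapping the two calls yields the cloud-first policy; either ordering can be chosen per the predetermined strategy.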
在一实施例中,识别选择单元109可自主的进行上述选择操作。在另一实施例中,识别选择单元109可在控制器108的控制下进行上述选择操作。In an embodiment, the identification selection unit 109 can perform the above selection operation autonomously. In another embodiment, the identification selection unit 109 can perform the above selection operation under the control of the controller 108.
口令识别处理单元105在本地执行,从控制器108读取语音信号,根据预定义的口令资料和语音信号比对,根据比对结果,执行一个适当处理模块。口令识别处理单元105同样将识别处理结果返回给控制器108。在此,预定义的口令资料可以理解为存储在本地的一系列的语音信号,在口令识别处理单元105里集成了这些语音信号的处理模块,这些处理模块通过软件或者电路形式实现。例如,输入问候口令“你好”,对应的是问答模块,给出一个回答“你好”。当然,这些处理模块可以集成在一起,也可以分开实现。在此的示例性说明不用于限制发明本身。The password recognition processing unit 105 executes locally, reads the voice signal from the controller 108, compares the predefined password data with the voice signal, and executes an appropriate processing module based on the comparison result. The password recognition processing unit 105 also returns the recognition processing result to the controller 108. Here, the predefined password data can be understood as a series of voice signals stored locally, and the processing modules of the voice signals are integrated in the password recognition processing unit 105. These processing modules are implemented by software or circuit form. For example, enter the greeting password "Hello", which corresponds to the Q&A module and gives an answer "Hello." Of course, these processing modules can be integrated or implemented separately. The exemplary description herein is not intended to limit the invention itself.
在图2所示的较佳实施例中,智能机器人还可包括预设口令存储单元115,用于存储预设的口令资料。口令识别处理单元105可以根据预设的口令资料对语音信号进行本地口令识别并生成口令识别处理结果。In the preferred embodiment shown in FIG. 2, the intelligent robot may further include a preset password storage unit 115 for storing preset password data. The password recognition processing unit 105 can perform local password recognition on the voice signal according to the preset password data and generate a password recognition processing result.
云端识别可以是云端语音识别和云端语义理解之一或包括两者的组合，云端处理则是根据提取的语言信息，进行相应的处理。目前很多互联网公司提供在线的语音识别和语义理解等云端软件功能服务，通过接入这些公司提供的API，即可获取相应的服务。例如，向在线的航班服务提供商上发送一条"北京到汉口的航班查询"的语音信号，则航班服务提供商对该语音信号进行语音识别，语音分析，语义理解等，从而得到一个语音信号的逻辑含义，根据逻辑含义，返回北京到汉口的当日航班信息，将云端语音识别处理结果返回给控制器108。Cloud recognition can be cloud speech recognition, cloud semantic understanding, or a combination of the two; cloud processing then performs corresponding processing according to the extracted language information. At present, many Internet companies provide online cloud software services such as speech recognition and semantic understanding; the corresponding services can be obtained by accessing the APIs provided by these companies. For example, if a voice signal "flight query from Beijing to Hankou" is sent to an online flight service provider, the provider performs speech recognition, speech analysis and semantic understanding on the voice signal, obtains the logical meaning of the voice signal, returns the same-day flight information from Beijing to Hankou according to that meaning, and returns the cloud speech recognition processing result to the controller 108.
智能机器人100可常规地处于待机或休眠状态，等待使用者的启动（例如人声呼唤）。在一个实施例中，语音采集单元101采集语音信号，口令识别处理单元105或云端识别单元110可以识别语音信号中的启动指令并传输给控制器108，控制器108响应于这一启动指令，令智能机器人100开始工作。当然可以理解，控制器108可以在其它状况下令智能机器人100开始工作。例如控制器108响应于使用者的开关按钮令智能机器人100开始工作。在替代例子中，启动指令也可以包含在无线信号中。例如智能机器人包括无线通信单元和无线信号识别单元（未图示），无线通信单元用于接收外部传输的无线信号，无线信号识别单元用于识别无线信号中的启动指令。The intelligent robot 100 can normally be in a standby or sleep state, waiting to be activated by the user (for example, by a voice call). In one embodiment, the voice collection unit 101 collects a voice signal, the password recognition processing unit 105 or the cloud recognition unit 110 recognizes a startup command in the voice signal and transmits it to the controller 108, and in response to this startup command the controller 108 causes the intelligent robot 100 to start working. It can of course be understood that the controller 108 can cause the intelligent robot 100 to start working under other conditions; for example, the controller 108 causes the intelligent robot 100 to start working in response to the user's power button. In an alternative example, the startup command may also be contained in a wireless signal. For example, the intelligent robot includes a wireless communication unit and a wireless signal recognition unit (not shown); the wireless communication unit receives an externally transmitted wireless signal, and the wireless signal recognition unit recognizes a startup command in the wireless signal.
图2是本发明第二实施例的具有云端互动功能的智能机器人的系统框图。参考图2所示，可以发现，和图1所示的智能机器人结构相比较，图2所示的智能机器人增加了声纹识别单元113和网络判断单元114。FIG. 2 is a system block diagram of an intelligent robot with cloud interaction function according to a second embodiment of the present invention. Referring to FIG. 2, compared with the intelligent robot structure shown in FIG. 1, the intelligent robot shown in FIG. 2 adds a voiceprint recognition unit 113 and a network judgment unit 114.
声纹识别单元113可连接语音采集单元101和控制器108,声纹识别单元113用于根据预存储的声纹资料对发出所述语音信号的人进行身份验证,其中声纹资料可以存储在本地,也存储在云端(如云端服务器200)。通过声纹识别让智能机器人只对特定人物的声音信号响应,以此增加智能机器人的安全性。The voiceprint recognition unit 113 can be connected to the voice collection unit 101 and the controller 108, and the voiceprint recognition unit 113 is configured to perform identity verification on the person who sends the voice signal according to the pre-stored voiceprint data, wherein the voiceprint data can be stored locally. Also stored in the cloud (such as Cloud Server 200). The voiceprint recognition allows the intelligent robot to respond only to the sound signals of specific people, thereby increasing the safety of the intelligent robot.
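One hedged illustration of such a verification gate: compare a voice embedding against the enrolled voiceprint data and accept only close matches. Real voiceprint systems derive embeddings with dedicated speaker models; the plain vectors and the 0.8 threshold here are assumptions of this sketch.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length, non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_speaker(embedding, enrolled_prints, threshold=0.8):
    """Return True only when the voice embedding is close enough to one of
    the pre-stored voiceprints, i.e. the speaker is authorized."""
    return any(cosine_similarity(embedding, p) >= threshold
               for p in enrolled_prints)
```

The controller would only pass signals from verified speakers on to password or cloud recognition, which is the safety behaviour described above.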
网络判断单元114在控制器108和云端识别单元110之间，能够判断智能机器人100与云端服务器200的连接状态并根据该连接状态生成网络判断结果。为此，在将语音信号发送到云端服务器200进行云端识别处理之前，先获取当前的网络状态，只有在网络判断结果为网络正常的情况下才将语音信号发送云端服务器200进行识别处理。目前现有的网络连接技术有无线和有线连接，考虑到智能机器人需要移动的特点，优选的方式是无线连接，通过WIFI或蓝牙连接到互联网上。The network judgment unit 114 sits between the controller 108 and the cloud recognition unit 110; it can determine the connection state between the intelligent robot 100 and the cloud server 200 and generate a network judgment result based on that state. To this end, before a voice signal is sent to the cloud server 200 for cloud recognition processing, the current network state is obtained first, and the voice signal is sent to the cloud server 200 for recognition processing only when the network judgment result indicates that the network is normal. Existing network connection technologies include wireless and wired connections; considering that the intelligent robot needs to move, the preferred mode is a wireless connection to the Internet through WIFI or Bluetooth.
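This send-only-when-the-network-is-normal gate can be sketched as follows. The probe host name is a placeholder, not a real endpoint of this system, and the injectable `check` callable stands in for the network judgment unit.

```python
import socket

def network_ok(host="cloud.example.com", port=443, timeout=2.0):
    """Judge the network state by probing the cloud server's port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def send_for_cloud_recognition(voice_signal, send_fn, check=network_ok):
    """Forward the signal only when the network judgment result is normal;
    otherwise skip cloud recognition (the caller may fall back to local)."""
    if check():
        return send_fn(voice_signal)
    return None
```

Gating on a cheap reachability check before uploading avoids long network waits when the connection is down, which is the stated purpose of unit 114.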
应当理解,虽然本实施例中包含了本地识别和云端识别,但可能在一次语音识别过程中,只进行了一次语音识别即得到了预期结果。必要的时候,控制器108会根据当前识别处理单元的识别处理结果,确定是否调用另一个识别处理单元。It should be understood that although the local identification and the cloud recognition are included in the embodiment, it is possible that in one speech recognition process, only one speech recognition is performed, and the expected result is obtained. When necessary, the controller 108 determines whether to call another recognition processing unit based on the recognition processing result of the current recognition processing unit.
从本实施例可知,智能机器人100集成了离线的口令识别和云端在线识别,并能够根据实际场景或其他策略确定适用的识别单元以及执行顺序,扩展了机器人的使用范围。另外,随着网络服务商的发展,可根据需要扩展云端识别处理功能,使智能机器人的智能性得到增强。It can be seen from the embodiment that the intelligent robot 100 integrates offline password recognition and cloud online recognition, and can determine an applicable identification unit and an execution sequence according to actual scenarios or other strategies, and expands the scope of use of the robot. In addition, with the development of network service providers, the cloud recognition processing function can be extended as needed to enhance the intelligence of the intelligent robot.
相应的,本发明提供了一个云端语音识别方法,图4示出云端语音识别方法的一个实施例的流程图。如图4所示,所述云端语音识别方法包括步骤410-460。Accordingly, the present invention provides a cloud speech recognition method, and FIG. 4 shows a flow chart of one embodiment of a cloud speech recognition method. As shown in FIG. 4, the cloud speech recognition method includes steps 410-460.
在步骤410中,获得外部输入的语音信号。例如,通过安装在智能机器人身体部位的麦克风接收外部输入的声音信号。在优选实施例中,采用了智能化数字阵列降噪拾音器的麦克风,其具有2种降噪模式,最大可降低45dB噪音。另外,麦克风分别置于企鹅形态的智能机器人的双耳处,通过分散采集声音信号保证获取的音频信号的准确性和完整性。In step 410, an externally input speech signal is obtained. For example, an externally input sound signal is received through a microphone mounted on the body part of the intelligent robot. In a preferred embodiment, a microphone employing an intelligent digital array noise canceling pickup having two noise reduction modes reduces noise by up to 45 dB. In addition, the microphones are respectively placed at the ears of the penguin-shaped intelligent robot, and the accuracy and integrity of the acquired audio signals are ensured by dispersing the collected sound signals.
在步骤420中,将语音信号发送至云端服务器执行云端识别处理。利用云端的软件服务和云端语音存储功能,实现云端语音识别和云端语义理解,保证语音信号被最大限度的识别以及根据语音信号中提取的语言信息,获取相应的服务或 信息。例如,目前很多互联网公司提供在线的语音识别和语义理解等云端软件功能服务,通过接入这些公司提供的API,即可获取相应的服务。In step 420, the voice signal is sent to the cloud server to perform cloud recognition processing. Utilize cloud software services and cloud voice storage functions to realize cloud speech recognition and cloud semantic understanding, to ensure that voice signals are recognized to the maximum extent and to obtain corresponding services according to the language information extracted from voice signals. information. For example, many Internet companies currently provide cloud software function services such as online speech recognition and semantic understanding. By accessing the APIs provided by these companies, they can obtain corresponding services.
在步骤430中,判断语音信号是否能够云端识别处理。在本步骤中,对步骤420的云端语音识别结果进行判断,如果识别成功,则响应口令,并交给步骤460执行,否则执行步骤440,进行本地口令识别处理。In step 430, it is determined whether the voice signal is capable of cloud recognition processing. In this step, the cloud speech recognition result of step 420 is judged. If the recognition is successful, the password is responded to and executed in step 460. Otherwise, step 440 is performed to perform local password recognition processing.
在步骤440中,进行本地口令识别处理。本地口令识别处理是对云端识别的补充,在云端识别失败后,启动本地口令识别处理,根据预存储在本地的口令和输入的口令进行比对以及调用相应的处理模块,并获取处理结果。In step 440, a local password recognition process is performed. The local password recognition process is a supplement to the cloud recognition. After the cloud identification fails, the local password recognition process is started, the password stored in the local and the input password are compared, the corresponding processing module is called, and the processing result is obtained.
在步骤450中,判断口令是否能被识别处理。在本步骤中,如果口令识别处理成功,则根据处理结果,确定再启动执行机构。如果口令识别处理失败,则不进行任何操作。In step 450, it is determined whether the password can be identified. In this step, if the password recognition process is successful, it is determined based on the processing result that the actuator is restarted. If the password recognition process fails, no action is taken.
在步骤460中,响应口令。例如驱动智能机器人的执行机构执行机械动作或提供信息。执行机构可以包括扬声器、显示器和运动部件,用于播放语音提示信息、显示文本或图案、执行机械动作。例如,回答用户的问候信息,或者根据预先编辑的问答列表回答问题,或者根据用户的要求做一些简单动作。In step 460, the password is responded to. For example, an actuator that drives an intelligent robot performs mechanical actions or provides information. The actuator can include a speaker, a display, and a moving component for playing voice prompt information, displaying text or graphics, and performing mechanical actions. For example, answer the user's greeting message, or answer the question according to the pre-edited question and answer list, or do some simple actions according to the user's request.
图5示出本发明的云端语音识别方法的另一个实施例的流程图。如图5所示,所述云端语音识别方法包括步骤510-560。Figure 5 is a flow chart showing another embodiment of the cloud speech recognition method of the present invention. As shown in FIG. 5, the cloud speech recognition method includes steps 510-560.
从图5可以看出，图5所示的云端语音识别方法和图4所示的云端语音识别方法只在执行顺序上有区别，在图5中，接收到语音信号后，首先进行本地口令识别处理，再进行云端识别处理，图4则相反。在此仅描述与图4相区别的步骤520-550。As can be seen from FIG. 5, the cloud speech recognition method shown in FIG. 5 differs from that shown in FIG. 4 only in the execution order: in FIG. 5, after the voice signal is received, the local password recognition processing is performed first and then the cloud recognition processing, while FIG. 4 is the opposite. Only steps 520-550, which differ from FIG. 4, are described here.
在步骤520中,进行本地口令识别处理。根据预存储在本地的口令和输入的口令进行比对以及调用相应的处理模块,并获取口令识别处理结果。In step 520, a local password recognition process is performed. The comparison is performed according to the pre-stored password and the entered password, and the corresponding processing module is called, and the password recognition processing result is obtained.
在步骤530中,判断语音信号是否能被识别处理。在本步骤中,对步骤520的口令识别处理结果进行判断,如果识别成功,则确定再启动执行机构,并交给步骤560执行,否则执行步骤540。In step 530, it is determined whether the speech signal can be identified. In this step, the password recognition processing result of step 520 is judged. If the recognition is successful, it is determined that the execution mechanism is restarted, and the processing is performed to step 560, otherwise step 540 is performed.
在步骤540中,将语音信号发送至云端服务器进行云端识别。利用云端的软件服务和云端语音存储功能,实现云端语音识别和云端语义理解,保证语音信号被最大限度的识别以及根据语音信号中提取的语言信息,获取相应的服务或信息。例如,目前很多互联网公司提供在线的语音识别和语义理解等云端软件功能服务,通过接入这些公司提供的API,即可获取相应的服务。 In step 540, the voice signal is sent to the cloud server for cloud identification. The cloud software service and cloud voice storage function are used to realize cloud speech recognition and cloud semantic understanding, to ensure that the voice signal is recognized to the maximum extent and to obtain corresponding service or information according to the language information extracted from the voice signal. For example, many Internet companies currently provide cloud software function services such as online speech recognition and semantic understanding. By accessing the APIs provided by these companies, they can obtain corresponding services.
在步骤550中,判断语音信号是否能够云端识别处理。在本步骤中,对步骤540的云端语音识别结果进行判断,如果识别成功,则确定再启动执行机构,并交给步骤560执行。如果云端识别处理失败,则不进行任何操作。In step 550, it is determined whether the voice signal is capable of cloud recognition processing. In this step, the cloud speech recognition result of step 540 is judged. If the recognition is successful, it is determined that the execution mechanism is restarted, and the process is performed in step 560. If the cloud recognition processing fails, no action is taken.
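The local-first flow of steps 520-560 can be sketched as follows. The function names, the password table, and the cloud call are illustrative placeholders, not part of the patent:

```python
def handle_speech(signal, local_passwords, cloud_recognize):
    """Local-first speech handling: try the on-board password table,
    fall back to the cloud recognizer, otherwise take no action."""
    # Steps 520/530: local password recognition.
    for password, action in local_passwords.items():
        if signal == password:          # placeholder for real acoustic matching
            return action               # step 560: start the actuator
    # Steps 540/550: cloud recognition as a fallback.
    result = cloud_recognize(signal)    # hypothetical cloud API call
    if result is not None:
        return result
    return None                         # recognition failed: take no action

# Minimal usage with stub data:
actions = {"hello": "wave", "sit": "sit_down"}
print(handle_speech("sit", actions, lambda s: None))      # matched locally
print(handle_speech("dance", actions, lambda s: "spin"))  # resolved in the cloud
```

The variant of FIG. 6 would simply wrap the cloud call in a network-availability check.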
FIG. 6 is a flowchart of another embodiment of the cloud speech recognition method of the present invention. As shown in FIG. 6, the method includes steps 610-670. Compared with FIG. 5, step 640, "determine the cloud network status", is added: the speech signal is submitted to the cloud server for recognition only when the cloud network is available. This embodiment improves the efficiency of cloud recognition and reduces network latency.
In a preferred embodiment, the recognition execution priority can also be determined according to a predefined preference policy. For example, fuzzy matching can decide which speech signals are sent to the cloud server first and which are processed locally first. As another example, the priority can be determined by enumeration: the set of locally recognizable passwords is relatively limited, and any speech outside that set is sent to the cloud server.
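The enumeration policy with fuzzy matching can be sketched using the standard library's `difflib`; the command set and cutoff are invented for illustration, not specified by the patent:

```python
from difflib import get_close_matches

# Enumerated local command set (illustrative; the patent leaves the set unspecified).
LOCAL_PASSWORDS = ["wake up", "sit", "shake hands"]

def route(utterance, cutoff=0.8):
    """Fuzzy-match the utterance against the enumerated local set;
    anything without a close match is routed to the cloud server."""
    match = get_close_matches(utterance, LOCAL_PASSWORDS, n=1, cutoff=cutoff)
    return "local" if match else "cloud"

print(route("sit"))          # local (exact)
print(route("shake hand"))   # local (fuzzy match)
print(route("tell a joke"))  # cloud
```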
In another preferred embodiment, before the speech signal is sent to the server for cloud recognition, it is pre-processed by one or more of modulation/demodulation, speech noise reduction, and audio amplification.

In another preferred embodiment, before the speech signal is sent to the server for cloud recognition, the speaker can also be authenticated against pre-stored voiceprint data.
Face recognition
The image acquisition unit 102 captures one or more externally input scene images. An example of the image acquisition unit 102 is a camera, which can be mounted in the eyes of the intelligent robot 100. The image acquisition unit 102 may capture images continuously, or capture one or several frames at regular intervals, depending on the application.
The captured scene images are transmitted over a wiring harness to the recognition selection unit 109 located in the robot's body cavity. The recognition selection unit 109 receives a scene image and selects a suitable image recognition unit according to a predetermined policy. The embodiments of the present invention provide two image recognition modes, local recognition and cloud recognition; after choosing a mode, the recognition selection unit 109 sends the scene image to the corresponding recognition unit and receives the processing result. For local recognition, the scene image is sent to the local image recognition processing unit 106. For cloud recognition, the image is sent through the cloud recognition unit 110 to the cloud server 200, which performs face recognition, and the face recognition result returned by the cloud server 200 is received. The recognition selection unit 109 can be configured with various predetermined policies, for example specifying the recognition unit for a scene image, or defaulting to local recognition first and cloud recognition second, or the reverse. A well-chosen policy reduces time wasted on fruitless recognition and improves the robot's efficiency. For example, local recognition is generally faster than cloud recognition, so a scene image is usually recognized locally first and in the cloud second. In one example, the recognition selection unit 109 decides, based on the local image recognition result, whether to send the scene image to the cloud server 200: if local recognition succeeds, subsequent processing proceeds; if not, the scene image is sent to the cloud server 200 for cloud recognition. In another example, the recognition selection unit 109 decides, based on the cloud face recognition result, whether to perform local image recognition: if cloud recognition succeeds, subsequent processing proceeds; if not, local image recognition is performed on the scene image.
In the preferred embodiment shown in FIG. 2, the intelligent robot 100 further includes a face image acquisition unit 116, connected to the image acquisition unit 102 and the recognition selection unit 109, which extracts face images bearing identifiable feature points from the externally input scene images. Specifically, the face image acquisition unit 116 may run a preliminary selection algorithm designed to retain face images with identifiable feature points while discarding images that contain no face or are too blurred to recognize. If no such face image is obtained, the face image acquisition unit 116 discards the unusable images and notifies the image acquisition unit 102 to continue capturing scene images. The local image recognition processing unit 106 can then perform local image recognition on the retained face images and generate a local recognition result, and the cloud recognition unit 110 can send them to the cloud server 200, which performs face recognition, and receive the cloud face recognition result. This filtering saves both processing and transmission resources.

In the preferred embodiment shown in FIG. 2, the intelligent robot 100 further includes a preset image storage unit 117 for storing preset image data, so that the local image recognition processing unit 106 can match face images bearing identifiable feature points against the preset image data and generate a local image recognition result.
A feature of this embodiment is that some complex computation need not consume the intelligent robot's internal resources but can be delegated to an external server. In one embodiment, the intelligent robot 100 performs only preliminary processing on the captured scene images, selecting those in which a face is present, and then sends these face images together with a face recognition request to the cloud server 200. The cloud server 200, equipped with a program executing a face recognition algorithm, responds to the request by analyzing the feature points of each image, comparing them against a face database, and obtaining the face recognition information. The face recognition algorithm on the cloud server 200 can be any known algorithm and is not detailed here. Preferably, the face image acquisition unit 116 can also determine whether a scene image contains a face image with identifiable feature points, extracting it if so and otherwise notifying the image acquisition unit 102 to continue capturing scene images.

The cloud recognition unit 110 can transmit face images to the cloud server 200 over a wireless LAN connected to the Internet. The cloud server 200 can obtain and build a face database of family members in advance for comparison. Alternatively, the cloud recognition unit 110 can transmit face images to the cloud server over the mobile Internet. Commercial service robots typically rely on the cloud server 200 to store a sufficiently large face database and to provide sufficiently powerful processing resources.
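The division of labor described above, cheap pre-filtering on the robot followed by heavy recognition in the cloud, can be sketched as below. `detect_feature_points` and the cloud callback stand in for a real on-board detector and a real cloud API; neither is specified by the patent:

```python
def detect_feature_points(image):
    """Placeholder for an on-board face/landmark detector.
    Here an 'image' is a dict; a real system would run e.g. a cascade detector."""
    return image.get("feature_points", [])

def prefilter(scene_images, min_points=5):
    """Step on the robot: keep only images with enough identifiable feature points
    (blurred or face-free images are discarded locally)."""
    return [img for img in scene_images if len(detect_feature_points(img)) >= min_points]

def recognize(scene_images, cloud_face_recognize, min_points=5):
    faces = prefilter(scene_images, min_points)
    if not faces:
        return None  # nothing usable: keep capturing scene images
    return cloud_face_recognize(faces)  # hypothetical cloud recognition request

scenes = [{"feature_points": []},              # no face: dropped locally
          {"feature_points": list(range(8))}]  # plausible face: sent to the cloud
print(recognize(scenes, lambda faces: f"{len(faces)} face image(s) recognized"))
```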
The present invention provides a face recognition method; FIG. 11 is a flowchart of a face recognition method according to an embodiment of the present invention. As shown in FIG. 11, the face recognition method includes steps 1110-1170.

In step 1110, one or more externally input scene images are captured, for example by the image acquisition unit 102 mounted on the intelligent robot. Preferably, cameras serving as the image acquisition unit 102 are placed in the eyes of a penguin-shaped intelligent robot.
In step 1120, face images with identifiable feature points are extracted from the externally input scene images, for example by the face image acquisition unit 116.
In step 1130, the face images with identifiable feature points are transmitted to the cloud server 200. Cloud software services and cloud face-database storage are used to perform cloud face recognition, ensuring that face images are recognized to the greatest extent possible. For example, many Internet companies currently offer online face recognition as a cloud service; the corresponding service can be obtained by calling the APIs they provide.

In step 1140, it is determined whether the face image can be recognized in the cloud. The cloud face recognition result of step 1130 is evaluated; if recognition succeeded, the method proceeds to step 1170; otherwise step 1150 is executed to perform local image recognition.

In step 1150, local image recognition is performed. Local recognition supplements cloud recognition: after cloud recognition fails, local image recognition is started, and the face image with identifiable feature points is transmitted to the local image recognition processing unit 106, which matches it against the preset image data and generates a local image recognition result.

In step 1160, it is determined whether the face image can be recognized. If local image recognition succeeded, the method proceeds to step 1170; if it failed, no action is taken.

In step 1170, the recognition result is saved. The result can be used by the controller 108 together with other results.
FIG. 12 is a flowchart of a face recognition method according to another embodiment of the present invention. As shown in FIG. 12, the face recognition method includes steps 1210-1270.

As can be seen from FIG. 12, the method of FIG. 12 differs from that of FIG. 11 only in execution order: in FIG. 12, after a face image is received, local image recognition is performed first and cloud face recognition second, whereas FIG. 11 does the opposite. Only steps 1230-1250, which differ from FIG. 11, are described here.

In step 1230, local image recognition is performed: the face image with identifiable feature points is transmitted to the local image recognition processing unit 106, which matches it against the preset image data and generates a local image recognition result.

In step 1240, it is determined whether the face image can be recognized. If local image recognition succeeded, the method proceeds to step 1270; if it failed, step 1250 is executed to perform cloud face recognition.

In step 1250, the face image with identifiable feature points is transmitted to the cloud server 200. Cloud software services and cloud face-database storage are used to perform cloud face recognition, ensuring that face images are recognized to the greatest extent possible. For example, many Internet companies currently offer online face recognition as a cloud service; the corresponding service can be obtained by calling the APIs they provide.

In step 1260, it is determined whether the face image can be recognized in the cloud. The cloud face recognition result of step 1250 is evaluated; if recognition succeeded, the method proceeds to step 1270; otherwise no action is taken.

In step 1270, the recognition result is saved. The result can be used by the controller 108 together with other results.
FIG. 13 is a flowchart of a face recognition method according to another embodiment of the present invention. As shown in FIG. 13, the face recognition method includes steps 1310-1380. Compared with FIG. 12, step 1350, "determine the cloud network status", is added: the face image is submitted to the cloud server for recognition only when the cloud network is available. This embodiment improves the efficiency of cloud recognition and reduces network latency.

In a preferred embodiment, the recognition execution priority can also be determined according to a predefined preference policy. For example, fuzzy matching can decide which face images are sent to the cloud server first and which must be processed locally. As another example, the priority can be determined by enumeration: the set of locally recognizable images is relatively limited, and any face image outside that set is sent to the cloud server.
Pressure recognition
The pressure signal acquisition unit 103 senses external pressure signals on the surface of the intelligent robot. It typically includes an array of thin-film pressure sensor patches and an analog-to-digital (A/D) conversion circuit. The thin-film pressure sensors can be distributed over the robot's chest, forelimbs, head, and back. In this embodiment the sensors have an adhesive backing and are attached directly to the relevant part of the robot's body. Strip-shaped sensors can be mounted on the back, chest, abdomen, and/or forelimbs to sense the force applied over a strip-shaped area, and a square sensor can be mounted on the head to sense the force applied over a square area. The thin-film pressure sensors in this embodiment are preferably resistive pressure sensors.

The pressure sensor patch array acquires the external pressure signal and transmits it to the A/D conversion circuit. The array can use ultra-thin resistive pressure sensors as the force-detecting devices: each sensor converts the pressure applied to its film area into a change in resistance, yielding a signal corresponding to the pressure. The greater the external pressure, the lower the resistance; the sensor's internal circuitry converts the pressure-induced resistance change into a change in voltage or current, which is output as an analog signal to the A/D conversion circuit.

The A/D conversion circuit converts the external pressure signal into a digital signal and transmits it to the controller 108, which can pass it to the pressure signal recognition processing unit 107. In an alternative embodiment, the pressure signal acquisition unit 103 is connected directly to the pressure signal recognition processing unit 107 and transmits its signal to it directly.
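A minimal sketch of the A/D scaling step is shown below. The patent does not give the sensor's transfer function, so the reference voltage, full-scale pressure, and the linearity assumption are all illustrative:

```python
ADC_BITS = 12          # assumed converter resolution
V_REF = 3.3            # assumed reference voltage (volts)
FULL_SCALE_PA = 5000   # assumed pressure at full-scale output (N/m^2)

def adc_to_pressure(count):
    """Map a raw ADC count to a pressure estimate, assuming a linear response."""
    voltage = count / (2 ** ADC_BITS - 1) * V_REF
    return voltage / V_REF * FULL_SCALE_PA

print(round(adc_to_pressure(2047)))  # mid-scale reading -> roughly half of full scale
```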
The pressure signal recognition processing unit 107 acquires the external pressure signal and processes it to generate a pressure-aware emotion signal. FIG. 14 is a block diagram of the pressure signal recognition processing unit of the multi-perception intelligent robot of FIG. 1. As shown in FIG. 14, the pressure signal recognition processing unit 107 includes a pressure type determination unit 205, a pressure position determination unit 206, a pressure-aware emotion signal generation unit 207, and a data storage unit 208.

The pressure type determination unit 205 calculates the duration of the external pressure signal and the rate of pressure change, and determines the type of the external pressure signal by comparing the rate of change against a preset change threshold. The intelligent robot 100 can ordinarily remain in a standby or sleep state, waiting to be woken by the user's touch; for example, an instruction to start the robot 100 can be carried in the pressure signal, and the pressure signal recognition processing unit 107 identifies the start instruction in that signal.

The pressure position determination unit 206 determines the position where the pressure is applied, based on the external pressure signal.

The pressure-aware emotion signal generation unit 207 compares the pressure position and the type of the external pressure signal against a preset mapping list and generates the pressure-aware emotion signal corresponding to that position and type.

The controller 108 compares the received pressure-aware emotion signal against a preset mapping list and generates the corresponding emotion expression part and emotion expression instruction. The emotion expression instruction controls the execution of a corresponding mechanical action, the playback of a corresponding prompt voice, and/or the display of corresponding prompt information.

In a preferred embodiment, as shown in FIG. 14, the intelligent robot 100 further includes a data storage unit 208, connected to the pressure type determination unit 205 and the pressure-aware emotion signal generation unit 207, for storing the preset change thresholds and the preset mapping list.
FIG. 7 is a schematic diagram of the use of a pressure sensor. In FIG. 7, the pressure sensor 700 includes a pressure-sensitive layer 703 and an adhesive layer 702; the adhesive layer 702 allows the sensor to be attached at any position on the intelligent robot housing 701. The size and area of the pressure sensor can also be adjusted as needed.

FIG. 8 is a flowchart of a tactile sensing method according to an embodiment of the present invention. The tactile sensing method of this embodiment includes steps 801-806.

In step 801, an external pressure signal is acquired and converted into a digital signal. When the method is applied to an intelligent robot, sensing components are attached to each part of the robot's body to acquire the pressure signal at that part. In this step, the acquired pressure signal is converted into a digital signal for subsequent processing.
In step 802, the duration of the external pressure signal is measured, and the rate of pressure change is calculated from the duration and the digital signal. In a preferred embodiment, a preset window of 0.5-1.5 seconds within the signal's duration is selected; the change in the pressure signal (i.e., the change in the applied external force) over that window is computed, and the ratio of that change to the window length is taken as the rate of pressure change. A window of 0.5-1.5 seconds is generally long enough for the sensor to capture an accurate change in the applied force and hence in the digital signal. For example, if an external force of 100 newtons is applied over 1 second on an area of 0.026 square meters, then 100/0.026 ≈ 3846 newtons per square meter, and 3846 N/m² is the value characterizing the rate of pressure change.
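The arithmetic of the worked example (a 100 N force on 0.026 m², sampled over a 1-second window) can be reproduced directly; the helper function below is illustrative, not taken from the patent:

```python
def pressure_change_rate(force_start_n, force_end_n, area_m2, window_s):
    """Rate used in step 802: change in force per unit area over the window."""
    return (force_end_n - force_start_n) / area_m2 / window_s

rate = pressure_change_rate(0.0, 100.0, 0.026, 1.0)
print(round(rate))  # about 3846, matching the example above
```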
In step 803, the rate of pressure change is compared with a preset first change threshold, and the type of the external pressure signal is determined from the comparison.

In step 804, the type of the external pressure signal is determined to be a tap.

In step 805, the type of the external pressure signal is determined to be a stroke.
For example, the rate of pressure change in the example above is 3846 N/m²; if this rate exceeds the preset change threshold, the signal is classified as a tap, otherwise as a stroke.
In step 806, the pressure position and the type of the external pressure signal are compared against the preset mapping list, and the emotion expression part and emotion expression instruction corresponding to that position and type are generated, triggering the emotional expression.

The preset mapping list stores the mapping from pressure position and external pressure signal type to robot feedback. In one embodiment, the mapping is as shown in Table 1 below:
Table 1
[Table 1, mapping pressure position and pressure signal type to robot feedback, is filed as image PCTCN2017076274-appb-000001 in the original document.]
The emotion expression part and the emotion expression instruction are generated from the pressure position and the type of the external pressure signal. The emotion expression instruction characterizes the type of robot feedback, such as the robot feedback in the table above. Through the emotion expression instruction, the robot's actuators can be triggered to perform certain actions and expressions, conveying anthropomorphic emotions such as happiness, anger, or melancholy. The actuators for emotional expression can include the various parts of the robot's body, as well as speakers and displays mounted on it. For example, the robot may dance with its hands and feet, play a corresponding prompt tone through a speech synthesizer and speaker, display emoticons or prompts on a screen, or combine several of these forms of feedback.

The sensing method provided in the above embodiment enables the intelligent robot to give different feedback depending on the body part involved and the type of external force applied to it, making the robot more lifelike.
FIG. 9 is a flowchart of a tactile sensing method according to another embodiment of the present invention. The tactile sensing method includes steps 901-907. Steps 901-902 are the same as steps 801-802 of FIG. 8 and are not repeated here.

In step 903, the rate of pressure change is compared with preset first and second change thresholds. If the rate of change is greater than the second threshold, step 904 is executed; if it is greater than the first threshold but less than or equal to the second, step 905 is executed; otherwise step 906 is executed.

In steps 904, 905, and 906, the type of the external pressure signal is determined to be a hard tap, a light tap, or a stroke, respectively. Table 2 below is the new mapping table.
Table 2
[Table 2, mapping pressure position and the hard tap / light tap / stroke signal types to robot feedback, is filed as image PCTCN2017076274-appb-000002 in the original document.]
In step 907, the pressure position and the type of the external pressure signal are compared against the preset mapping list, and the emotion expression part and emotion expression instruction corresponding to that position and type are generated.

The tactile sensing method of FIG. 9 adds a second change threshold, dividing taps into hard taps and light taps; this increases the variety of the intelligent robot's processing and feedback and makes it more lifelike. Of course, those skilled in the art will understand that FIG. 8 and FIG. 9 are merely exemplary descriptions of the tactile sensing method of the present invention: the pressure types need not be limited to the three mentioned above, and any pressure type determined by comparing the signal's rate of change against preset thresholds falls within the scope of the present invention. Furthermore, the present invention emphasizes that the emotion expression part and emotion expression instruction are generated jointly from the pressure type and the pressure position; these are used to trigger many forms of emotional expression, and many mappings among pressure positions, pressure types, and control signals can be defined (as in the tables above). All such definitions and implementations fall within the scope of the present invention, as do reasonable variations made in its spirit by those skilled in the art.
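The three-way classification of step 903 together with a step 907 lookup can be sketched as follows. The threshold values and the mapping entries are invented for illustration; the patent's own mappings are filed as table images:

```python
FIRST_THRESHOLD = 1000.0   # illustrative values, in N/m^2 per second
SECOND_THRESHOLD = 3000.0

def classify(rate):
    """Steps 903-906: hard tap / light tap / stroke by two thresholds."""
    if rate > SECOND_THRESHOLD:
        return "hard tap"
    if rate > FIRST_THRESHOLD:
        return "light tap"
    return "stroke"

# Step 907: (position, type) -> (expression part, expression instruction).
MAPPING = {
    ("head", "stroke"):   ("speaker", "play_happy_sound"),
    ("head", "hard tap"): ("legs",    "back_away"),
}

def emotion(position, rate):
    return MAPPING.get((position, classify(rate)), ("display", "neutral_face"))

print(classify(3846.0))        # hard tap
print(emotion("head", 200.0))  # ('speaker', 'play_happy_sound')
```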
上述的触觉感知各单元应用于智能机器人时，通过在智能机器人身体各个部位黏贴的压力感知单元将压力传输到机器人的触觉感知各单元，通过触觉感知各单元处理后生成情绪表达部位和情绪表达指令，该控制信号用于驱动机器人做出各种情绪表达。When the above tactile sensing units are applied to the intelligent robot, pressure is transmitted to them through the pressure sensing units attached to the various parts of the robot body; after processing by the tactile sensing units, an emotion expression part and an emotion expression instruction are generated, and this control signal is used to drive the robot to make various emotional expressions.
机器人身体上，如双手、双脚、前胸、后背、头部等上安装多个执行机构112，如电机、扬声器、显示器等，这些部件和控制器108电连接，并按照收到的情绪表达部位和情绪表达指令做出相对应的情绪表达。A plurality of actuators 112, such as motors, speakers, and displays, are mounted on the robot body, for example on the hands, feet, chest, back, and head. These components are electrically connected to the controller 108 and produce the corresponding emotional expression according to the received emotion expression part and emotion expression instruction.
运动状态感测Motion state sensing
运动感测单元104用于感测智能机器人100的运动状态以生成运动状态参数。运动感测单元104的实例包括重力加速度传感器、陀螺仪或安装在机器人躯干上的倾角传感器,以实时测量机器人运动过程中加速度和角速度的数据。运动感测单元104的数据输出给控制器108。The motion sensing unit 104 is configured to sense a motion state of the smart robot 100 to generate a motion state parameter. Examples of the motion sensing unit 104 include a gravity acceleration sensor, a gyroscope, or a tilt sensor mounted on the robot's torso to measure data of acceleration and angular velocity during robot motion in real time. The data of the motion sensing unit 104 is output to the controller 108.
在运动控制方面，控制器108通过运动感测单元104获取运动参数的实时数据，通过调节算法调整运动。在运动过程中，控制器108将重力加速度传感器感测机器人的加速度等运动参数作为反馈，或利用陀螺仪或安装在机器人躯干上的倾角传感器感测该机器人的运动状态等运动参数作为反馈，利用模式识别算法识别出当前的运动状态，通过反馈调节运动，保证运动的稳定性。例如，控制器108通过运动感测单元104解算出机器人的倾角，模拟识别出是否处于要跌倒状态；如果靠近跌倒的边界，通过反馈调节关节，避免跌倒的发生。In terms of motion control, the controller 108 acquires real-time motion parameter data through the motion sensing unit 104 and adjusts the motion with a regulation algorithm. During motion, the controller 108 uses motion parameters such as the acceleration sensed by the gravity acceleration sensor as feedback, or uses the gyroscope or the tilt sensor mounted on the robot torso to sense the robot's motion state as feedback; a pattern recognition algorithm identifies the current motion state, and the motion is adjusted through feedback to ensure stability. For example, the controller 108 computes the robot's inclination from the motion sensing unit 104 and recognizes whether the robot is about to fall; if the robot approaches the falling boundary, the joints are adjusted through feedback to avoid a fall.
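The fall-avoidance feedback described above can be sketched as follows: the torso inclination is computed from the accelerometer's gravity vector, and a corrective joint adjustment is triggered when the robot nears its falling boundary. The fall limit, the safety margin, and the proportional correction are all assumed values for illustration, not parameters from the source.

```python
import math

# Sketch of the feedback loop described above: compute the torso inclination
# from accelerometer axes and trigger a corrective joint adjustment when the
# robot nears the falling boundary. Both angle constants are assumed values.

FALL_LIMIT_DEG = 30.0   # hypothetical inclination at which the robot falls
MARGIN_DEG = 15.0       # start correcting this far before the limit

def inclination_deg(ax, ay, az):
    """Tilt of the gravity vector from vertical, from accelerometer axes."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

def motion_feedback(ax, ay, az):
    tilt = inclination_deg(ax, ay, az)
    if tilt >= FALL_LIMIT_DEG - MARGIN_DEG:
        # lean the joints back toward vertical, proportional to the tilt
        return {"action": "adjust_joints", "correction_deg": -tilt}
    return {"action": "none", "correction_deg": 0.0}
```

An upright robot (gravity entirely on the vertical axis) yields no correction, while a tilt past the margin yields a proportional counter-adjustment.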
互动决策Interactive decision
控制器108连接语音采集单元101、图像采集单元102、压力信号获取单元103、运动感测单元104、口令识别处理单元105、本地图像识别处理单元106、压力信号识别处理单元107、识别选择单元109、云端识别单元110和执行机构112。控制器108可获取语音信号、场景图像或人脸图像、外部压力信号和运动状态参数，用于控制机器人的整体运作。例如控制器108可命令语音采集单元101、图像采集单元102、压力信号获取单元103、运动感测单元104捕捉外部信息，或者命令口令识别处理单元105、本地图像识别处理单元106、压力信号识别处理单元107开始工作以获得所需的口令识别结果、本地图像识别结果、压力感知型情绪信号等。控制器108命令云端识别单元110和云端服务器200通信，以将需要进一步处理的数据发送给云端服务器200，并从云端服务器200获得处理结果，例如云端语音识别结果、云端人脸识别结果。控制器108可命令例如执行机构112执行相应动作。The controller 108 is connected to the voice collection unit 101, the image collection unit 102, the pressure signal acquisition unit 103, the motion sensing unit 104, the password recognition processing unit 105, the local image recognition processing unit 106, the pressure signal recognition processing unit 107, the recognition selection unit 109, the cloud recognition unit 110, and the actuator 112. The controller 108 can acquire voice signals, scene or face images, external pressure signals, and motion state parameters to control the overall operation of the robot. For example, the controller 108 can command the voice collection unit 101, the image collection unit 102, the pressure signal acquisition unit 103, and the motion sensing unit 104 to capture external information, or command the password recognition processing unit 105, the local image recognition processing unit 106, and the pressure signal recognition processing unit 107 to start working so as to obtain the required password recognition result, local image recognition result, pressure-aware emotion signal, and so on. The controller 108 commands the cloud recognition unit 110 to communicate with the cloud server 200, sending data that needs further processing to the cloud server 200 and obtaining processing results from it, such as cloud speech recognition results and cloud face recognition results. The controller 108 can then command, for example, the actuator 112 to perform the corresponding action.
控制器108会根据口令识别处理结果和云端语音识别处理结果至少之一、本地图像识别结果和云端人脸识别结果至少之一、压力感知型情绪信号中的任一个或多个的组合作出智能机器人100的互动决策，且控制器108可选地根据运动状态参数调整智能机器人100的运动。The controller 108 makes an interactive decision for the intelligent robot 100 according to any one or a combination of: at least one of the password recognition processing result and the cloud speech recognition processing result; at least one of the local image recognition result and the cloud face recognition result; and the pressure-aware emotion signal. The controller 108 optionally adjusts the motion of the intelligent robot 100 according to the motion state parameters.
智能机器人100可常规地处于待机或休眠状态，等待使用者的启动（例如人声呼唤或者轻拍唤醒）。在一个实例中，语音采集单元101采集语音信号，并传输给控制器108，控制器108将其发送给口令识别处理单元105后可以识别语音信号中的启动指令，响应于这一启动指令，据此开启机器人开始工作。在另一个实例中，压力信号获取单元103采集压力信号后，控制器108响应于这一压力信号，据此开启智能机器人100开始工作。当然可以理解，控制器108可以在其它状况下开启智能机器人100。例如控制器108响应于使用者的开关按钮开启智能机器人100。The intelligent robot 100 may normally remain in a standby or sleep state, waiting to be activated by the user (for example, by a voice call or a tap). In one example, the voice collection unit 101 collects a voice signal and transmits it to the controller 108; after the controller 108 forwards it to the password recognition processing unit 105, a start command in the voice signal can be recognized, and in response the robot is started. In another example, after the pressure signal acquisition unit 103 collects a pressure signal, the controller 108 responds to it and starts the intelligent robot 100. It will of course be understood that the controller 108 can start the intelligent robot 100 under other conditions, for example in response to the user pressing a power button.
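The standby-and-wake behavior above can be sketched as a small state machine; the event names ("voice_start_command", "tap", "power_button") are illustrative, not identifiers from the source.

```python
# Minimal sketch of the standby/wake logic described above. Any of the
# listed wake events moves the robot from standby to working; event names
# are illustrative placeholders.

class RobotPower:
    WAKE_EVENTS = {"voice_start_command", "tap", "power_button"}

    def __init__(self):
        self.state = "standby"

    def on_event(self, event):
        if self.state == "standby" and event in self.WAKE_EVENTS:
            self.state = "working"
        return self.state
```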
各个感知设备获得的有关智能机器人100的运动、语音等实时数据也可以通过云端识别单元110传输给云端服务器200，从而实现云端服务器200对智能机器人100的运行情况进行监测。通过云端识别单元110与云端服务器200的连接，将数据传至云端以进行处理，可以提高系统的实时处理能力。Real-time data about the motion, voice, and other aspects of the intelligent robot 100 obtained by the various sensing devices can also be transmitted to the cloud server 200 through the cloud recognition unit 110, enabling the cloud server 200 to monitor the operation of the intelligent robot 100. By transmitting data to the cloud for processing through this connection, the real-time processing capability of the system can be improved.
控制器108是智能机器人100的核心，主要负责采集各感知设备信号和数据，对信号和数据进行分析处理，从而进行互动和运动决策。控制器108内部可配置如图10的影响模型，其输入参数是口令识别处理结果和云端语音识别处理结果至少之一、本地图像识别结果和云端人脸识别结果至少之一、压力感知型情绪信号以及运动状态参数中的一个或多个，影响模型可以据此作出互动决策，命令执行机构112作出互动，来实现与外界的互动。影响模型可以是依据人工智能算法建立的训练模型。这一训练模型可以根据人工智能算法，将实际的输入参数和开发者期望的与该实际的输入参数对应的输出参数作为训练，从而获得该训练模型的算法参数。The controller 108 is the core of the intelligent robot 100. It is mainly responsible for collecting the signals and data of each sensing device and analyzing them to make interaction and motion decisions. The controller 108 can be internally configured with an influence model as shown in FIG. 10, whose input parameters are one or more of: at least one of the password recognition processing result and the cloud speech recognition processing result; at least one of the local image recognition result and the cloud face recognition result; the pressure-aware emotion signal; and the motion state parameters. Based on these, the influence model makes an interactive decision and commands the actuator 112 to interact with the outside world. The influence model may be a trained model built with an artificial intelligence algorithm: actual input parameters, together with the outputs the developer expects for those inputs, are used as training data to obtain the model's algorithm parameters.
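The influence model's interface can be sketched as a function from whichever recognition results are available to a single interactive decision. The rule-based body below is only a stand-in for the trained model the text describes, and every field name and decision label is an illustrative assumption.

```python
# Sketch of the influence model of FIG. 10: it takes the available
# recognition results as input parameters and outputs an interactive
# decision. The rules below stand in for the trained model described in
# the text; all argument values and decision names are illustrative.

def influence_model(password_result=None, cloud_speech_result=None,
                    local_image_result=None, cloud_face_result=None,
                    pressure_emotion=None, motion_state=None):
    # prefer the cloud result when present, fall back to the local one
    speech = cloud_speech_result or password_result
    face = cloud_face_result or local_image_result
    if pressure_emotion == "angry_shake":
        return "express_anger"
    if speech == "come_here":
        return "walk_toward_user"
    if face == "known_user_smiling":
        return "express_joy"
    return "idle"
```

The "prefer cloud, fall back to local" ordering mirrors the "at least one of" phrasing in the text; a real system would replace the if-chain with the trained model's inference.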
互动决策的一个部分是情绪表达。控制器108能够从口令识别处理结果和云端语音识别处理结果之一、本地图像识别结果和云端人脸识别结果之一、压力感知型情绪信号以及运动状态参数中获取用户的情绪信息，并根据用户的情绪信息确定所述智能机器人的情绪类型，然后根据影响模型中预先存储的映射列表确定与情绪类型对应的智能机器人的情绪表达部位以及情绪表达指令，最后控制情绪表达部位执行情绪表达指令。One part of the interactive decision is emotional expression. The controller 108 can obtain the user's emotion information from one of the password recognition processing result and the cloud speech recognition processing result, one of the local image recognition result and the cloud face recognition result, the pressure-aware emotion signal, and the motion state parameters; determine the emotion type of the intelligent robot according to the user's emotion information; then determine, according to a mapping list pre-stored in the influence model, the emotion expression part and emotion expression instruction of the intelligent robot corresponding to that emotion type; and finally control the emotion expression part to execute the emotion expression instruction.
作为本发明的一个示例，根据用户的面部图像确定用户的面部表情，根据用户的面部表情确定用户的情绪信息。例如，当用户的面部表情为微笑时，用户的情绪信息为开心，根据用户的情绪信息确定的智能机器人的情绪类型为喜。As an example of the present invention, the user's facial expression is determined from the user's facial image, and the user's emotion information is determined from the facial expression. For example, when the user's facial expression is a smile, the user's emotion information is happy, and the emotion type of the intelligent robot determined from it is joy.
作为本发明的另一个示例，通过语音采集单元101获取用户的音量和声音频率，根据用户的音量和声音频率确定用户的情绪信息。例如，当用户的音量小于第一预设值，且用户的声音频率小于第二预设值时，确定用户的情绪信息为伤心，根据用户的情绪信息确定的智能机器人的情绪类型为哀。As another example of the present invention, the user's volume and voice frequency are acquired by the voice collection unit 101, and the user's emotion information is determined from them. For example, when the user's volume is less than a first preset value and the user's voice frequency is less than a second preset value, it is determined that the user's emotion information is sad, and the emotion type of the intelligent robot determined from it is sorrow.
作为本发明的另一个示例，通过压力信号获取单元103和/或运动感测单元104获取用户的情绪信息，并根据用户的情绪信息确定智能机器人的情绪类型。例如，通过压力信号获取单元103检测到用户拥抱智能机器人时，确定智能机器人的情绪类型为喜；再例如，通过压力信号获取单元103和运动感测单元104检测到用户用力摇晃智能机器人100时，确定智能机器人的情绪类型为怒。As another example of the present invention, the user's emotion information is acquired by the pressure signal acquisition unit 103 and/or the motion sensing unit 104, and the emotion type of the intelligent robot is determined from it. For example, when the pressure signal acquisition unit 103 detects that the user is hugging the intelligent robot, the emotion type of the intelligent robot is determined to be joy; as another example, when the pressure signal acquisition unit 103 and the motion sensing unit 104 detect that the user is shaking the intelligent robot 100 forcefully, the emotion type of the intelligent robot is determined to be anger.
优选地，情绪类型包括喜、怒、哀和/或乐。优选地，一种情绪类型至少与一个情绪表达部位相对应。在本发明实施例中，情绪表达指令与情绪表达部位是相对应的；情绪表达指令为动作指令和/或面部表情指令。Preferably, the emotion types include joy, anger, sorrow, and/or happiness. Preferably, one emotion type corresponds to at least one emotion expression part. In the embodiment of the present invention, the emotion expression instruction corresponds to the emotion expression part; the emotion expression instruction is an action instruction and/or a facial expression instruction.
优选地，情绪表达部位包括前肢、后肢、躯干、头部和/或面部，后肢包括腿和脚。优选地，当情绪类型为喜时，情绪类型对应的所述情绪表达部位为前肢，所述情绪类型对应的所述情绪表达指令为前肢上下摇摆，还可以同时进行面部表情表达，如面部呈现喜悦的表情。优选地，当情绪类型为怒时，情绪类型对应的所述情绪表达部位为前肢、躯干、右腿和右脚，所述情绪类型对应的所述情绪表达指令为所述前肢展开不动，所述躯干稍向左倾，所述右腿前后摆动，以及所述右脚跺脚，还可以同时进行面部表情表达，如面部呈现发怒的表情。Preferably, the emotion expression parts include forelimbs, hind limbs, torso, head, and/or face, the hind limbs including legs and feet. Preferably, when the emotion type is joy, the corresponding emotion expression part is the forelimbs, and the corresponding emotion expression instruction is to swing the forelimbs up and down; a facial expression, such as a joyful face, may be shown at the same time. Preferably, when the emotion type is anger, the corresponding emotion expression parts are the forelimbs, torso, right leg, and right foot, and the corresponding emotion expression instructions are to spread the forelimbs and hold them still, tilt the torso slightly to the left, swing the right leg back and forth, and stomp the right foot; a facial expression, such as an angry face, may be shown at the same time.
优选地，当情绪类型为哀时，情绪类型对应的情绪表达部位为头部，所述情绪类型对应的情绪表达指令为头部转到肩部位置，以及低下头部，还可以同时进行面部表情表达，如面部呈现伤心的表情。Preferably, when the emotion type is sorrow, the corresponding emotion expression part is the head, and the corresponding emotion expression instructions are to turn the head to the shoulder and lower it; a facial expression, such as a sad face, may be shown at the same time.
优选地，当所述情绪类型为乐时，情绪类型对应的所述情绪表达部位为前肢和躯干，所述情绪类型对应的所述情绪表达指令为所述前肢上下摆动以及躯干左右摆动，还可以同时进行面部表情表达，如面部呈现欢乐的表情。Preferably, when the emotion type is happiness, the corresponding emotion expression parts are the forelimbs and torso, and the corresponding emotion expression instructions are to swing the forelimbs up and down and sway the torso left and right; a facial expression, such as a delighted face, may be shown at the same time.
优选地,动作指令包括与情绪表达部位对应的动作类型信息、动作幅度信息、动作频率信息和/或动作时长信息。Preferably, the action command includes action type information, action amplitude information, action frequency information, and/or action duration information corresponding to the emotion expression portion.
例如，当智能机器人的情绪类型为喜时对应的情绪表达部位为前肢，对应的动作类型信息为上下摆动；动作幅度信息指的是前肢上下摆动的幅度；动作频率信息指的是前肢上下摆动的频率，例如每秒一次；动作时长信息指的是控制前肢上下摆动的总时长。For example, when the emotion type of the intelligent robot is joy, the corresponding emotion expression part is the forelimbs and the corresponding action type information is swinging up and down; the action amplitude information refers to the amplitude of the forelimbs' up-and-down swing; the action frequency information refers to the frequency of that swing, for example once per second; and the action duration information refers to the total time for which the forelimbs are made to swing up and down.
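The pre-stored mapping list from emotion type to expression parts and action instructions can be sketched as a plain table. The parts and action types below follow the examples in the preceding paragraphs; the amplitude, frequency, and duration numbers are assumed values.

```python
# Sketch of the pre-stored mapping list from the robot's emotion type to
# its emotion expression parts and action instructions, following the
# examples in the text. Numeric values are assumed placeholders.

EMOTION_MAP = {
    "joy": [("forelimbs", {"type": "swing_up_down", "amplitude_deg": 30,
                           "frequency_hz": 1.0, "duration_s": 3.0})],
    "anger": [("forelimbs", {"type": "spread_still"}),
              ("torso", {"type": "lean_left"}),
              ("right_leg", {"type": "swing_back_forth"}),
              ("right_foot", {"type": "stomp"})],
    "sorrow": [("head", {"type": "turn_to_shoulder_and_lower"})],
    "happiness": [("forelimbs", {"type": "swing_up_down"}),
                  ("torso", {"type": "sway_left_right"})],
}

def express(emotion_type):
    """Return the (part, instruction) pairs to execute for an emotion type."""
    return EMOTION_MAP.get(emotion_type, [])
```

Each instruction dictionary carries the action type plus the optional amplitude, frequency, and duration fields described above, so one emotion type can fan out to several body parts at once.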
另外，情绪表达指令可为声音。例如情绪类型为喜对应的音频信息为欢乐的叫声；情绪类型为怒对应的音频信息为发怒的叫声；情绪类型为哀对应的音频信息为悲哀的叫声；情绪类型为乐对应的音频信息为欢乐的叫声。In addition, the emotion expression instruction may be a sound. For example, the audio information corresponding to joy is a happy call; the audio information corresponding to anger is an angry call; the audio information corresponding to sorrow is a sad call; and the audio information corresponding to happiness is a joyful call.
本发明实施例通过智能机器人主动获取用户的情绪信息，根据用户的情绪信息确定智能机器人的情绪类型，根据预先存储的映射列表确定与智能机器人的情绪类型对应的智能机器人的情绪表达部位以及情绪表达指令，再控制情绪表达部位执行情绪表达指令，由此主动感受外部用户的情绪变化并通过用户的情绪信息来确定智能机器人的情绪类型，并通过智能机器人的肢体动作来表达智能机器人的情绪，从而提高了智能机器人与用户之间的互动度，提高了智能机器人的情绪表达效果，增强了趣味性，并提高了用户体验。In the embodiment of the present invention, the intelligent robot actively acquires the user's emotion information, determines the robot's emotion type from it, determines the emotion expression part and emotion expression instruction corresponding to that emotion type from a pre-stored mapping list, and then controls the emotion expression part to execute the emotion expression instruction. The robot thereby actively senses the emotional changes of the user, determines its own emotion type from the user's emotion information, and expresses that emotion through its body movements, improving the degree of interaction between the intelligent robot and the user, improving the robot's emotional expression, enhancing its appeal, and improving the user experience.
互动决策的另一个部分是根据口令执行动作。作为举例而非限制,当用户叫唤智能机器人的名字时,智能机器人会朝向用户的方向行走。或者用户指示智能机器人坐下、摇头等动作时,智能机器人作出响应。Another part of the interactive decision is to perform actions based on the password. By way of example and not limitation, when the user calls the name of the intelligent robot, the intelligent robot will walk in the direction of the user. Or when the user instructs the intelligent robot to sit down, shake his head, etc., the intelligent robot responds.
图3示出本发明一实施例的感知互动方法流程图。该方法可以在图1、图2所示的系统中执行,也可以在其它系统中执行。参考图3所示,本实施例的一种智能机器人的感知互动方法,包括以下步骤:FIG. 3 is a flow chart showing a method for perceptual interaction according to an embodiment of the present invention. The method can be performed in the system shown in Figures 1 and 2, or in other systems. Referring to FIG. 3, a method for sensing interaction of an intelligent robot according to this embodiment includes the following steps:
在步骤301,进行语音识别。At step 301, speech recognition is performed.
在此步骤中，对外部输入的语音信号进行本地口令识别并生成口令识别处理结果，或者将语音信号发送至云端服务器并由所述云端服务器执行云端语音识别和云端语义理解至少之一，接收云端服务器发来的云端语音识别处理结果。In this step, local password recognition is performed on the externally input voice signal and a password recognition processing result is generated; or the voice signal is sent to the cloud server, at least one of cloud speech recognition and cloud semantic understanding is performed by the cloud server, and the cloud speech recognition processing result sent by the cloud server is received.
在步骤302,进行人脸识别。At step 302, face recognition is performed.
在此步骤中,对外部输入的场景图像进行处理以生成本地图像识别结果,或者将从外部输入的场景图像传输至云端服务器进行人脸识别并接收云端服务器回传的云端人脸识别结果。In this step, the externally input scene image is processed to generate a local image recognition result, or the externally input scene image is transmitted to the cloud server for face recognition and receives the cloud face recognition result returned by the cloud server.
在步骤303,对外部压力信号进行识别处理并生成压力感知型情绪信号。At step 303, an external pressure signal is identified and a pressure-aware emotional signal is generated.
在此步骤中，确定外部压力信号的类型和压力产生位置，计算外部压力信号持续的时间值和压力变化率，根据压力变化率和预设的变化阈值比对确定外部压力信号的类型，并且根据外部压力信号确定压力产生位置；根据压力产生位置及外部压力信号的类型与预设的映射列表进行比对，生成与压力产生位置及外部压力信号的类型相对应的压力感知型情绪信号。In this step, the type of the external pressure signal and the pressure generating position are determined: the duration and the pressure change rate of the external pressure signal are calculated, the type of the external pressure signal is determined by comparing the pressure change rate with preset change thresholds, and the pressure generating position is determined from the external pressure signal; the pressure generating position and the type of the external pressure signal are then compared against a preset mapping list, and a pressure-aware emotion signal corresponding to that position and type is generated.
在步骤304,根据口令识别处理结果和云端语音识别处理结果至少之一、本地图像识别结果和云端人脸识别结果至少之一和压力感知型情绪信号作出智能机器人的互动决策,从而触发互动决策。In step 304, an interactive decision of the intelligent robot is made according to at least one of the password recognition processing result and the cloud speech recognition processing result, at least one of the local image recognition result and the cloud face recognition result, and the pressure-aware emotion signal, thereby triggering the interactive decision.
本实施例的其它细节可参考前文描述的内容,在此不再展开描述。For further details of this embodiment, reference may be made to the foregoing description, and the description will not be repeated here.
本发明上述实施例的智能机器人及其感知互动方法,通过配置多种感知设备,综合获取环境信号并进行互动决策,提升了机器人的互动能力。同时通过云端识别单元与外部处理资源进行通信,提升了机器人的处理能力,使得更为复杂的互动决策成为可能。The intelligent robot and the sensing interaction method thereof according to the above embodiments of the present invention improve the interaction ability of the robot by configuring a plurality of sensing devices, comprehensively acquiring environmental signals and performing interactive decision making. At the same time, the cloud recognition unit communicates with external processing resources, which improves the processing power of the robot and makes more complex interactive decisions possible.
虽然本发明已参照当前的具体实施例来描述，但是本技术领域中的普通技术人员应当认识到，以上的实施例仅是用来说明本发明，在没有脱离本发明精神的情况下还可作出各种等效的变化或替换，因此，只要在本发明的实质精神范围内对上述实施例的变化、变型都将落在本申请的权利要求书的范围内。While the invention has been described with reference to the present specific embodiments, those of ordinary skill in the art will recognize that the above embodiments merely illustrate the invention, and that various equivalent changes or substitutions may be made without departing from its spirit; therefore, changes and variations of the above embodiments within the essential spirit of the invention shall fall within the scope of the claims of this application.

Claims (46)

  1. 一种具有云端互动功能的多感知型智能机器人,其与外部的云端服务器配合,其特征在于,所述智能机器人包括有:A multi-sense intelligent robot with cloud interaction function, which cooperates with an external cloud server, wherein the intelligent robot includes:
    口令识别处理单元,用于对外部输入的语音信号进行本地口令识别并生成口令识别处理结果;a password recognition processing unit, configured to perform local password recognition on the externally input voice signal and generate a password recognition processing result;
    本地图像识别处理单元,用于对外部输入的场景图像进行本地图像识别并生成本地图像识别结果;a local image recognition processing unit, configured to perform local image recognition on the externally input scene image and generate a local image recognition result;
    压力信号识别处理单元,用于对外部压力信号进行识别处理并生成压力感知型情绪信号;a pressure signal recognition processing unit configured to perform an identification process on the external pressure signal and generate a pressure-sensing emotional signal;
    云端识别单元,用于将所述语音信号发送至所述云端服务器并由所述云端服务器执行云端语音识别和云端语义理解至少之一,并接收所述云端服务器发来的云端语音识别处理结果;及用于将所述场景图像发送至所述云端服务器并由所述云端服务器进行人脸识别,并接收所述云端服务器发来的云端人脸识别结果;a cloud identification unit, configured to send the voice signal to the cloud server, and perform at least one of cloud voice recognition and cloud semantic understanding by the cloud server, and receive a cloud voice recognition processing result sent by the cloud server; And sending the scene image to the cloud server and performing face recognition by the cloud server, and receiving a cloud face recognition result sent by the cloud server;
    控制器，用于根据所述口令识别处理结果和所述云端语音识别处理结果至少之一、所述本地图像识别结果和所述云端人脸识别结果至少之一和/或压力感知型情绪信号作出所述智能机器人的互动决策，从而触发所述互动决策的执行。a controller, configured to make an interactive decision of the intelligent robot according to at least one of the password recognition processing result and the cloud speech recognition processing result, at least one of the local image recognition result and the cloud face recognition result, and/or the pressure-aware emotion signal, thereby triggering execution of the interactive decision.
  2. 如权利要求1所述的多感知型智能机器人，其特征在于，还包括识别选择单元，用于对外部输入的语音信号进行判断，从而选择将外部输入的语音信号是传输给所述口令识别处理单元还是传输给所述云端识别单元，以及/或者对外部输入的场景图像进行判断，从而选择将外部输入的场景图像是传输给本地图像识别处理单元还是传输给所述云端识别单元。The multi-sense intelligent robot according to claim 1, further comprising a recognition selection unit configured to judge the externally input voice signal so as to select whether the externally input voice signal is transmitted to the password recognition processing unit or to the cloud recognition unit, and/or to judge the externally input scene image so as to select whether the externally input scene image is transmitted to the local image recognition processing unit or to the cloud recognition unit.
  3. 如权利要求1所述的多感知型智能机器人,其特征在于,还包括语音采集单元,用于获得外部输入的语音信号。A multi-sense intelligent robot according to claim 1, further comprising a voice collecting unit for obtaining an externally input voice signal.
  4. 如权利要求3所述的多感知型智能机器人,其特征在于,所述语音采集单元为麦克风,所述麦克风的数量为两个,分别安装在所述智能机器人的左右耳处。The multi-sense intelligent robot according to claim 3, wherein the voice collecting unit is a microphone, and the number of the microphones is two, which are respectively installed at the left and right ears of the smart robot.
  5. 如权利要求1所述的多感知型智能机器人，其特征在于，还包括预设口令存储单元，所述预设口令存储单元用于存储预设的口令资料；所述口令识别处理单元用于根据预设的口令资料对所述语音信号进行本地口令识别并生成口令识别处理结果。The multi-sense intelligent robot according to claim 1, further comprising a preset password storage unit configured to store preset password data; the password recognition processing unit is configured to perform local password recognition on the voice signal according to the preset password data and generate a password recognition processing result.
  6. 如权利要求1所述的多感知型智能机器人,其特征在于,还包括声纹识别单元,用于在对所述语音信号进行识别处理之前,根据预存储的声纹资料进行身份验证。The multi-sense intelligent robot according to claim 1, further comprising a voiceprint recognition unit configured to perform identity verification based on the pre-stored voiceprint data before the recognition process of the voice signal.
  7. 如权利要求1所述的多感知型智能机器人,其特征在于,还包括图像采集单元,用于捕捉外部输入的一个以上的场景图像。The multi-sense intelligent robot of claim 1 further comprising an image acquisition unit for capturing more than one scene image of the external input.
  8. 如权利要求1所述的多感知型智能机器人，其特征在于，还包括人脸图像获取单元，用于从外部输入的场景图像中获取具备识别特征点的人脸图像；所述本地图像识别处理单元用于对所述具备识别特征点的人脸图像进行本地图像识别并生成本地图像识别结果；所述云端识别单元用于将具备识别特征点的人脸图像发送至所述云端服务器进行云端人脸识别。The multi-sense intelligent robot according to claim 1, further comprising a face image acquisition unit configured to acquire, from the externally input scene image, a face image having identifiable feature points; the local image recognition processing unit is configured to perform local image recognition on the face image having identifiable feature points and generate a local image recognition result; the cloud recognition unit is configured to send the face image having identifiable feature points to the cloud server for cloud face recognition.
  9. 如权利要求8所述的多感知型智能机器人，其特征在于，所述人脸图像获取单元还用于从外部输入的场景图像中获取具备识别特征点的人脸图像之后排除不具备识别特征点的人脸图像。The multi-sense intelligent robot according to claim 8, wherein the face image acquisition unit is further configured to, after acquiring face images having identifiable feature points from the externally input scene image, exclude face images that do not have identifiable feature points.
  10. 如权利要求1所述的多感知型智能机器人，其特征在于，还包括预设图像存储单元，用于存储预设的图像资料；所述本地图像识别处理单元用于根据预设的图像资料对所述外部输入的场景图像进行本地图像识别并生成本地图像识别结果。The multi-sense intelligent robot according to claim 1, further comprising a preset image storage unit configured to store preset image data; the local image recognition processing unit is configured to perform local image recognition on the externally input scene image according to the preset image data and generate a local image recognition result.
  11. 如权利要求1所述的多感知型智能机器人,其特征在于,还包括压力信号获取单元,用于获取外部压力信号。The multi-perceptive intelligent robot according to claim 1, further comprising a pressure signal acquisition unit for acquiring an external pressure signal.
  12. 如权利要求11所述的多感知型智能机器人,其特征在于,所述压力信号获取单元为电阻式压力传感器。The multi-sensor type intelligent robot according to claim 11, wherein the pressure signal acquisition unit is a resistive pressure sensor.
  13. 如权利要求11所述的多感知型智能机器人，其特征在于，所述压力信号获取单元包括分布于所述智能机器人表面的压力传感芯片阵列和与所述压力传感芯片阵列连接的模数转换电路，所述压力传感芯片阵列感知所述智能机器人表面的压力变化并将其转换为压力模拟信号，所述模数转换电路将所述压力模拟信号转换为压力数字信号。The multi-sense intelligent robot according to claim 11, wherein the pressure signal acquisition unit comprises a pressure sensing chip array distributed over the surface of the intelligent robot and an analog-to-digital conversion circuit connected to the pressure sensing chip array; the pressure sensing chip array senses pressure changes on the surface of the intelligent robot and converts them into a pressure analog signal, and the analog-to-digital conversion circuit converts the pressure analog signal into a pressure digital signal.
  14. 如权利要求1所述的多感知型智能机器人,其特征在于,所述压力信号识别处理单元包括有:The multi-sensor type intelligent robot according to claim 1, wherein the pressure signal recognition processing unit comprises:
    压力类型判断单元,用于计算所述外部压力信号的压力变化率,根据所述压力变化率和预设的变化阈值比对确定所述外部压力信号的类型; a pressure type determining unit, configured to calculate a pressure change rate of the external pressure signal, and determine a type of the external pressure signal according to the pressure change rate and a preset change threshold value comparison;
    压力位置判断单元,用于根据所述外部压力信号确定压力产生位置;以及a pressure position determining unit configured to determine a pressure generating position based on the external pressure signal;
    压力感知型情绪信号生成单元,用于根据所述压力产生位置及外部压力信号的类型与预设的映射列表进行比对,生成与所述压力产生位置及外部压力信号的类型相对应的压力感知型情绪信号。a pressure-sensing emotion signal generating unit, configured to compare the pressure generating position and the type of the external pressure signal with a preset mapping list, and generate a pressure sensing corresponding to the pressure generating position and the type of the external pressure signal Emotional signal.
  15. 如权利要求14所述的多感知型智能机器人，其特征在于，所述压力信号识别处理单元还包括分别与所述压力类型判断单元和压力感知型情绪信号生成单元连接，用于存储预设的变化阈值和预设的映射列表的数据存储单元。The multi-sense intelligent robot according to claim 14, wherein the pressure signal recognition processing unit further comprises a data storage unit connected respectively to the pressure type determining unit and the pressure-aware emotion signal generating unit, for storing the preset change thresholds and the preset mapping list.
  16. 如权利要求1所述的多感知型智能机器人,其特征在于,还包括与所述控制器连接的运动感测单元,用于感测所述智能机器人的运动状态以生成运动状态参数。The multi-sense intelligent robot according to claim 1, further comprising a motion sensing unit coupled to the controller for sensing a motion state of the intelligent robot to generate a motion state parameter.
  17. 如权利要求16所述的多感知型智能机器人,其特征在于,所述运动感测单元为重力加速度传感器、陀螺仪或安装在所述智能机器人躯干上的倾角传感器。The multi-sense intelligent robot according to claim 16, wherein the motion sensing unit is a gravity acceleration sensor, a gyroscope, or a tilt sensor mounted on the torso of the smart robot.
  18. 如权利要求1所述的多感知型智能机器人,其特征在于,还包括网络判断单元,用以判断所述智能机器人与所述云端服务器的连接状态并根据所述连接状态生成网络判断结果。The multi-sense intelligent robot according to claim 1, further comprising a network determining unit, configured to determine a connection state of the smart robot and the cloud server, and generate a network determination result according to the connection state.
  19. 如权利要求1所述的多感知型智能机器人,其特征在于,所述智能机器人和所述云端服务器通过无线网络接口连接。The multi-sense intelligent robot according to claim 1, wherein the intelligent robot and the cloud server are connected through a wireless network interface.
  20. 如权利要求16所述的多感知型智能机器人，其特征在于，所述控制器配置有影响模型，所述口令识别处理结果、所述云端语音识别处理结果、所述本地图像识别结果、所述云端人脸识别结果、所述压力感知型情绪信号、所述运动状态参数为所述影响模型的输入参数，所述影响模型根据所述输入参数输出所述互动决策。The multi-sense intelligent robot according to claim 16, wherein the controller is configured with an influence model; the password recognition processing result, the cloud speech recognition processing result, the local image recognition result, the cloud face recognition result, the pressure-aware emotion signal, and the motion state parameters are input parameters of the influence model, and the influence model outputs the interactive decision according to the input parameters.
  21. 如权利要求1所述的多感知型智能机器人,其特征在于,所述控制器响应于启动指令而启动所述智能机器人。A multi-perceptive intelligent robot according to claim 1, wherein said controller activates said intelligent robot in response to a start command.
  22. 如权利要求21所述的多感知型智能机器人，其特征在于，所述启动指令包含在语音信号中，所述口令识别处理单元或所述云端识别单元还用于识别所述语音信号中的启动指令；或所述启动指令包含在外部压力信号中，所述压力信号识别处理单元还用于识别所述外部压力信号中的启动指令；或所述启动指令包含在无线信号中，所述智能机器人还包括无线通信单元和无线信号识别单元，所述无线通信单元用于接收外部传输的无线信号，所述无线信号识别单元用于识别所述无线信号中的启动指令。The multi-sense intelligent robot according to claim 21, wherein the start command is included in a voice signal and the password recognition processing unit or the cloud recognition unit is further configured to identify the start command in the voice signal; or the start command is included in an external pressure signal and the pressure signal recognition processing unit is further configured to identify the start command in the external pressure signal; or the start command is included in a wireless signal, the intelligent robot further comprising a wireless communication unit configured to receive an externally transmitted wireless signal and a wireless signal recognition unit configured to identify the start command in the wireless signal.
  23. A perception interaction method for a multi-sense intelligent robot with a cloud interaction function according to any one of claims 1-22, comprising:
    performing local password recognition on an externally input voice signal and generating a password recognition processing result, or sending the voice signal to the cloud server, where the cloud server performs at least one of cloud speech recognition and cloud semantic understanding, and receiving the cloud speech recognition processing result returned by the cloud server;
    performing local image recognition on an externally input scene image and generating a local image recognition result, or transmitting the scene image to the cloud server for face recognition and receiving the cloud face recognition result returned by the cloud server;
    performing recognition processing on an external pressure signal and generating a pressure-aware emotion signal; and
    making an interaction decision for the intelligent robot according to at least one of the password recognition processing result and the cloud speech recognition processing result, at least one of the local image recognition result and the cloud face recognition result, and/or the pressure-aware emotion signal, thereby triggering execution of the interaction decision.
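One pass of the method of claim 23 can be sketched as a small pipeline: recognize each modality (locally or in the cloud), then fuse the results into a decision. All function names and the trivial fusion rule below are stand-ins for the example, not the patent's implementation:

```python
# Illustrative stand-ins for the recognition units.
def local_password_recognition(voice):
    return "password:" + voice

def cloud_recognize_speech(voice):
    return "cloud_speech:" + voice

def local_image_recognition(img):
    return "local_image:" + img

def cloud_face_recognition(img):
    return "cloud_face:" + img

def recognize_pressure(p):
    return "happy" if p == "stroke" else "startled"

def make_interaction_decision(speech, image, emotion):
    # Trivial fusion: prefer a speech command, then emotion, then image.
    if speech:
        return ("obey", speech)
    if emotion:
        return ("express", emotion)
    if image:
        return ("react", image)
    return ("idle", None)

def perception_interaction_step(voice=None, scene_image=None, pressure=None,
                                use_cloud=False):
    """One pass of claim 23: recognize each modality, then decide."""
    speech = None
    if voice is not None:
        speech = (cloud_recognize_speech(voice) if use_cloud
                  else local_password_recognition(voice))
    image = None
    if scene_image is not None:
        image = (cloud_face_recognition(scene_image) if use_cloud
                 else local_image_recognition(scene_image))
    emotion = recognize_pressure(pressure) if pressure is not None else None
    return make_interaction_decision(speech, image, emotion)
```

The `use_cloud` flag models the either/or structure of the claim: each modality is handled either locally or by the cloud server, never left unhandled when input is present.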
  24. The perception interaction method according to claim 23, further comprising judging the externally input voice signal so as to choose whether to perform local password recognition on it or to transmit it to the cloud server, and/or judging the externally input scene image so as to choose whether to perform local image recognition on it or to transmit it to the cloud server.
  25. The perception interaction method according to claim 23, further comprising obtaining the externally input voice signal.
  26. The perception interaction method according to claim 23, further comprising storing preset password data, wherein the step of performing local password recognition on the externally input voice signal and generating a password recognition processing result performs the local password recognition on the voice signal according to the preset password data.
  27. The perception interaction method according to claim 23, wherein before performing local password recognition on the externally input voice signal and generating a password recognition processing result, or sending the voice signal to the cloud server, the method further comprises authenticating the identity carried by the voice signal according to pre-stored voiceprint data.
  28. The perception interaction method according to claim 23, further comprising capturing one or more externally input scene images.
  29. The perception interaction method according to claim 28, further comprising acquiring, from the externally input scene image, a face image having recognizable feature points; wherein the step of performing local image recognition on the externally input scene image and generating a local image recognition result performs local image recognition on the face image having recognizable feature points and generates the local image recognition result; and the step of transmitting the scene image to the cloud server for face recognition sends the face image having recognizable feature points to the cloud server for cloud face recognition.
  30. The perception interaction method according to claim 29, wherein after the step of acquiring the face image having recognizable feature points from the externally input scene image, the method further comprises excluding face images that lack recognizable feature points.
  31. The perception interaction method according to claim 29, further comprising storing preset image data, wherein the step of performing local image recognition on the externally input scene image and generating a local image recognition result performs local image recognition on the face image having recognizable feature points according to the preset image data.
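The filtering and matching described in claims 29-31 can be sketched as two steps: discard face candidates without enough recognizable feature points, then compare the survivors against the stored preset image data. The feature-point sets and the toy gallery below are stand-ins for real landmark detection, chosen only to make the example runnable:

```python
# Hypothetical preset image data (claim 31): name -> expected feature points.
PRESET_GALLERY = {"owner": {"eyes", "nose", "mouth"}}

def filter_faces(candidates, min_points=3):
    """Keep face candidates with enough recognizable feature points;
    claim 30 excludes the rest."""
    return [c for c in candidates if len(c["points"]) >= min_points]

def local_face_match(face):
    """Compare a surviving face against the stored preset image data."""
    for name, expected in PRESET_GALLERY.items():
        if expected <= face["points"]:  # all expected landmarks found
            return name
    return "unknown"
```

In a real system the candidates would come from a face detector and the match from a trained recognizer; the point of the sketch is only the claimed ordering (filter first, recognize second).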
  32. The perception interaction method according to claim 23, wherein before sending the voice signal or the scene image to the cloud server, the method further comprises judging whether the network state is normal, and sending the voice signal or the scene image to the cloud server when the network is normal.
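The network check of claim 32 (see also the network determination unit of claim 18) amounts to gating the cloud path on connectivity and falling back to local recognition otherwise. A minimal sketch, where the host name is a placeholder and the check is injectable so the routing can be exercised without a network:

```python
import socket

def network_ok(host="cloud.example.com", port=443, timeout=2.0):
    """Judge the connection state to the cloud server (claim 18/32).
    The host name is a placeholder, not an address from the patent."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def recognize(signal, local_fn, cloud_fn, check=network_ok):
    """Send to the cloud only when the network is judged normal;
    otherwise fall back to local recognition."""
    return cloud_fn(signal) if check() else local_fn(signal)
```

Passing a stub `check` makes the fallback deterministic, e.g. `recognize(sig, local, cloud, check=lambda: False)` always takes the local path.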
  33. The perception interaction method according to claim 23, further comprising acquiring the external pressure signal.
  34. The perception interaction method according to claim 23, wherein the step of performing recognition processing on the external pressure signal and generating the pressure-aware emotion signal comprises:
    calculating a pressure change rate of the external pressure signal, and determining the type of the external pressure signal by comparing the pressure change rate with a preset change threshold;
    determining a pressure generation position according to the external pressure signal; and
    comparing the pressure generation position and the type of the external pressure signal with a preset mapping list, and generating the pressure-aware emotion signal corresponding to the pressure generation position and the type of the external pressure signal.
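The final step of claim 34 is a lookup: the (position, signal type) pair is matched against a preset mapping list to yield the emotion signal. The entries below are invented for illustration; the patent does not specify the list's contents:

```python
# Hypothetical preset mapping list: (position, signal type) -> emotion.
EMOTION_MAP = {
    ("head", "stroke"):    "content",
    ("head", "light_tap"): "curious",
    ("head", "hard_tap"):  "hurt",
    ("back", "stroke"):    "relaxed",
    ("back", "hard_tap"):  "startled",
}

def pressure_emotion(position, signal_type):
    """Generate the pressure-aware emotion signal for a recognized
    touch; fall back to a neutral signal for unmapped pairs."""
    return EMOTION_MAP.get((position, signal_type), "neutral")
```

Keeping the mapping as data rather than code matches the claim's "preset mapping list" phrasing: new touch reactions can be added without changing the recognition logic.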
  35. The perception interaction method according to claim 34, further comprising storing the preset change threshold and the preset mapping list.
  36. The perception interaction method according to claim 34, wherein if the pressure change rate is greater than a preset first change threshold, the type of the external pressure signal is determined to be a tap; otherwise, the type of the external pressure signal is determined to be a stroke.
  37. The perception interaction method according to claim 36, wherein determining the type of the external pressure signal to be a tap when the pressure change rate is greater than the preset first change threshold comprises:
    if the pressure change rate is greater than the first change threshold and less than or equal to a second change threshold, determining the type of the pressure signal to be a light tap; and
    if the pressure change rate is greater than the second change threshold, determining the type of the pressure signal to be a hard tap.
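Claims 36-37 together describe a two-threshold classifier over the pressure change rate: at or below the first threshold is a stroke, between the thresholds a light tap, above the second a hard tap. A direct sketch, with purely illustrative threshold values (the patent does not give numbers or units):

```python
def classify_pressure(rate, first_threshold=5.0, second_threshold=20.0):
    """Classify a touch by its pressure change rate (claims 36-37).
    Threshold values are assumptions; units depend on the sensor."""
    if rate <= first_threshold:
        return "stroke"        # slow change: a gentle stroke
    if rate <= second_threshold:
        return "light_tap"     # above first, at or below second
    return "hard_tap"          # above second: a forceful tap
```

Note the boundary handling mirrors the claim wording: a rate exactly equal to the first threshold is not "greater than" it, so it is classified as a stroke.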
  38. The perception interaction method according to claim 34, wherein calculating the pressure change rate of the external pressure signal comprises: calculating the duration of the external pressure signal, selecting, within that duration, the digital signal corresponding to a preset time period, and calculating the pressure change rate from the preset time period and the digital signal corresponding to the preset time period.
  39. The perception interaction method according to claim 38, wherein the preset time period is 0.5-1.5 seconds.
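Claims 38-39 only say that the rate is computed from the digitized samples falling in a preset window of 0.5-1.5 s; they do not fix the formula. One plausible reading, sketched under that assumption, is the signal's swing over the window divided by the window length:

```python
def pressure_change_rate(samples, sample_dt=0.01, window=1.0):
    """Estimate the pressure change rate from the samples inside a
    preset window (claim 39: 0.5-1.5 s; 1.0 s here).
    (max - min) / window is an assumed formula, not the patent's."""
    n = max(1, int(window / sample_dt))   # number of samples in the window
    window_samples = samples[:n]
    return (max(window_samples) - min(window_samples)) / window
```

With a 1 s window a sharp tap (large swing in few samples) yields a high rate, a slow stroke a low one, which is what the thresholds of claims 36-37 then discriminate.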
  40. The perception interaction method according to claim 23, further comprising sensing the motion state of the intelligent robot to generate a motion state parameter.
  41. The perception interaction method according to claim 23, wherein the interaction decision comprises an emotion expression part and an emotion expression instruction.
  42. The perception interaction method according to claim 41, wherein the emotion expression part comprises the upper limbs, lower limbs, trunk, head, face and/or mouth of the intelligent robot; and the emotion expression instruction comprises executing a corresponding action instruction, playing a corresponding prompt voice and/or displaying corresponding prompt information.
  43. The perception interaction method according to claim 42, wherein the action instruction comprises a mechanical action instruction and/or a facial expression instruction.
  44. The perception interaction method according to claim 43, wherein the mechanical action instruction comprises action type information, action amplitude information, action frequency information and/or action duration information corresponding to the emotion expression part.
  45. The perception interaction method according to claim 23, further comprising starting the intelligent robot in response to a start command, wherein the start command is contained in a voice signal, an external pressure signal, or a wireless signal.
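Claim 45 (and claim 22) allows the start command to arrive over any of three channels. A thin dispatcher sketch; the wake phrase, pressure level, and wireless payload below are all invented placeholders:

```python
def detect_start_command(voice=None, pressure=None, wireless=None):
    """Return "start" if any input channel carries a start command
    (claim 45). Matching rules are illustrative assumptions."""
    if voice is not None and "wake up" in voice.lower():
        return "start"                       # spoken wake phrase
    if pressure is not None and pressure > 30.0:
        return "start"                       # a firm, deliberate press
    if wireless is not None and wireless == b"\x01START":
        return "start"                       # app-sent wireless frame
    return None
```

The controller of claim 21 would poll or be notified with these inputs and power up the robot on the first non-`None` result.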
  46. A cloud interaction system, comprising the multi-sense intelligent robot with a cloud interaction function according to any one of claims 1-22 and a cloud server, wherein the intelligent robot communicates wirelessly with the cloud server.
PCT/CN2017/076274 2016-06-15 2017-03-10 Cloud interactive system, multicognitive intelligent robot of same, and cognitive interaction method therefor WO2017215297A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610422085.1A CN107511832A (en) 2016-06-15 2016-06-15 Cloud interactive system, multi-sense intelligent robot thereof, and perception interaction method therefor
CN201610422085.1 2016-06-15

Publications (1)

Publication Number Publication Date
WO2017215297A1 true WO2017215297A1 (en) 2017-12-21

Family

ID=60663695

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/076274 WO2017215297A1 (en) 2016-06-15 2017-03-10 Cloud interactive system, multicognitive intelligent robot of same, and cognitive interaction method therefor

Country Status (2)

Country Link
CN (1) CN107511832A (en)
WO (1) WO2017215297A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108687768B (en) * 2018-04-02 2022-08-05 深圳臻迪信息技术有限公司 Wading robot and wading robot information input method
CN110405777B (en) * 2018-04-28 2023-03-31 深圳果力智能科技有限公司 Interactive control method of robot
CN109036392A (en) * 2018-05-31 2018-12-18 芜湖星途机器人科技有限公司 Robot interactive system
CN108839036A (en) * 2018-07-05 2018-11-20 四川长虹电器股份有限公司 Home intelligent health supervision robot
CN110852133A (en) * 2018-07-27 2020-02-28 宝时得科技(中国)有限公司 Automatic walking equipment, control method and control device thereof, and computer equipment
CN110969051A (en) * 2018-09-29 2020-04-07 上海小蚁科技有限公司 Face recognition method based on image sensor system and image sensor system
CN109108984B (en) * 2018-10-22 2021-03-26 杭州任你说智能科技有限公司 Method for accessing physical robot to cloud voice platform and physical robot
CN109571494A (en) * 2018-11-23 2019-04-05 北京工业大学 Emotion identification method, apparatus and pet robot
CN109623837A (en) * 2018-12-20 2019-04-16 北京子歌人工智能科技有限公司 A kind of partner robot based on artificial intelligence
CN109605373A (en) * 2018-12-21 2019-04-12 重庆大学 Voice interactive method based on robot
CN111604920B (en) * 2020-06-02 2022-06-07 南京励智心理大数据产业研究院有限公司 Accompanying growth robot based on diathesis education
CN112829754B (en) * 2021-01-21 2023-07-25 合众新能源汽车股份有限公司 Vehicle-mounted intelligent robot and operation method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102778879A (en) * 2012-07-30 2012-11-14 西安润基投资控股有限公司 Home furnishing ternary intelligent system and method for achieving home furnishing intelligent control
CN104268684A (en) * 2014-09-23 2015-01-07 桂林驰讯科技有限公司 Intelligent inspection system based on intelligent terminal
CN105184058A (en) * 2015-08-17 2015-12-23 李泉生 Private conversation robot
CN105652875A (en) * 2016-03-29 2016-06-08 苏州倍特罗智能科技有限公司 Robot system with recognition modules for ward
CN205989331U (en) * 2016-06-15 2017-03-01 深圳光启合众科技有限公司 Cloud interactive system and multi-sense intelligent robot thereof

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020133405A1 (en) * 2018-12-29 2020-07-02 深圳市大疆创新科技有限公司 Method and device for controlling ground remote control robot
CN110096996A (en) * 2019-04-28 2019-08-06 深圳前海达闼云端智能科技有限公司 Biological information identification method, device, terminal, system and storage medium
CN110096996B (en) * 2019-04-28 2021-10-22 达闼机器人有限公司 Biological information identification method, device, terminal, system and storage medium
CN110363278A (en) * 2019-07-23 2019-10-22 广东小天才科技有限公司 A kind of parent-child interaction method, robot, server and parent-child interaction system
CN111931484A (en) * 2020-07-31 2020-11-13 于梦丽 Data transmission method based on big data
CN113119138A (en) * 2021-04-16 2021-07-16 中国科学技术大学 Blind-aiding robot system and method based on Internet of things
CN113524212A (en) * 2021-06-29 2021-10-22 智动时代(北京)科技有限公司 Three-body robot composition method
CN114260919A (en) * 2022-01-18 2022-04-01 华中科技大学同济医学院附属协和医院 Intelligent robot
CN114260919B (en) * 2022-01-18 2023-08-29 华中科技大学同济医学院附属协和医院 Intelligent robot
CN115795278A (en) * 2022-12-02 2023-03-14 广东元一科技实业有限公司 Intelligent cloth paving machine control method and device and electronic equipment
CN115795278B (en) * 2022-12-02 2023-08-04 广东元一科技实业有限公司 Intelligent cloth paving machine control method and device and electronic equipment

Also Published As

Publication number Publication date
CN107511832A (en) 2017-12-26

Similar Documents

Publication Publication Date Title
WO2017215297A1 (en) Cloud interactive system, multicognitive intelligent robot of same, and cognitive interaction method therefor
JP6816925B2 (en) Data processing method and equipment for childcare robots
CN205989331U (en) Cloud interactive system and multi-sense intelligent robot thereof
EP3624442A1 (en) Robot and method for operating the same
US11948241B2 (en) Robot and method for operating same
EP3623118A1 (en) Emotion recognizer, robot including the same, and server including the same
CN111163906B (en) Mobile electronic device and method of operating the same
US20180257236A1 (en) Apparatus, robot, method and recording medium having program recorded thereon
JP2018072876A (en) Emotion estimation system and emotion estimation model generation system
WO2002099545A1 (en) Man-machine interface unit control method, robot apparatus, and its action control method
CN111002303B (en) Recognition device, robot, recognition method, and storage medium
US20180376069A1 (en) Erroneous operation-preventable robot, robot control method, and recording medium
CN110737335B (en) Interaction method and device of robot, electronic equipment and storage medium
JP6891601B2 (en) Robot control programs, robot devices, and robot control methods
KR20220130000A (en) Ai avatar-based interaction service method and apparatus
WO2016206644A1 (en) Robot control engine and system
CN111506183A (en) Intelligent terminal and user interaction method
US11938625B2 (en) Information processing apparatus, information processing method, and program
JP2018075657A (en) Generating program, generation device, control program, control method, robot device and telephone call system
KR102519599B1 (en) Multimodal based interaction robot, and control method for the same
KR20200101221A (en) Method for processing user input and electronic device supporting the same
JP7414735B2 (en) Method for controlling multiple robot effectors
JP7024754B2 (en) Controls, robots, control methods and programs
US20210201139A1 (en) Device and method for measuring a characteristic of an interaction between a user and an interaction device
JP4735965B2 (en) Remote communication system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17812412; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17812412; Country of ref document: EP; Kind code of ref document: A1)