CN110405778B - Robot - Google Patents

Robot

Info

Publication number
CN110405778B
CN110405778B
Authority
CN
China
Prior art keywords
touch
robot
touched
processor
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810404381.8A
Other languages
Chinese (zh)
Other versions
CN110405778A (en)
Inventor
刘阳 (Liu Yang)
赵强 (Zhao Qiang)
魏凯凯 (Wei Kaikai)
李小灵 (Li Xiaoling)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shen Zhen Gli Technology Ltd
Original Assignee
Shen Zhen Gli Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shen Zhen Gli Technology Ltd filed Critical Shen Zhen Gli Technology Ltd
Priority to CN201810404381.8A
Publication of CN110405778A
Application granted
Publication of CN110405778B
Legal status: Active (current)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Toys (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the present application provides a robot comprising a robot body, a processor, and a touch sensor array. The processor is disposed within the robot body. The touch sensor array is disposed on the robot body and includes a plurality of touch sensors arranged in an array; the touch sensors respectively detect a user's touch actions to generate a timing detection result and send it to the processor. The processor classifies the touch actions into different touch types according to the timing detection result output by the touch sensor array and formulates different reaction strategies for the different touch types. In this way, the control modes of the robot can be enriched and the interest of interacting with the robot enhanced.

Description

Robot
Technical Field
The application relates to the technical field of robot control, in particular to a robot.
Background
At present, consumer-grade robots are increasingly popular on the market and increasingly widely used in fields such as education, service, and entertainment, gradually entering homes, schools, and various service venues.
However, in the prior art, the control modes of consumer robots such as companion robots, toy robots, entertainment robots, and educational robots are limited to a single mode, and such robots are not very engaging.
Disclosure of Invention
The main technical problem solved by the present application is to provide a robot that can overcome problems of the prior art such as the single interaction mode of existing robots.
In order to solve the foregoing technical problem, an embodiment of the present application provides a robot including a robot body, a processor, and a touch sensor array. The processor is disposed within the robot body. The touch sensor array is disposed on the robot body and includes a plurality of touch sensors arranged in an array; the touch sensors respectively detect a user's touch actions to generate a timing detection result and send it to the processor. The processor classifies the touch actions into different touch types according to the timing detection result output by the touch sensor array and formulates different reaction strategies for the different touch types.
Compared with the prior art, the beneficial effects of the present application are as follows. By providing a touch sensor array in which a plurality of touch sensors are arranged, a user can touch several touch sensors, touching different sensors at the same or different times. The touch sensor array generates a timing detection result, and because there are multiple touch modes, multiple timing detection results can be generated. The processor classifies the touch actions into different touch types according to the timing detection result and formulates different reaction strategies for the different touch types. This enriches the touch types recognizable by the touch sensor array and therefore the interaction modes between the robot and the user, making the robot more interesting and more intelligent.
Drawings
FIG. 1 is a schematic structural diagram of an embodiment of a robot according to the present application;
FIG. 2 is a schematic diagram of a circuit configuration of an embodiment of a robot according to the present application;
FIG. 3 is a schematic diagram of a touch sensor array according to an embodiment of the present application;
FIG. 4 is a schematic view of a first process of an embodiment of a robot according to the present application;
FIG. 5 is a second schematic flow diagram of an embodiment of a robot according to the present application;
FIG. 6 is a schematic view of a third process of the robot embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, a single touch sensor is provided on a robot, and when the sensor is touched the robot produces a single reaction or action. Touch sensors may be arranged at different positions so that the robot reacts differently when sensors at different positions are touched, but touching the sensor at any one position still produces only one reaction: the sensor at a given position does not support multiple touch modes, and the robot does not have multiple reaction modes for it. As a result, the touch interaction between the robot and the user is limited, the control mode of the robot is single, and the robot is not very interesting. To solve these problems, the present application provides the following embodiments:
referring to fig. 1-4, the robot embodiment of the present application includes: the robot comprises a robot body 11, a processor 12 and a touch sensor array 13.
In this embodiment, the robot body 11 may refer to the part of the robot that includes most of its components, such as the body, frame, and structure of the robot itself together with its internal and external parts. It further includes, for example, a housing (not labeled), a driving component (not shown) for driving the robot to move, a moving component (not labeled) such as wheels or legs, an audio component (not shown) for receiving external voice information and emitting sound, a display component (not labeled) for displaying image, text, or expression information, an actuator (not labeled) such as a robot arm for performing other motions, and various electronic components, circuit structures, and part structures.
The processor 12 is provided in the robot body 11. The processor 12 is configured to control the operation of the entire robot, receive related control instructions, and output control information to control the operation of each part of the robot, and the robot outputs corresponding responses according to the related control information.
The processor 12 may be a CPU (Central Processing Unit). The processor 12 may be an integrated circuit chip having logic and/or signal processing and computing capabilities. The processor 12 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor 12 may be any conventional processor.
The touch sensor array 13 is disposed on the robot body 11. A receiving part of the touch sensor array 13 is provided on the surface of the housing of the robot body 11 where it can be touched by a user; the receiving part is the means for acquiring a touch signal. Further, the receiving part may be covered by artificial skin, so that it feels better to the touch and the robot is more aesthetically pleasing. The touch sensor array 13 is connected to the processor 12. The touch sensor array 13 includes a plurality of touch sensors 131 arranged in an array; the touch sensors 131 respectively detect a user's touch actions to generate a timing detection result and send it to the processor 12. The timing detection result is, for example, a result in which different touch sensors 131 are touched with time differences in a certain time sequence, rather than a mere touched/not-touched result as in the related art. In the present embodiment, the touch sensor 131 is, for example, a capacitive touch sensor; the touch action may also be a contact action, and the touch sensor 131 may also be referred to as a tactile sensor.
The processor 12 classifies the touch actions into different touch types according to the timing detection result output by the touch sensor array 13 and formulates different reaction strategies for the different touch types. For example, the processor 12 processes the timing detection result, identifies the touch action corresponding to it to obtain the corresponding touch type, and executes the corresponding reaction strategy for that touch type, for example by controlling different components to exhibit the reaction. Specifically, the user's touch generates a combined signal of timing signal and touch signal at the touch sensor array 13, which is sent to the processor 12; the processor 12 processes the combined signal and identifies the touch type corresponding to it.
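For illustration only, the following minimal Python sketch shows one possible way to represent a timing detection result and to dispatch a classified touch type to a reaction strategy. The TouchEvent record, the strategy table, and all names in it are assumptions of this sketch, not part of the claimed design:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TouchEvent:
    row: int          # row of the touched sensor (e.g. the "3" in C31)
    col: int          # column of the touched sensor (e.g. the "1" in C31)
    timestamp: float  # time the touch was reported, in seconds

# A timing detection result as a time-ordered list of touch events:
# C31 -> C32 -> C33, touched 2.5 s apart.
timing_result: List[TouchEvent] = [
    TouchEvent(3, 1, 0.0), TouchEvent(3, 2, 2.5), TouchEvent(3, 3, 5.0)
]

# Hypothetical reaction strategies keyed by touch type.
REACTIONS: Dict[str, Callable[[], None]] = {
    "tap": lambda: print("audio: 'I am angry'; display: angry expression"),
    "scratch": lambda: print("audio: giggling; action: wave arms"),
    "stroke": lambda: print("display: smiling expression; audio: soft music"),
}

def dispatch(touch_type: str) -> None:
    """Run the reaction strategy for a classified touch type."""
    REACTIONS.get(touch_type, lambda: print("invalid touch: no reaction"))()

dispatch("stroke")  # -> display: smiling expression; audio: soft music
```

A classifier sketch producing the touch type string is given further below, after the classification rules are described.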
In this embodiment, by providing the touch sensor array 13 in which a plurality of touch sensors 131 are arranged, a user can touch several touch sensors 131, touching different sensors within the same or different time ranges, and the touch sensor array 13 generates a timing detection result; that is, multiple touch modes produce multiple timing detection results. The processor 12 classifies the touch actions into different touch types according to the timing detection results and formulates different reaction strategies, so that the touch types of the touch sensor array 13 are enriched, thereby enriching the interaction and control modes between the robot and the user and making the robot more interesting and intelligent.
Optionally, the processor 12 determines at least one of, or a combination of, the number, the positions, and the touched time differences of the touched touch sensors 131 in the touch sensor array 13 according to the timing detection result, so as to classify the touch actions into different touch types.
In this embodiment, the timing detection result may include at least one of, or a combination of, the number, the positions, and the touched time differences of the touched touch sensors 131: the number of touched sensors, their positions, their touched time differences, any combination of two of these (number and time, number and position, position and time), or the combination of all three (number, position, and time).
Unlike the single touch mode (touched or not touched) and single reaction mode of the prior art, this embodiment determines the touch action by combining one or more of the number, the positions, and the touched time differences of the touched sensors, enriching the interaction types in both time and space, thereby further enriching the robot's interaction modes, enhancing its interest, and strengthening its interaction function.
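For illustration only, a minimal sketch of extracting these three features (number, positions, time differences) from a timing detection result, reusing the same illustrative TouchEvent record assumed in the previous sketch:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TouchEvent:
    row: int
    col: int
    timestamp: float

def extract_features(
    events: List[TouchEvent],
) -> Tuple[int, List[Tuple[int, int]], List[float]]:
    """Return (number of distinct sensors touched, positions in touch order,
    time differences between successively touched sensors)."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    positions = [(e.row, e.col) for e in ordered]
    count = len({(e.row, e.col) for e in ordered})  # distinct sensors touched
    deltas = [b.timestamp - a.timestamp for a, b in zip(ordered, ordered[1:])]
    return count, positions, deltas

events = [TouchEvent(1, 1, 0.0), TouchEvent(1, 2, 2.5), TouchEvent(1, 3, 5.0)]
count, positions, deltas = extract_features(events)
print(count, positions, deltas)  # 3 [(1, 1), (1, 2), (1, 3)] [2.5, 2.5]
```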
Referring to figs. 1-4, optionally, the touch type includes at least one of tapping, scratching, and stroking, or a combination thereof. By classifying different touch actions into different touch types, each touch type can correspond to at least one reaction strategy. For example, the tapping action may correspond to an "angry" strategy: when the processor 12 determines from the timing detection result that the touch type is tapping, corresponding audio information such as "I am angry" is emitted through the audio component, and/or an angry expression or image is shown through the display component, and/or an angry gesture is made through a mechanical arm. Of course, different degrees of one touch type may correspond to multiple reaction strategies; for example, different reaction strategies may be formulated according to the degree of the touch.
With reference to figs. 1-5, optionally, when the processor 12 determines from the timing detection result that touch sensors 131 arranged along the same direction in the touch sensor array 13 are touched multiple times, that the number of touch sensors 131 touched each time is greater than a preset number threshold, and that the touched time difference of at least some adjacent touch sensors 131 is greater than a preset first time threshold, the touch action is classified as a stroking action.
In the present embodiment, the touch sensors 131 arranged in the same direction may be arranged along the row direction (sensors in the same row direction but in different rows are also arranged in the same direction), along the column direction (likewise for sensors in different columns), or along the same oblique direction (sensors on different but mutually parallel oblique lines are also arranged in the same direction).
In this embodiment, the multiple touches may include unidirectional multiple touches along a set direction, for example "→", with all touches along that direction. Of course, the multiple touches may also include repeated back-and-forth touches along the set direction (the original drawing here shows a back-and-forth arrow, e.g. "⇄"), with the touches along the back-and-forth direction. Each touch refers to a single pass along one direction.
Referring to fig. 3, taking the 4-by-4 touch sensor array 13 as an example, the array includes 16 touch sensors 131 in 4 rows and 4 columns; for the row direction, the touch sensors 131 arranged along the row direction form 4 rows. When only one row of touch sensors 131 is touched along a single direction, multiple touches means touching that row twice or more. For example, touching the row three times, C31 → C32 → C33 → C34 each time, also counts as multiple touches; the number of touch sensors touched each time in the three touches is 4.
If only one row of touch sensors 131 is touched along the back-and-forth (repeated) direction, multiple touches may mean touching that row at least once back and forth, where one back-and-forth touch includes one forward pass and one reverse pass and therefore counts as two touches in total. For example, touching the row one round trip, C31 → C32 → C33 → C34 then C34 → C33 → C32 → C31, is also a case of multiple touches; the number of touch sensors touched each time is 4. Of course, the back-and-forth touch need not be an exact round trip; it may also be a go-return-go pattern such as C31 → C32 → C33 → C34, C34 → C33 → C32 → C31, C31 → C32 → C33 → C34.
If at least two rows of touch sensors 131 are touched in sequence along a single direction, multiple touches means that each of the at least two rows is touched at least once; the column direction and other directions are treated the same way. For example, sequentially touching the two rows C31 → C32 → C33 → C34 and C41 → C42 → C43 → C44, each row at least once, is a case of multiple touches. Sequentially touching at least two rows along the back-and-forth direction works the same way: as long as each row of touch sensors 131 is touched at least once, it is considered multiple touches regardless of unidirectional or repeated touching. Alternatively, sequentially touching the two partial rows C31 → C32 → C33 and C42 → C43 → C44 is also considered multiple touches, with 3 sensors touched each time.
Optionally, if at least two rows of touch sensors 131 are touched simultaneously along a single direction, multiple touches means that the at least two rows are simultaneously touched twice or more. For example, simultaneously touching the two rows C31 → C32 → C33 → C34 and C41 → C42 → C43 → C44 two or more times may be a case of multiple touches. Simultaneously touching at least two rows along the repeated direction follows the same principle as touching one row along the repeated direction described above. Referring to fig. 3, in the present embodiment the number of touch sensors 131 touched each time must be greater than a preset number threshold. For the 4-by-4 sensor array, the threshold for the row direction is, for example, 3, so that when one or more rows of touch sensors 131 are touched, the number of sensors touched per row must be greater than 3 (in other implementations the comparison may be "greater than or equal to"). The column direction and other directions are treated likewise. The preset number may be determined according to the number of touch sensors 131 and other factors, and is not limited here.
From the above examples it can be understood that, in multiple touches, the touch sensors 131 along the same arrangement direction touched in each pass may be exactly the same (e.g. C31 → C32 → C33 → C34 then C31 → C32 → C33 → C34), partially the same (e.g. C31 → C32 → C33 then C32 → C33 → C34), or completely different (e.g. C31 → C32 → C33 → C34 then C21 → C22 → C23 → C24).
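For illustration only, a minimal sketch of splitting a time-ordered touch sequence into unidirectional passes, so that a back-and-forth touch counts as two touches as described above. The function name and the reversal rule (a new pass starts when the column progression reverses) are assumptions of this sketch:

```python
from typing import List

def split_into_passes(cols: List[int]) -> List[List[int]]:
    """Split a sequence of touched column indices into unidirectional passes."""
    passes: List[List[int]] = [[cols[0]]]
    direction = 0  # +1 forward, -1 backward, 0 unknown
    for prev, cur in zip(cols, cols[1:]):
        step = 1 if cur > prev else -1
        if direction != 0 and step != direction:
            passes.append([prev])  # reversal: new pass starts at the turning point
        direction = step
        passes[-1].append(cur)
    return passes

# One round trip on row C3: C31..C34 forward then back, counted as two touches.
print(split_into_passes([1, 2, 3, 4, 3, 2, 1]))
# [[1, 2, 3, 4], [4, 3, 2, 1]]
```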
In this embodiment, the touched time difference of at least some adjacent touch sensors 131 is greater than the preset first time threshold. The touched time difference refers to a time interval between two adjacent touch sensors 131 being touched successively. The preset first time threshold may be set according to specific situations, and is not limited in this embodiment.
Referring to fig. 3, taking the 4-by-4 touch sensor array 13 as an example with a preset number threshold of 3 and a preset first time threshold of 2 s: if the touch sequence is C11 → C12 → C13 → C14 and the touched time difference between adjacent touch sensors 131 among C11, C12, C13, and C14 is greater than 2 s, the touch action can be classified as a stroking action. For another example, the touch sequence C11 → C21 → C31, C12 → C22 → C32, C13 → C23 → C33, C14 → C24 → C34 can likewise be classified as a stroking action if it meets the number threshold and first time threshold requirements.
In the above manner, the described touch action is classified as the stroking touch type, so that when a user strokes the robot as in everyday life the action is easily recognized as stroking. Because this touch type differs little from everyday stroking, the user's learning cost is reduced and the robot is easy to pick up, making the robot more intelligent and closer to daily life.
With reference to figs. 1-5, optionally, when the processor 12 determines from the timing detection result that touch sensors 131 arranged along the same direction in the touch sensor array 13 are touched multiple times, that the number of touch sensors 131 touched each time is smaller than the preset number threshold, and that the touched time difference of at least some adjacent touch sensors 131 is greater than a preset second time threshold, the touch action is classified as a scratching action.
For the meaning of multiple touches, refer to the description above.
Relative to the stroking action, the number of touch sensors 131 touched each time in the scratching action is smaller than the preset number threshold, and thus smaller than the number touched each time in the stroking action.
In this embodiment, the scratching action also requires that the touched time difference of at least some adjacent touch sensors 131 be greater than the preset second time threshold. Optionally, the preset second time threshold may be smaller than the preset first time threshold; further, for scratching, the touched time difference of at least some adjacent touch sensors 131 is greater than the preset second time threshold and smaller than the preset first time threshold.
Referring to fig. 3, taking the 4-by-4 touch sensor array 13 as an example with a preset number threshold of 3 and a preset second time threshold of 1 s: if the touch sequence is C12 → C13, touched multiple times, and the touched time difference between C12 and C13 is greater than 1 s, the touch action can be classified as a scratching action. For another example, the touch sequence C12 → C13, C22 → C23 with multiple touches can be classified as scratching if it meets the number and time threshold requirements; likewise C12 → C13, C13 → C12 with the time difference between touching C12 and C13 greater than 1 s.
Similarly, scratching actions distinguished in the above manner involve fewer touched sensors per pass than stroking actions, making them closer to scratching in real life, reducing the user's learning cost and enriching the interest of the robot.
With reference to figs. 1-4, optionally, when the processor 12 determines from the timing detection result that the number of touched touch sensors 131 is greater than the preset number threshold and the touched time difference of adjacent touch sensors 131 is smaller than the preset first time threshold or the preset second time threshold, the touch action is classified as a tapping action.
Optionally, the preset second time threshold is smaller than the preset first time threshold, and the touched time difference of the adjacent touch sensors 131 for the tapping motion is smaller than the preset second time threshold.
In the present embodiment, the tapping motion can be considered as close to touching the plurality of touch sensors 131 at the same time, but actually, there is still a time difference between the times of touching different touch sensors.
Referring to fig. 3, taking the 4-by-4 touch sensor array 13 as an example with a preset number threshold of 3 and a preset time threshold of 1 s: if the touched touch sensors 131 are C11, C12, C13, and C14 and the touched time difference between adjacent ones of them is less than 1 s, the touch action can be classified as a tapping action. Likewise, if the touched positions are C11, C12, C13, C21, C22, C23, C31, C32, and C33 and the touched time difference between adjacent touch sensors 131 is less than 1 s, the touch is classified as a tapping action.
In this embodiment, for the tapping action, the number of touched touch sensors 131 being greater than the preset number threshold may mean that the number touched at each tap is greater than the threshold, and the touched time difference between adjacent touch sensors 131 being smaller than a preset time threshold leads to classification as tapping. Alternatively, it may be required that the time interval between the earliest-touched and latest-touched sensors among the touched touch sensors 131 be smaller than a preset time threshold, such as the preset second time threshold.
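Drawing the three rules above together, for illustration only, a simplified single-pass classifier sketch follows. It uses the example threshold values from the text (number threshold 3, first time threshold 2 s, second time threshold 1 s); the multi-pass and same-direction checks described above are deliberately omitted for brevity, and all names are assumptions of this sketch:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TouchEvent:
    row: int
    col: int
    timestamp: float

COUNT_THRESHOLD = 3          # preset number threshold (example value from the text)
FIRST_TIME_THRESHOLD = 2.0   # preset first time threshold, in seconds
SECOND_TIME_THRESHOLD = 1.0  # preset second time threshold, in seconds

def classify(events: List[TouchEvent]) -> str:
    """Classify one touch pass as 'tap', 'stroke', 'scratch', or 'invalid'."""
    if len(events) < 2:
        return "invalid"
    ordered = sorted(events, key=lambda e: e.timestamp)
    count = len({(e.row, e.col) for e in ordered})  # distinct sensors touched
    deltas = [b.timestamp - a.timestamp for a, b in zip(ordered, ordered[1:])]
    if count > COUNT_THRESHOLD and all(d < SECOND_TIME_THRESHOLD for d in deltas):
        return "tap"       # many sensors, nearly simultaneous
    if count > COUNT_THRESHOLD and any(d > FIRST_TIME_THRESHOLD for d in deltas):
        return "stroke"    # many sensors, slow progression along a direction
    if count < COUNT_THRESHOLD and any(d > SECOND_TIME_THRESHOLD for d in deltas):
        return "scratch"   # few sensors, short repeated passes
    return "invalid"

# C11 -> C12 -> C13 -> C14 with 2.5 s between adjacent touches: a stroke.
print(classify([TouchEvent(1, c, 2.5 * c) for c in range(1, 5)]))  # stroke
```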
Optionally, the processor 12 further formulates different reaction strategies according to different durations of the same touch type.
For example, the longer the stroking lasts, the deeper the affection conveyed may be considered to be, and the processor 12 may formulate different reaction strategies for different stroking durations. If the stroking action lasts for 3 minutes, the processor 12 formulates a contented reaction strategy and outputs actions, expressions, and voice through the components of the robot body 11, such as a smiling expression, soft music, or related voice information; if it lasts for 5 minutes, the processor 12 formulates a "dreamland" reaction strategy, controls the robot body 11 to enter a "sleep" state, and outputs corresponding information through the components of the robot body 11, such as an eye-closing expression or a yawning action.
For the scratching action and the tapping action, the processor 12 likewise formulates different reaction strategies according to different durations.
Of course, other touch types, such as a pressing action, may be included in other embodiments. When the processor 12 determines from the timing detection result that the number of touched touch sensors 131 is greater than the preset number threshold and that the touched sensors remain in the touched state for longer than a certain time, such as the preset first time threshold, similar to pressing on the touch sensor array 13, the touch action is classified as a pressing action.
In this embodiment, if the processor 12 cannot determine a definite touch type from the timing detection result, it may treat the touch action as an invalid touch type and formulate a corresponding response strategy, for example outputting sound, motion, and/or expression to inform the user that the touch action is invalid. Of course, no reaction strategy may be formulated at all.
Formulating different reaction strategies according to the different durations of the same touch type can further enrich the robot's interaction scenarios, increase its interest, and make it more intelligent.
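For illustration only, a minimal sketch of choosing a reaction by (touch type, duration) as described in the paragraphs above; the duration cutoffs and strategy strings are examples taken loosely from the text, not specified values:

```python
def reaction_for(touch_type: str, duration_s: float) -> str:
    """Illustrative duration-dependent strategy table; all values are examples."""
    if touch_type == "stroke":
        if duration_s >= 300:  # about 5 minutes: 'dreamland' strategy
            return "enter sleep state: eye-closing expression, yawning action"
        if duration_s >= 180:  # about 3 minutes: contented strategy
            return "smiling expression, soft music, related voice information"
        return "brief contented sound"
    if touch_type == "press":
        return "acknowledge the press"
    return "default reaction for " + touch_type

print(reaction_for("stroke", 200.0))  # smiling expression, soft music, ...
```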
Referring to figs. 1 to 6, optionally, the robot further includes an image capture device 14 configured to capture images of users; the processor 12 distinguishes between individual users according to the images captured by the image capture device 14 and formulates different reaction strategies for the same touch type from different users.
In this embodiment, the image capture device 14 may be a camera. For example, the robot includes a memory (not shown), and the image capture device 14 is connected to the processor 12 and the memory. The image capture device 14 may capture images of the user in real time or under the direction of the processor 12. The captured images are stored in the memory, and the processor 12 applies feature recognition, such as face recognition, to distinguish between individual users, so as to formulate different reaction strategies for the same touch type from different users, making the robot more intelligent and the interaction scenarios richer.
For example, in one family, a child plays with the robot for a long time while a parent interacts with it only briefly, so the robot can distinguish the child from the parent by capturing their respective image information. When the child performs a scratching action on the robot, the reaction strategy output by the processor 12 may be happy, for example giggling loudly or waving its hands and feet; when the parent performs the same scratching action, the reaction strategy may be fearful, for example an escape action driving the wheels away from the user. That is, the processor 12 formulates different reaction strategies for the same touch action from different users.
Further, the processor 12 assigns a familiarity parameter to the user, changes the familiarity parameter according to the interaction process between the user and the robot, and formulates different reaction strategies for the same touch type according to different familiarity parameters.
For example, the images captured by the image capture device 14 are stored in the memory to form a user image library. When the image capture device 14 captures a user's image information, the processor 12 compares it with the user image library; if the image information already exists, the interaction process is recorded to change the familiarity parameter, such as increasing it. If the library does not contain the user's image information, it is newly added and the interaction process is then recorded. For example, the familiarity parameter may be determined from the accumulated interaction time: the longer the accumulated interaction time, the larger the familiarity parameter, and the shorter, the smaller. If a child interacts with the robot for longer than a parent does, the familiarity parameter the robot accumulates for the child is larger than that accumulated for the parent.
Of course, the robot may also have a preset user whose familiarity parameter is preset to the maximum, for example by taking and uploading a picture through the image capture device 14, or by uploading it to the robot (e.g. to its memory or a server) through another external device such as a mobile phone. For a non-preset user, image information is collected through the image capture device as described above.
In this embodiment, assigning a familiarity parameter to the user and formulating different reaction strategies for the same touch type according to different familiarity parameters makes the interaction between the robot and the user more humanized and intelligent.
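For illustration only, a minimal sketch of the familiarity bookkeeping described above: interaction time is accumulated per recognized user, and a strategy is chosen by familiarity. The class, the one-hour cutoff, and the strategy strings are assumptions of this sketch:

```python
from collections import defaultdict

class FamiliarityTracker:
    def __init__(self) -> None:
        # user_id -> accumulated interaction time in seconds
        self._seconds = defaultdict(float)

    def record_interaction(self, user_id: str, seconds: float) -> None:
        self._seconds[user_id] += seconds  # longer interaction -> higher familiarity

    def familiarity(self, user_id: str) -> float:
        return self._seconds[user_id]

    def strategy_for_scratch(self, user_id: str) -> str:
        # High familiarity (e.g. the child): happy reaction; low: wary reaction.
        return "giggle and dance" if self.familiarity(user_id) > 3600 else "back away"

tracker = FamiliarityTracker()
tracker.record_interaction("child", 7200.0)
tracker.record_interaction("parent", 600.0)
print(tracker.strategy_for_scratch("child"))   # giggle and dance
print(tracker.strategy_for_scratch("parent"))  # back away
```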
Optionally, the robot includes a plurality of touch sensor arrays 13 disposed at different positions, and the processor 12 formulates different reaction strategies for the same touch type received by the touch sensor arrays 13 at different positions.
In the prior art there is no notion of touch type, only touched or not, similar to a 0/1 switch: the touch sensor at one position triggers only one reaction strategy, and only sensors at different positions differ in their strategies. In this embodiment, by contrast, the touch sensor array 13 at a single position distinguishes different touch types with different reaction strategies, and when touch sensor arrays 13 at different positions receive the same touch type the robot's reaction strategies also differ. This greatly enriches the robot's interaction scenarios and interest and solves prior-art problems such as a single interaction mode and low interest.
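For illustration only, with several touch sensor arrays at different positions, the strategy table can be keyed by (array position, touch type). A minimal sketch, in which every entry is an invented example:

```python
STRATEGY_TABLE = {
    ("head", "stroke"): "purr and close eyes",
    ("head", "tap"): "say 'I am angry'",
    ("back", "stroke"): "wag tail actuator",
    ("back", "scratch"): "giggle",
}

def react(position: str, touch_type: str) -> str:
    """Look up the reaction for a touch type received at a given array position."""
    return STRATEGY_TABLE.get((position, touch_type), "no reaction")

print(react("head", "stroke"))  # purr and close eyes
print(react("back", "tap"))     # no reaction
```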
In summary, by providing the touch sensor array 13 with a plurality of touch sensors 131, a user can touch several touch sensors 131 at the same or different times; the touch sensor array 13 generates a timing detection result, and the processor 12 classifies the touch actions into different touch types according to that result and formulates different reaction strategies. The touch types of the touch sensor array 13 are thereby enriched, enriching the interaction modes between the robot and the user and making the robot more interesting and intelligent.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (6)

1. A robot, comprising:
a robot body;
a processor disposed within the robot body;
the touch sensor array is disposed on the robot body and comprises a plurality of touch sensors arranged in an array, and the touch sensors respectively detect touch actions of a user to generate a timing detection result and send the timing detection result to the processor;
the processor classifies the touch action into different touch types according to the timing detection result output by the touch sensor array and formulates different reaction strategies for the different touch types, wherein the touch types comprise at least one of tapping, scratching, and stroking, or a combination thereof;
when the processor determines, according to the timing detection result, that the touch sensors arranged along the same direction in the touch sensor array are touched multiple times, that the number of touch sensors touched each time is greater than a preset number threshold, and that the touched time difference of at least some adjacent touch sensors is greater than a preset first time threshold, the touch action is classified as a stroking action; and/or when the processor determines, according to the timing detection result, that the touch sensors arranged along the same direction in the touch sensor array are touched multiple times, that the number of touch sensors touched each time is smaller than the preset number threshold, and that the touched time difference of at least some adjacent touch sensors is greater than a preset second time threshold, the touch action is classified as a scratching action; and/or when the processor determines, according to the timing detection result, that the number of touched touch sensors is greater than the preset number threshold and the touched time difference of adjacent touch sensors is smaller than the preset first time threshold or the preset second time threshold, the touch action is classified as a tapping action.
2. The robot of claim 1, wherein: the processor determines at least one of, or a combination of, the number, the positions, and the touched time differences of the touched touch sensors in the touch sensor array according to the timing detection result, so as to classify the touch action into different touch types.
3. The robot of claim 1, wherein: the processor further formulates different reaction strategies according to different durations of the same touch type.
4. The robot of claim 1, wherein: the robot further comprises an image capture device for capturing images of users, and the processor distinguishes between individual users according to the images captured by the image capture device and formulates different reaction strategies for the same touch type from different users.
5. The robot of claim 4, wherein: the processor assigns a familiarity parameter to the user, changes the familiarity parameter according to the interaction process between the user and the robot, and formulates different reaction strategies for the same touch type according to the familiarity parameter.
6. The robot of claim 1, wherein: the robot comprises a plurality of touch sensor arrays arranged at different positions, and the processor formulates different reaction strategies for the same touch type received by the touch sensor arrays at different positions.
CN201810404381.8A 2018-04-28 2018-04-28 Robot Active CN110405778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810404381.8A CN110405778B (en) 2018-04-28 2018-04-28 Robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810404381.8A CN110405778B (en) 2018-04-28 2018-04-28 Robot

Publications (2)

Publication Number Publication Date
CN110405778A (en) 2019-11-05
CN110405778B (en) 2022-10-21

Family

ID=68345674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810404381.8A Active CN110405778B (en) 2018-04-28 2018-04-28 Robot

Country Status (1)

Country Link
CN (1) CN110405778B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115047993A (en) * 2021-02-26 2022-09-13 华为技术有限公司 Touch behavior identification method, device and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100941638B1 (en) * 2007-12-18 2010-02-11 Electronics and Telecommunications Research Institute (한국전자통신연구원) Contact Behavior Recognition System and Method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013244566A (en) * 2012-05-28 2013-12-09 Fujitsu Ltd Robot, and method for controlling the same
CN205201537U (en) * 2015-11-04 2016-05-04 上海拓趣信息技术有限公司 (Shanghai Tuoqu Information Technology Co., Ltd.) Companion care robot
CN107225577A (en) * 2016-03-25 2017-10-03 深圳光启合众科技有限公司 (Shenzhen Kuang-Chi Hezhong Technology Co., Ltd.) Tactile perception method and tactile sensor applied to an intelligent robot
WO2018016461A1 (en) * 2016-07-20 2018-01-25 Groove X, Inc. (Groove X株式会社) Autonomous-behavior-type robot that understands emotional communication through physical contact

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Piezoresistive-array composite tactile and slip sensor (压阻阵列触滑觉复合传感器); Luo Zhizeng (罗志增) et al.; Robot (《机器人》); 2001-03-28 (No. 02); full text *

Also Published As

Publication number Publication date
CN110405778A (en) 2019-11-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant