WO2018108176A1 - Robot video call control method, device and terminal - Google Patents


Info

Publication number
WO2018108176A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
target
robot
feature information
information
Prior art date
Application number
PCT/CN2017/116674
Other languages
French (fr)
Chinese (zh)
Inventor
何坚强 (He Jianqiang)
Original Assignee
北京奇虎科技有限公司 (Beijing Qihu Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京奇虎科技有限公司 (Beijing Qihu Technology Co., Ltd.)
Publication of WO2018108176A1 publication Critical patent/WO2018108176A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00 Manipulators not otherwise provided for
    • B25J 11/0005 Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions

Definitions

  • the present invention relates to the field of automatic control technologies, and in particular, to a robot video call control method, apparatus and terminal.
  • Intelligent robots, as one category of smart products, can replace or assist humans in some work and are used in all walks of life.
  • Intelligent robots have gradually entered thousands of households to handle daily chores. Although such robots have simple automatic control and movement functions, they cannot meet modern needs.
  • The present invention provides a robot video call control method and a corresponding device.
  • A robot video call control method of the present invention includes the following steps: establishing a video call with a calling party and transmitting the video stream obtained by the local camera unit to the calling party; receiving a seek instruction initiated by the calling party, parsing the target information contained in the seek instruction, and determining the target feature information of the corresponding target object according to that target information; when the target object is not captured, activating the walking device to move the local machine, performing image recognition on the camera unit's video stream during the movement, and determining an image containing the target feature information so as to capture the target object; and, after capturing the target object, controlling the walking device so that the local machine maintains a preset distance range from the target object.
  • The present invention also provides a robot video call control apparatus comprising: at least one processor; and at least one memory communicably coupled to the at least one processor, the at least one memory containing instructions executable by the processor to perform the steps of the method: when the target object is not captured, the walking device is activated to move the local machine, image recognition is performed on the camera unit's video stream during the movement, and an image containing the target feature information is determined so as to capture the target object; when the target object is captured, the walking device is controlled so that the local machine maintains a preset distance range from the target object.
  • the present invention also provides a video callable mobile robot comprising a processor for running a program to perform the steps of the robot video call control method.
  • The present invention also provides a computer program comprising computer readable code which, when run by the video-callable mobile robot, causes the robot video call control method to be executed.
  • the invention provides a computer readable medium storing the computer program of the fourth aspect.
  • the present invention has the following beneficial effects:
  • The robot video call control method, device and terminal provided by the present invention establish a remote connection through pre-stored images of persons, their relationships, and their contact information, so that remote control can be initiated when a remote instruction is sent by such a person.
  • The image of the target object stored in advance is used as the feature information for identification; after the target object is captured, extended feature information of the target object is collected, so that fast capture of the target object can be achieved through the extended feature information when the target object is subsequently recaptured.
  • A voice reminder is provided in the invention to ensure that the child receives the video call initiated by the parent in time, which helps the parent monitor the child's activities anytime and anywhere.
  • The process of capturing the target object mainly uses voice and/or infrared positioning, so that the robot can locate the child's position more quickly while searching, greatly improving the speed and accuracy of person recognition. After capturing the child, the distance measuring device keeps the robot within a preset distance from the child, ensuring clear reception of the child's information and capturing the child's state over the maximum range.
  • The invention integrates video communication, mobility, and human-computer interaction with entertainment and learning functions. Any person stored in the robot's database can issue an instruction to start the robot, which, by receiving and parsing the instruction, completes and/or starts the tasks and functions the instruction involves. The robot provided by the present invention can also, through observation and learning, provide children with human-computer interaction activities matched to their age and/or intelligence, so that the machine can develop the child's intelligence as much as possible while accompanying the child and reduce the child's loneliness, making it practical in real life.
  • FIG. 1 is a flowchart of a method for controlling a video call of a robot according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of a method for controlling a video call of a robot according to another embodiment of the present invention;
  • FIG. 3 is a flowchart of a method for controlling a video call of a robot according to still another embodiment of the present invention.
  • FIG. 4 is a flowchart of a method for controlling a video call of a robot according to still another embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a video call control apparatus for a robot according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a sub-frame of a robot video call control device according to another embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a method for controlling a video call of a robot according to still another embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a method for controlling a video call of a robot according to still another embodiment of the present invention.
  • Figure 9 is a block diagram of a video callable mobile robot performing the method according to the present invention.
  • Figure 10 is a schematic diagram of a memory unit for holding or carrying program code implementing a method in accordance with the present invention.
  • The terms "terminal" and "terminal device" used herein include both devices having only a wireless signal receiver without transmitting capability and devices having receiving and transmitting hardware capable of two-way communication over a two-way communication link.
  • Such devices may include: cellular or other communication devices with a single-line display, a multi-line display, or no multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, fax, and/or data communication capabilities; PDAs (Personal Digital Assistants), which may include radio frequency receivers, pagers, Internet/intranet access, web browsers, notepads, calendars, and/or GPS (Global Positioning System) receivers; and conventional laptop and/or palmtop computers or other devices that include a radio frequency receiver.
  • PCS Personal Communications Service
  • PDA Personal Digital Assistant
  • A "terminal" may be portable, transportable, installed in a vehicle (air, sea and/or land), or adapted and/or configured to operate locally and/or run in distributed form at any other location on Earth and/or in space.
  • The "terminal" or "terminal device" used herein may also be a communication terminal, an Internet terminal, or a music/video playing terminal, for example a PDA, a MID (Mobile Internet Device), and/or a mobile phone with a music/video playing function, or may be a smart TV, set-top box or similar device.
  • The robot involved in the present invention can understand human language, talk with the operator in human language, and, through programming, form in its own "consciousness" a model of the external environment in which it "survives". It analyzes what is happening and adjusts its movements to meet the operator's requirements. Robots can be programmed so that their intelligence reaches the level of a child: they can walk alone, "see" things and analyze what they see, obey instructions and answer questions in human language. More importantly, they have the ability to "understand".
  • The robot video call control method enables a family member to view the situation at home anytime and anywhere through a communication terminal; when the child at home misses the parents and/or other family members, the child can also contact them in a timely manner through the robot end.
  • The robot can update the stored extended feature information in time by monitoring changes in the extended features of the target object's augmented parts.
  • The robot can receive messages in time and transmit video images of the situation at home; the robot of the present invention also has a human-computer interaction function and can play with children, answer questions, and help with learning.
  • the robot video call control method disclosed in the following embodiments, as shown in FIG. 1, includes the following steps:
  • S100: Establish a video call with the calling party, and transmit the video stream obtained by the local camera unit to the calling party.
  • The information stored in the robot records the relationship between the robot and each family member and establishes a connection mode for direct communication with the family members' terminals.
  • The calling party in step S100 is a family member, i.e. any mobile terminal of a family member, such as a mobile phone, computer, iPad, etc.
  • The application can be an app directly associated with the robot that controls it, or a web link that controls the robot. In order to monitor the child in real time, a family member establishes and activates the robot's video communication transmission function in advance. The robot either directly establishes a video call with the family member's communication terminal, or establishes a video call upon receiving the family member's transmission instruction, and transmits video images to the family member's terminal in real time. Alternatively, the robot directly accepts a video call request issued by a family member from the app controlling the robot or from the web application controlling the machine, and transmits the video stream acquired by the local machine to that family member, i.e. the calling party. The video stream transmitted by the robot is displayed on the calling party's mobile terminal on which the controlling app is installed, or on the web app opened by that mobile terminal.
  • S200: Receive a seek instruction initiated by the calling party, parse the target information contained in the seek instruction, and determine the target feature information of the corresponding target object according to the target information.
  • The robot accepts the call and sends the video stream back to the calling party.
  • The calling party can directly observe the situation at home through the video. If the calling party cannot see the target object in the video, the calling party can issue a seek instruction on its own mobile terminal. The seek instruction contains information about the target object, and this information is stored in the robot or in the cloud connected to the robot. After the robot receives the seek instruction, it parses the target object information contained in the instruction on the local machine and determines the target feature information of the corresponding target object according to that information; the robot then searches for and identifies the target object on this basis in the subsequent process.
  • For example, the mother initiates a "find my daughter" seek instruction to the robot from her mobile terminal. The robot receives the instruction and parses it on the local machine, i.e. extracts the "daughter" information, sends "daughter" to the database storing target information and corresponding feature information, and determines the daughter's feature information from the stored target information (the storage relationship between target feature information and target information is detailed later). The feature information here consists of the daughter's facial features, such as the contour and position of the whole face and the facial organs; the robot will search for and identify the daughter on this basis.
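  • The parse-then-lookup flow just described can be sketched as follows. The command phrases, database layout, and function names are illustrative assumptions only; the patent does not define a concrete API.

```python
# Hypothetical sketch: parse a seek instruction and resolve the target
# feature information from a stored mapping.

# Mapping of target information (names) to stored feature records.
FEATURE_DB = {
    "daughter": {"face_outline": "oval", "feature_vector": [0.12, 0.87, 0.45]},
}

def parse_seek_instruction(instruction: str) -> str:
    """Extract the target information from an instruction such as
    'seek daughter' or 'looking for daughter'."""
    for verb in ("seek", "find", "looking for"):
        if instruction.lower().startswith(verb):
            return instruction[len(verb):].strip()
    return instruction.strip()

def resolve_target_features(target_info: str, db=FEATURE_DB):
    """Query the database for the feature information mapped to the target."""
    return db.get(target_info)
```

The same lookup could equally run in the cloud, with only the resolved feature record sent back to the robot, as the alternative embodiments below describe.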
  • In another embodiment of this step, after the robot receives the seek instruction, it sends the instruction to the cloud; the cloud parses the target information contained in the instruction and sends it back to the robot, and the robot determines the target feature information of the corresponding target object according to the target information and searches for and identifies the target object on this basis in the subsequent process.
  • For example, the above-mentioned mother initiates a "find my daughter" instruction to the robot from her mobile terminal. After receiving the instruction, the robot sends it to the cloud; the cloud parses the information in the instruction, i.e. extracts the "daughter" information, and sends it to the robot. The robot receives the parsed information, sends "daughter" to the database storing the target information and corresponding feature information of "daughter", and determines the daughter's feature information from the stored target information; the robot will then search for and identify the daughter on this basis.
  • In yet another embodiment of this step, after the robot receives the seek instruction, it sends the instruction to the cloud; the cloud parses the target information contained in the instruction, queries the cloud database storing target information and target feature information, determines the target feature information of the corresponding target object in the cloud, and sends the target feature information to the robot, which then searches for and identifies the target object on this basis in the subsequent process.
  • For example, the above-mentioned mother initiates a "find my daughter" instruction to the robot from her mobile terminal. After receiving the instruction, the robot sends it to the cloud; the cloud parses the information in the instruction, i.e. extracts the "daughter" information, determines the daughter's feature information from that information, and transmits the feature information to the robot. The robot directly receives the daughter's feature information and searches for and identifies her on this basis.
  • S300: When the target object is not captured, activate the walking device to move the local machine, perform image recognition on the camera unit's video stream during the movement, and determine an image containing the target feature information so as to capture the target object.
  • After the target object and its feature information are determined in step S200, if image recognition finds no image with the target feature information in the video stream already acquired, the robot activates its own walking device to move. During the movement, the robot identifies the images in the acquired video stream of the camera unit through image recognition technology and determines whether any image contains the target feature information, thereby capturing the target object. The situations in which the target object is not captured include the following: first, the robot does not recognize the target feature information in any image of the video stream, so the walking device is activated; second, the robot has captured the target features of the target object and kept its distance from the target within the preset range, but the target object suddenly moves away so that the robot can no longer recognize its target features, in which case the walking device is activated. However, if the target object has moved away but, before the robot locates its position, the robot finds by ranging that the target object is gradually approaching, the walking device is not activated.
  • The robot's walking device works as follows: the controller, which controls and is electrically connected to the walking device, receives the signal from the camera unit and converts it into an electrical signal; the controller forwards this electrical signal to the driving device that activates the walking device, and the driving device starts the walking device to move the robot. The driving device may be a motor, and the walking device may be wheels, crawler tracks, or the like. For image recognition, a picture is first stored in the robot and used as a model: the robot's processor preprocesses the model and extracts the line contours, the angles between lines, the relative positions of the lines, the colours covered within the contours, and so on. After a video image is captured, in order to determine whether the target object to be captured is present in the current image, the robot's processor preprocesses each frame of the video in turn, extracts the same features (line contours, angles between lines, relative positions of lines, colours within contours, etc.), and compares and fits them against the model; when the fit reaches a set value, the target is considered to be captured in the video image.
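  • The "fitting against a set value" comparison described above can be sketched minimally as follows; the feature representation (a dictionary of extracted attributes) and the threshold value are assumptions, since the patent does not specify how features are encoded.

```python
# Illustrative sketch: each frame's extracted features are compared
# against the stored model, and the target is considered captured when
# the fit score reaches a set threshold.

def fit_score(model_features: dict, frame_features: dict) -> float:
    """Fraction of model features matched in the frame (0.0 to 1.0)."""
    if not model_features:
        return 0.0
    matched = sum(
        1 for key, value in model_features.items()
        if frame_features.get(key) == value
    )
    return matched / len(model_features)

def find_target_in_stream(model_features, frames, threshold=0.8):
    """Return the index of the first frame whose fit reaches the
    threshold, or None if the target is not captured in any frame."""
    for i, frame_features in enumerate(frames):
        if fit_score(model_features, frame_features) >= threshold:
            return i
    return None
```

A real implementation would extract contours and colours from pixels (e.g. with a vision library) rather than compare pre-extracted attributes, but the capture decision has the same shape.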
  • For example, the daughter's feature information is determined in step S200, and the robot searches for the daughter at home on that basis. The robot first activates the walking device at the position where it received the seek instruction, and the camera unit attempts to capture the daughter's target feature information; if it is not captured, the walking device keeps moving while image recognition is performed on the camera unit's video stream until the daughter is captured. Here the target features are the contour features of the daughter's face.
  • S400: After capturing the target object, control the walking device so that the local machine maintains a preset distance range from the target object.
  • The robot can preset a distance range M between itself and the target object. After capturing the target object, the robot first measures the distance L between itself and the target object with a measuring device mounted on it, such as an ultrasonic sensor. If M < L, the robot is far from the target object and outside the preset distance range, so it moves via the walking device until it is within the preset distance range of the target object, and as the target object moves, the robot always keeps the preset distance from it. If M ≥ L, the robot is relatively close to the target object, and it only needs to maintain the preset distance range from the target object.
  • For example, the mother searches for the daughter, and in step S300 the robot finds the daughter according to her feature information. Suppose the preset distance between the robot and the daughter is 3 m, and the robot measures the distance through its own measuring device as 6 m, which is greater than the preset 3 m; the robot then moves via the walking device into the preset distance range from the daughter, and as the daughter walks, the robot always maintains the preset distance of 3 m from her.
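  • The distance comparison above reduces to a single decision per measurement. The function below is a hedged sketch of that decision (the action names are invented labels, not patent terminology):

```python
# Sketch of the distance-keeping decision: `measured` is the distance
# from the ranging sensor (e.g. ultrasonic), `preset` is the preset
# distance range M.

def distance_action(measured: float, preset: float) -> str:
    """Decide whether the robot must move to restore the preset distance."""
    if measured > preset:
        return "move_closer"   # target is too far: start the walking device
    return "hold_position"     # already within the preset range: stay put
```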
  • S410: After capturing the target object, collect extended feature information of the target object's augmented parts other than the target feature information; when the target feature information cannot be captured, locate the target object's augmented parts according to the extended feature information so as to capture the target object.
  • After the target object is captured, extended feature information of the augmented parts of the target object other than the target feature information is collected through the video. During the video call between the calling party and the target object via the robot, movement of the target object and/or the robot may prevent the robot's video from capturing the target object's target feature information; that is, in some frames or consecutive frames of the current video stream, the robot cannot clearly recognize the line contours, angles between lines, relative line positions, colours within contours, etc. that correspond to the target features. The robot can then use the extended feature information of the augmented parts other than the target feature information to quickly find and focus on the target object's target feature information. For example, after the target object has been captured, its target feature information may become unrecognizable because the target object suddenly moves away; the robot then simultaneously identifies the target feature information and the extended feature information of the target object in the current video stream, and if an augmented part of the target object is identified through the extended feature information, the target object is captured according to that augmented part and the target features of the target object are further determined on that basis.
  • For example, the camera captures the daughter's extended feature information other than her facial feature information, such as the colour and style of her clothes, trousers and shoes, and the colour of her hair. When the robot cannot locate and capture the daughter's facial feature information, it can locate the target feature information and capture the target object through the extended feature information of the augmented parts.
  • Alternatively, some frames or consecutive frames of the image are decolourized and processed as black-and-white pictures, the outline features of the body contour are extracted, and these are used as the extended feature information for determining the target object in the process of capturing it.
  • When the target feature information cannot be captured, the walking device is started to continue searching for the target feature information around the augmented part until the target feature information is acquired. Specifically, when the robot cannot capture the target object's target feature information, it first compares the feature information captured by the camera unit with the previously collected extended feature information and locates the augmented part captured by the camera unit; the robot then starts the walking device and moves around the augmented part, continuing to search until the camera unit captures and locates the target feature information, and then continues to capture the target object in the video call.
  • For example, while the daughter sits facing the robot, the robot can capture her facial features. After the daughter stands up, the robot can only capture her body: the daughter wears a pink dress in a round-neck, sleeveless tutu style. At this time, the robot can determine the captured part from the previously collected extended feature information, namely the outline of the daughter's body and the colour and style of the skirt. If the daughter stands up but still faces the robot, the camera unit can capture her facial features; if she stands up and turns her body so that her back faces the robot's camera unit, the camera unit captures the extended feature information of the daughter's torso, i.e. the colour and style of the skirt and the back profile. The robot then needs to start the walking device, move around the torso, and search for the target feature information by changing the camera unit's angle; once the camera unit finds the target feature information, the robot adjusts its distance from the daughter and locates the target feature information before continuing to capture the target object. For instance, when the robot has captured the daughter's target feature information and keeps its distance from her within the preset range, if the contours in the video image of the robot's camera unit become unclear, the robot temporarily does not activate the walking device; after the contours become clear again, if image recognition on the video stream shows that the recognized extended feature information comes from the augmented part of the daughter's back, the walking device is activated to search around that part and capture the daughter's facial feature information.
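  • The fallback logic of S410 can be sketched as a simple priority check: primary (target) features first, then extended features, otherwise keep searching. The set-based feature representation is an illustrative assumption.

```python
# Hedged sketch of the S410 fallback: when the primary (facial) features
# cannot be matched in the current frame, try the previously collected
# extended features (clothing colour/style, body outline, hair colour)
# to locate the target instead.

def capture_target(frame_features: set, target_features: set,
                   extended_features: set):
    """Return which feature set located the target, or None."""
    if target_features & frame_features:
        return "target"     # primary features visible: capture directly
    if extended_features & frame_features:
        return "extended"   # locate the augmented part, search around it
    return None             # nothing recognised: keep moving and searching
```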
  • The extended feature information is collected from the moving scene image, i.e. from the image parts in the video stream that move together with the image part corresponding to the target feature information. After the target object is captured, it is a dynamic image in the video stream relative to the other, static scenery. When the extended information is collected, the extended features on the target object that move together with the target feature information are collected. For example, in some frames or consecutive frames of the robot camera unit's video stream, other line contours change together with the line contours of the face and can be enclosed with the face contour in one closed contour; the parts within that contour are taken as augmented parts, and after the augmented parts are determined, the contours of the augmented parts, the relative positions of the contours, and the colours covering the contours are collected.
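  • One way to read "moves together with the target feature information" is co-displacement across frames: regions whose frame-to-frame motion tracks the face region's motion belong to the same target. The sketch below assumes each region is summarized by one tracked position per frame, which is a simplification of real contour tracking.

```python
# Illustrative sketch: regions whose positions change in step with the
# face region across frames are collected as augmented (extended) parts.

def moves_with(face_track, region_track, tolerance=1.0):
    """True if a region's frame-to-frame displacement matches the face's."""
    for (fx0, fy0), (fx1, fy1), (rx0, ry0), (rx1, ry1) in zip(
            face_track, face_track[1:], region_track, region_track[1:]):
        dfx, dfy = fx1 - fx0, fy1 - fy0   # face displacement this step
        drx, dry = rx1 - rx0, ry1 - ry0   # region displacement this step
        if abs(dfx - drx) > tolerance or abs(dfy - dry) > tolerance:
            return False
    return True

def collect_extended_parts(face_track, region_tracks):
    """Regions that move together with the face are augmented parts."""
    return [name for name, track in region_tracks.items()
            if moves_with(face_track, track)]
```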
  • For example, in the video stream the daughter is a dynamic object relative to the static items at home, and her torso, clothes, hair, etc. all move together with her.
  • After the daughter is identified according to her facial feature information in the video stream, some frames or consecutive frames of the video stream are examined: the contours of the parts that change together with the contours of the daughter's face, the relative positions of those contours, and their colours are collected, and the other line contours that can be enclosed with the face contour in one closed contour are taken as augmented parts. Whether the position of a part changes relative to the positions of other items determines whether the daughter is in motion: if it changes, she is moving; otherwise she is not. For example, if the contour features of the face in some or consecutive frames of the camera unit's video stream change relative to other items, the daughter is determined to be in motion according to the change of the face contour's position; the contours of other parts that can be wrapped with the face contour in one closed contour are collected, and parts such as the legs are determined according to the position of those contours relative to the face.
  • The target information and the target feature information are stored in a database in a mapping relationship, and the target feature information is determined from the target information by querying the database. The target information and the target feature information are stored in the database in one-to-one correspondence, so the target feature information can be queried through the target information. The database can be a local database or a cloud database connected to the robot. With a local database, the target feature information can be determined directly on the local machine once the target information is obtained; with a cloud database, the robot sends the target information to the cloud, and the cloud determines the target feature information corresponding to the target information and returns it to the robot.
• For example, when storing, the mother stores the names the family commonly use for Xiaohong together with Xiaohong's facial feature information, as well as the collected extended feature information. Table 1 shows the storage relationship between the target information and the target feature information in the database.
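As a minimal sketch of this mapping (the names, feature values, and cloud stub are illustrative, not the patent's actual Table 1 contents), a local-first lookup with a cloud fallback could look like:

```python
# Hypothetical one-to-one mapping of target information to target feature
# information, standing in for the database of Table 1.
LOCAL_DB = {
    "Xiaohong": {"facial": "face-contour-v1", "extended": "pink dress"},
    "computer": {"facial": None, "extended": "screen and keyboard outline"},
}

def lookup_features(target_info, local_db=LOCAL_DB, cloud_query=None):
    """Resolve target feature information from target information.

    Tries the local database first; if the record is absent and a cloud
    connection exists, the robot sends the target information to the cloud
    (modelled here as a callable) and receives the feature record back.
    """
    record = local_db.get(target_info)
    if record is not None:
        return record
    if cloud_query is not None:
        return cloud_query(target_info)
    return None
```

The cloud side is stubbed as a plain callable because the patent does not specify the transport between robot and cloud.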
• In the step of controlling the walking device to keep the local machine within the preset distance range of the target object after the target object is captured, the local distance sensor acquires, as the walking device runs, the distance data between the local machine and the target object. When the distance data exceeds the preset distance range, the walking device is controlled to start walking; otherwise the walking device is controlled to stop and the movement is paused.
• After the target object is captured, the distance sensor stays in the measurement state while the distance between the robot and the target object is maintained, continuously measuring the distance between the robot itself and the target object. As the target object moves, when the measured distance exceeds the preset range, the robot automatically controls the walking device to start walking and follows; when the distance measured by the distance sensor is within the preset range, the robot automatically controls the walking device to stop, and the robot pauses.
• For example, while the robot's walking device keeps its distance from Xiaohong, the distance sensor stays in the measurement state and continuously measures the distance between the robot itself and Xiaohong.
• When that distance exceeds the preset range, the robot automatically controls the walking device to start walking and follows;
• when the distance measured by the distance sensor is within the preset range, the robot automatically controls the walking device to stop so that the robot pauses.
• Starting and stopping the walking device in this way makes the robot more intelligent.
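The start/stop rule above reduces to a simple threshold check. The function and the numeric bounds below are an illustrative sketch; the patent does not give concrete distance values.

```python
def walking_command(distance, min_dist=0.5, max_dist=2.0):
    """Map a distance reading to a walking-device command.

    When the measured distance leaves the preset range [min_dist, max_dist]
    (the target moved away, or the robot got too close), the walking device
    starts; while the reading stays inside the range, the robot pauses.
    Distances are in arbitrary units; the bounds are assumed tuning values.
    """
    if distance > max_dist or distance < min_dist:
        return "walk"
    return "stop"
```

A real controller would poll the distance sensor in a loop and feed each reading through this decision before driving the motors.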
• In an embodiment, the target information is a name or an indicator of the target object.
• In this step, the target object information carried in the instruction is parsed; it is the name or indicator of the target object, such as a person's name, "computer", and the like.
• For example, the instruction issued by the above-mentioned mother as the calling party at the terminal is "find computer", which contains the character information "find computer". Step S200 parses out "computer" in "find computer" and uses it as the indicator of the target object, that is, the target information. Likewise, if the daughter's name is Xiaohong and the instruction issued by the mother is "look for Xiaohong", which contains the character information "look for Xiaohong", step S200 parses out "Xiaohong" in "look for Xiaohong" and uses it as the name of the daughter as target object, that is, the target information.
• In addition, the indicator may also be information about the target object stored on the calling party terminal: when the information of the target object is triggered at the calling party terminal, the terminal generates an indicator corresponding to the target object and sends it to the robot.
• For example, the calling party terminal stores Xiaohong's name and an image representing Xiaohong. The mother triggers the daughter's name or the image representing Xiaohong on the terminal, and the calling party terminal generates an indicator for finding Xiaohong and sends the indicator to the robot.
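Extracting the target information from a seek instruction such as "find computer" or "look for Xiaohong" can be sketched as a prefix match. The prefix list and function name are illustrative assumptions; the patent does not specify the parsing method.

```python
def parse_target(instruction, known_prefixes=("find ", "look for ")):
    """Extract the target indicator (name or object word) from a seek
    instruction.  Matching is case-insensitive on the command prefix,
    while the extracted target keeps its original casing.
    """
    text = instruction.strip().lower()
    for prefix in known_prefixes:
        if text.startswith(prefix):
            # Slice the original string so names like "Xiaohong" keep case.
            return instruction.strip()[len(prefix):]
    return None  # not a seek instruction this parser recognises
```

A production system would more likely use speech-to-text plus intent classification; this only mirrors the character-level parsing the example describes.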
• In an embodiment, the target feature information is facial feature information of the target object.
• The target feature information of the target object is entered in advance. Facial feature changes best express a person's mood or state at the time,
• so the facial features are preferably used as the target feature information. This is convenient for the parent during a subsequent video call: through facial expressions, whether in the live call or in recorded videos and/or photos, the parent can first observe the emotions of the children and/or other family members at home.
• In an embodiment, the method further includes the following step: after detecting that the extended feature information of the target object has changed, re-collecting the extended feature information of the augmented part.
• The robot saves the extended feature information of the target object until that information changes. Once the robot detects that the stored extended feature information of the target object has changed, it re-acquires the new extended feature information, so that when the target object
• needs to be captured again, its target feature information can be quickly located and the target object captured.
• For example, the extended feature information stored in the database describes a pink dress, but Xiaohong has changed into a white dress.
• The robot determines through the target feature information that the object wearing the white dress is Xiaohong, finds that the clothing extended feature information has changed, and re-acquires the extended feature information of Xiaohong's body and the white dress.
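The re-collection step can be sketched as a comparison between the stored record and the currently observed features; the dictionary representation and key names are illustrative assumptions.

```python
def refresh_extended_features(stored, observed):
    """Compare stored extended feature information against what is currently
    observed; when any feature differs, re-collect (here: overwrite) it.

    Returns (changed, updated_record) so the caller can persist the update.
    """
    changed = {k: v for k, v in observed.items() if stored.get(k) != v}
    if not changed:
        return False, stored
    updated = dict(stored)
    updated.update(changed)  # new extended feature information replaces stale
    return True, updated
```

For the pink-dress/white-dress example, only the clothing entry changes, so only that key is refreshed while the rest of the record is kept.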
• In an embodiment, the distance sensor is an ultrasonic sensor, an infrared sensor, or a binocular ranging imaging device comprising the imaging unit.
• The distance sensor involved in step S400 is an ultrasonic sensor, an infrared sensor, or a binocular ranging imaging device comprising the imaging unit. The binocular ranging imaging device is convenient to use and can make an initial determination of the distance between the robot and the target object.
• The ultrasonic sensor has a small error in long-distance ranging and works well there;
• the infrared sensor has a small error in short-distance ranging and works well there.
• Combining them with each other optimizes the robot's ranging error across distances.
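One simple way to combine the two sensors, consistent with the long-range/short-range trade-off just described, is to hand off between readings around a crossover distance. This is only an illustrative sketch: the crossover value, the hand-off band, and the averaging rule are assumptions, not anything the patent specifies.

```python
def fused_distance(ultrasonic, infrared, crossover=1.0):
    """Combine ultrasonic and infrared range readings (same units).

    Prefers the infrared reading at short range and the ultrasonic reading
    at long range; near the assumed crossover distance, the two readings
    are averaged to smooth the hand-off between sensors.
    """
    estimate = (ultrasonic + infrared) / 2.0  # rough combined estimate
    if estimate < crossover * 0.8:
        return infrared      # short range: infrared error is smaller
    if estimate > crossover * 1.2:
        return ultrasonic    # long range: ultrasonic error is smaller
    return estimate          # hand-off band: blend the two
```

A real system would weight by each sensor's noise model rather than switch hard; the hard switch just makes the trade-off explicit.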
• In an embodiment, the extended feature information includes one or any combination of the following features: a torso part feature, a clothing part feature, a facial contour feature, hair contour feature information, or audio feature information.
• In an embodiment, the local machine further includes an audio and/or infrared positioning unit. When capturing the target object, the local machine turns on the audio and/or infrared positioning unit to acquire the position of the target and thereby determine the starting direction of the walking device.
• During capture of the target object, the position of the target is acquired by turning on the audio and/or infrared positioning unit, and from it the starting direction of the robot's walking device is determined.
• For example, after the robot determines that the target object is Xiaohong, and Xiaohong laughs directly in front of the robot, the robot acquires Xiaohong's audio through the audio positioning unit and locates her position as directly ahead; the robot then directly activates the walking device and moves forward. If, by radiating infrared light from the infrared positioning unit and sensing the light reflected from the surrounding scenery, the robot determines that Xiaohong's position is to its right front, the robot starts the walking device and moves to the right front to find her.
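Once the positioning unit yields a bearing to the target, choosing the starting direction is a matter of mapping that bearing to a drive command. The bearing convention, the 22.5-degree "straight ahead" band, and the command strings below are illustrative assumptions.

```python
def starting_direction(bearing_degrees):
    """Map a located bearing to a coarse walking-device start command.

    Convention (assumed): 0 degrees = straight ahead of the robot,
    positive = clockwise toward the robot's right.
    """
    b = bearing_degrees % 360
    if b > 180:
        b -= 360  # normalise to (-180, 180]
    if -22.5 <= b <= 22.5:
        return "forward"      # e.g. Xiaohong heard directly in front
    return "turn right" if b > 0 else "turn left"
```

The audio or infrared localisation itself (estimating the bearing from microphone arrays or reflected infrared) is outside this sketch; only the decision it feeds is shown.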
• In an embodiment, the local machine measures the distance between itself and an obstacle through the distance sensor and controls the walking device to bypass and/or move away from the obstacle.
• The target object continues to be captured after the obstacle is bypassed and/or left behind.
• For example, in the process of searching for Xiaohong, the robot will inevitably encounter obstacles such as stools and walls in the home. The robot measures the distance between itself and these obstacles through the distance sensor while tracking Xiaohong's orientation; the walking device is then controlled to bypass and/or move away from the stool or wall, and the robot continues to capture Xiaohong.
• In an embodiment, the local machine further includes a voice reminding unit. When the local machine moves to within the distance range from the target object, the voice reminding unit is activated and a voice reminder is issued.
• To ensure that the target object receives the message sent by the parent in time when the calling party initiates the video call, the robot activates the voice reminding unit and issues a voice reminder once it has captured the target object and moved to within the preset distance from it.
• For example, when the robot finds Xiaohong and moves to within the distance range from her, it issues a voice reminder to Xiaohong, such as repeating: "Mom is calling, please answer the call quickly."
  • the method further includes the following steps:
• The robot continuously collects video of the child at home through the camera unit.
• For example, the robot does not turn off the camera unit, but continuously collects videos of Xiaohong playing at home, learning, and the like.
  • the robot sends the video to a terminal connected thereto, and sends a text reminder and/or a voice reminder to the terminal.
• After one video capture is completed, the robot sends the collected video to the terminals and/or cloud of the family members connected to it, sends a text reminder and/or a voice reminder to the terminals, and then continues to collect the next video.
• For example, after the robot records a video of Xiaohong playing at home, the video is sent to the family members' terminals, such as a mobile phone, a computer, an iPad, and the like, and/or to a cloud connected to the robot. After the video is transmitted successfully,
• a text reminder and/or voice reminder such as "there is a new video of Xiaohong playing" is sent to the family members' terminals. If the robot is not connected to the cloud, the video is sent only to the terminals; if it can send to both the cloud and the terminals but all terminals
• are off, the video is sent only to the cloud, and a reminder message is sent when any terminal is turned on.
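The delivery rule just described (terminals when online, cloud as a fallback or additional copy) can be sketched as follows; the data shapes and return value are illustrative, and actual transmission is stubbed out.

```python
def dispatch_video(video, terminals, cloud=None):
    """Decide where a recorded video is delivered.

    terminals: {terminal_name: is_online}.  The video goes to every online
    terminal, plus the cloud when connected; if all terminals are off, it
    goes only to the cloud (offline terminals would be reminded later).
    Returns the list of delivery targets; `video` itself is not transmitted
    here, since the transport is outside this sketch.
    """
    delivered = [name for name, online in terminals.items() if online]
    if not delivered and cloud is not None:
        delivered = ["cloud"]          # all terminals off: cloud only
    elif cloud is not None:
        delivered.append("cloud")      # cloud keeps a copy as well
    return delivered
```

When neither the cloud nor any terminal is reachable, the function returns an empty list, which a caller would treat as "retry later".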
• In an embodiment, when the robot has captured the target object, according to a change of the target object's facial features and/or audio features and/or an interaction instruction issued by the target object, the local machine starts a voice interaction unit and/or initiates a video call to a mobile terminal connected to the local machine.
• The robot monitors changes in the target object's facial features through the camera unit while collecting the target object. If the child is crying, the robot activates its human-computer interaction unit to cheer the child up. According to changes in the target object's audio features, for example if the robot determines through the audio features that the child is in a temper, the robot activates its human-computer interaction unit to comfort the child. The robot also responds to instructions issued by the target object: for example, if the child asks the robot how to say "flower" in English, the robot answers the child's question, saying that it is "flower"; and if the child asks the robot to call Dad, the robot sends a video call request to the father's mobile terminal.
• For example, the robot determines from the changes in Xiaohong's facial features and audio features that she is crying, and starts the human-computer interaction unit to tell Xiaohong jokes or stories to make her happy. If Xiaohong instructs the robot to play a song, the robot sings for her. If Xiaohong says to the robot that she wants to learn Tang poetry, the robot determines Xiaohong's stage of intellectual development from the questions it usually asks her, then recites a Tang poem suitable for that stage and explains it to her.
• In an embodiment, the camera unit further includes a photographing function, so that the target object is photographed according to a change of its facial features and/or audio features and/or an interactive instruction issued by the target object.
• The camera unit of the robot further includes a photographing function.
• While the robot collects video of the target object, the facial features of the target object may change: if the target object smiles brightly, the camera unit captures the target object's state at that moment, and likewise if the target object has been quietly talking to someone, the camera unit captures that state as well.
• And if the target object tells the robot to take a picture of "me and the dog", the robot activates the photographing function of the camera unit according to the target object's command and takes a picture of the target object and the dog.
  • the method further includes the following steps:
  • S700 Receive an interaction instruction of the target object.
• The target feature information of the family members at home can be stored in the robot's local database and/or a cloud database connected to the robot, so that any member stored in the database can issue
• an interactive instruction to the robot; the robot first receives the interactive instruction issued by the current target object.
• For example, the members of Xiaohong's family include Grandpa, Grandma, Dad, Mom, and Xiaohong herself, and the database stores the target feature information, that is, the facial feature information, of all family members.
• Suppose the family members currently at home are Grandpa, Grandma, and Xiaohong, and the current target object has been identified as Xiaohong: when several people issue interactive commands to the robot at the same time, only the interactive command issued by Xiaohong is accepted.
• S800 Parse the interactive instruction to extract the indicator corresponding to a functional unit of the local machine.
• After the robot acquires the interactive instruction of the target object, it parses the information contained in the instruction and extracts the indicator corresponding to a functional unit of the local machine, so as to start that functional unit.
• For example, the robot receives an interactive instruction from Xiaohong;
• the interactive instruction is "tell me the story of the little duck";
• the robot parses "the story of the little duck" and "tell" from the instruction, converts "the story of the little duck" into a search for "the story of the little duck" in the database or on the network and extracts it, and converts "tell" into the indicator that starts the speech unit.
  • S900 Start a functional unit corresponding to the indicator.
• The interactive instruction issued by the target object contains a functional indication that can achieve the target object's purpose. According to the indicator parsed in step S800, the functional unit that realizes that purpose is started, completing the instruction issued by the target object.
• For example, for the interactive instruction issued by Xiaohong, "tell me the story of the little duck", after the parsing of step S800 the robot searches for "the story of the little duck" through the database and/or the network, extracts it, and starts
• the voice function to tell Xiaohong the story of the little duck.
• The database can be a local database or a cloud database. When searching, the database and the network can be searched at the same time, or only the local database can be searched when there is no network connection.
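The local-first search with an optional network fallback can be sketched as follows; the data shapes are illustrative, and the network is stubbed as a plain mapping since the patent does not specify the lookup protocol.

```python
def find_story(title, local_db, network=None):
    """Look a requested item (e.g. a story) up in the local database first;
    query the network only when the title is missing locally and a
    connection exists.  Returns the content, or None when unavailable.
    """
    if title in local_db:
        return local_db[title]       # local hit: no network needed
    if network is not None:
        return network.get(title)    # fall back to the network search
    return None                      # offline and not stored locally
```

Searching local and network sources concurrently, as the text also allows, would simply issue both queries and take whichever answers first.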
• In an embodiment, the interactive instruction is a voice instruction issued by the target object and/or a press of a button on the local machine corresponding to a functional unit.
• The robot has a sensor for receiving voice, and physical function buttons for human-computer interaction are provided on the robot. If the robot is provided with a touch screen, the function buttons can also be virtual touch buttons.
• Corresponding to the above method, the present invention also provides a robot video call control apparatus, including the following modules:
• S10, a video module, configured to establish a video call with the calling party and transmit the video stream obtained by the local camera unit to the calling party;
• The information stored in the robot records the relationship between the robot and the family members and establishes a connection with the communication terminals of the family members.
• The calling party is a family member, for example any of the family members' mobile terminals such as a mobile phone, a computer, an iPad, and the like;
• the family members are set in advance.
• The video module S10 directly establishes a video call with a family member's communication terminal, or, upon receiving that family member's instruction, the video module S10 establishes a video call with the family member.
• Upon the video call request, the local machine transmits the video stream it acquires to the calling party that initiated the call. The video stream transmitted by the robot is displayed on the calling party's mobile terminal on which the robot-control application is installed, or
• the mobile application opened on the mobile terminal displays the video call in real time.
• S20, an analysis module, configured to receive a seek instruction initiated by the calling party, parse the target information contained in the seek instruction, and determine the target feature information of the corresponding target object according to the target information;
• After the video module S10 establishes the video call with the calling party, the robot accepts the call and sends the video stream back to the calling party, who can directly observe the situation at home through the video. If the calling party cannot see in the video the person or object they want to see,
• the calling party can issue a seek instruction on the mobile terminal at their side.
• The seek instruction contains information about the target object, and that information is stored in the robot or in the cloud connected to the robot.
• The robot's analysis module S20 parses, on the local machine, the target object information contained in the seek instruction, and then determines the target feature information of the corresponding target object according to that information; in the subsequent process the robot
• finds and determines the target object on this basis.
• For example, the robot's analysis module S20 receives the seek instruction "find daughter", parses the instruction on the local machine, that is, parses and extracts the information "daughter", and sends "daughter" to the database storing
• the target information and the corresponding feature information. The daughter's feature information, stored against the "daughter" target information, is her facial features, that is, the contours and positions of the whole face and the facial features; in the subsequent process the robot finds and determines the daughter on this basis.
• In another embodiment of this step, after the robot's analysis module S20 receives the seek instruction, the instruction is sent to the cloud, and the cloud parses the target information contained in it and sends the target information back to the robot.
• The robot then determines the target feature information of the corresponding target object according to the target information, and finds and determines the target object on this basis in the subsequent process.
• For example, the above-mentioned mother initiates the seek instruction "find daughter" to the robot at the mobile terminal; the robot's analysis module S20 receives the instruction and sends it to the cloud.
• The cloud parses the information in the seek instruction, that is, parses and extracts the information "daughter", and sends it to the robot.
• The robot receives the parsed information and sends "daughter" to
• the database storing the target information and the corresponding feature information (the specific storage relationship between target feature information and target information is detailed later). The daughter's feature information is determined from the stored target information, and in the subsequent process the robot finds and determines the daughter on this basis.
• In yet another embodiment of this step, after the robot's analysis module S20 receives the seek instruction, the instruction is sent to the cloud; the cloud parses the target information contained in it and passes the target information to
• the cloud database storing the target information and the target feature information. The target feature information of the corresponding target object is determined in the cloud according to the target information, the cloud transmits the target feature information to the robot, and in the subsequent process the robot
• finds and determines the target object on this basis.
• For example, the above-mentioned mother initiates the seek instruction "find daughter" to the robot at the mobile terminal; the robot's analysis module S20 receives the instruction and sends it to the cloud. The cloud parses the information in the seek instruction, that is, parses and extracts the information "daughter", determines the daughter's feature information from it, and sends the feature information to the robot. The robot thus directly receives the daughter's feature information, and on this basis finds and determines the daughter.
• S30, a capture module, configured to: when the target object is not captured, start the walking device to move the local machine, perform image recognition on the video stream of the camera unit during the movement, and determine an image containing the target feature information, so as to capture the target object;
• After the analysis module S20 determines the target object and its feature information, if image recognition determines that no image in the acquired video stream contains the target feature information, the robot starts its own walking device to move. During
• the movement, the robot applies image recognition to the acquired video stream of the camera unit, identifies the images in the video, determines whether an image contains the target feature information, and thereby captures the target object. The case of the target object not being captured
• includes the following situations: 1. the robot does not recognize, in the images of the video stream, the target feature information or the extended feature information corresponding to the target object; after locating the target object by audio and infrared, the measured distance to the target object is greater than the preset distance range between the robot and the target object, and the walking device is started; 2. the robot does not recognize, in the images of the video stream, the target feature information or the extended feature information corresponding to the target object; after locating the target object by audio and infrared, the measured distance to the target object is smaller than the preset distance range, and the robot starts the walking device; 3. while the robot has captured the target feature of the target object and its distance from the target object remains within the preset range, if the contours in the video images of the camera unit become unclear, the robot temporarily does not start
• the walking device;
• 4. while the robot has captured the target feature of the target object and its distance from the target object remains within the preset range, if the target object suddenly moves away,
• the walking device is started; if the target object has moved away but, before the robot locates it, the robot finds through distance measurement that the target object is gradually approaching, the walking device is not started.
• The robot's walking device works as follows: the signal from the camera unit is converted into an electrical signal for a controller that controls and is electrically connected to the walking device; the controller passes the electrical signal to the driving device that actuates the walking device, and the driving device starts
• the walking device, realizing the movement of the robot. The driving device may be a motor, and the walking device may be wheels, crawler tracks, or the like. For image recognition, a picture is first stored in the robot and used as a model. The robot's processor
• first preprocesses the model and extracts the contours as lines, the angles between lines, the relative positions of lines, the colours covered within the contours, and so on.
• After a video image is captured, in order to determine whether the target object to be captured is present in the current image, the robot's processor preprocesses each frame of the image in turn, extracts the contours as lines, the angles between lines, the relative positions of lines, the colours covered within the contours, and so on, and compares and fits them against the image models in the database. When the degree of fit reaches a set value, the target object to be captured is considered to be present in the video image.
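The model-fitting decision just described can be sketched by encoding each extracted attribute (a contour line, an angle, a relative position, a colour) as a token and scoring the overlap with the stored model. The token encoding and the 0.8 threshold are illustrative assumptions; the patent only says the fit must reach "a set value".

```python
def fitness(model_features, frame_features):
    """Fraction of the stored model's features (contour lines, angles,
    relative positions, colours) found among the frame's extracted features.
    """
    if not model_features:
        return 0.0
    matched = sum(1 for f in model_features if f in frame_features)
    return matched / len(model_features)

def contains_target(model_features, frame_features, threshold=0.8):
    """A frame is considered to contain the target object when the degree
    of fit between the stored model and the frame reaches the set value.
    """
    return fitness(model_features, set(frame_features)) >= threshold
```

Real systems would use robust descriptors rather than exact token matches, but the comparison-against-a-threshold structure is the same.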
• For example, the analysis module S20 determines the daughter's feature information, and the robot searches for the daughter at home on that basis. The robot first, at the position where the seek instruction was received,
• locates the daughter's position by audio and/or infrared. If the daughter is within the preset distance range but the robot cannot capture her target feature in partial or consecutive frame images of the current video stream, the walking device is started to cooperate with the camera
• unit in capturing the daughter's target feature information. Otherwise the walking device is likewise started, and image recognition is performed on the camera unit's video stream to capture the daughter's target features, such as the contour features of her face.
• S40, a maintenance module, configured to control the walking device to keep the local machine within the preset distance range of the target object after the target object is captured.
• The distance range between the robot and the target object can be preset. After the robot captures the target object, the distance between them is first measured by the measuring device installed on the robot. If the robot is far from the target object and not within the preset distance range, the robot moves by means of the walking device until it is within the preset distance range of the target object, and while the target object moves, the robot always maintains the preset distance range from it through the maintenance module S40.
• For example, the robot finds the daughter according to her feature information through the capture module S30 and measures the distance between itself and the daughter with its own measuring device. If the robot is far from the daughter and not within the preset distance range, it moves by means of the walking device to within the preset distance range of the daughter, and while the daughter walks about, the robot always maintains the preset distance range from her through the maintenance module S40.
• In an embodiment, the capture module S30 further includes an acquiring unit S31, configured to: after the target object is captured, acquire extended feature information of the target object's augmented parts other than its target feature information, and, when the target feature information cannot be captured, locate the augmented part of the target object according to the extended feature information.
• The acquiring unit S31 collects the extended feature information of the target object's augmented parts other than the target feature information.
• During the calling party's video call with the target object through the robot, due to movement of the target object itself and/or movement of the robot, the robot's camera unit may be unable to capture the target feature information of the target object.
• When, in partial or consecutive frame pictures of the current video stream, the robot cannot clearly identify the contour lines corresponding to the target features, the angles between lines, the relative positions of lines, the colours covered within the contours, and so on, the robot
• can quickly find and focus on the target feature information of the target object through the extended feature information of the augmented parts other than the target feature information. For example, after the target object has been captured, its target feature information may become unrecognizable because the target object suddenly moves away.
• In order to capture the target object again, the robot simultaneously identifies the target feature information and the extended feature information of the target object in the current video stream. If an augmented part of the target object is identified through the extended feature information, the target object is captured according to that augmented part, and the target features of the target object are then further determined on that basis.
• For example, through the acquiring unit S31 the robot collects the daughter's extended feature information other than her facial feature information, such as the colour and style of her clothes, trousers, and shoes,
• the colour and shape of her hair, the colour, shape, and style of her hat, and the contours of her body, arms, legs, and so on. During the mother's video call with the daughter through the robot, if the daughter moves while sitting, the robot may
• be unable to locate and capture the daughter's facial feature information; the robot can then locate the target and capture the target object through the extended feature information of the augmented parts. For instance, the robot decolorizes some frames or consecutive frames, processes them as black-and-white pictures, extracts the contour features of the body outline, and uses them as the extended feature information for determining the target object during capture.
•   In a preferred embodiment, the acquiring unit S31 further includes a positioning unit S311: after the augmented part of the target object is located according to the extended feature information, the walking device is started and the robot continues to search around the augmented part until the target feature information is found; the target feature information is then located and the target object is captured.
•   Specifically, when the robot cannot capture the target feature information of the target object, it first compares the feature information currently captured by the capture module S30 with the extended feature information previously collected by the acquisition unit S31, and the positioning unit S311 locates the augmented part that the capture module S30 currently sees. The robot then starts the walking device and continues to search around the augmented part until the camera unit captures and locates the target feature information, after which it continues to capture the target object in the video call.
•   For example, the capture module S30 can capture the daughter's facial feature information while she is sitting, as described above; after she stands up, it may only capture her body. Suppose the daughter is wearing a pink dress in a round-collar, sleeveless tutu style. From the previously collected extended feature information — her body contour and the colour and style of the skirt — the robot determines which part is currently captured, and from that part (the collar and the form of the shoulders) determines how to rotate the camera unit: having concluded from the body feature information captured through the camera unit that the captured part is the daughter's chest, it raises the camera unit to capture her facial features. If the daughter stands up and turns around so that her back faces the robot's camera unit, the camera unit captures the extended feature information of her torso — the skirt's colour and style and the contour of her back. The robot then starts the walking device to move around the torso, changing the angle of the camera unit to search for the target feature information; once the camera unit finds it, the robot adjusts its distance to the daughter, the positioning unit S311 locates the target feature information, and the capture module S30 continues to capture the target object.
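The fallback described above — relocating the target via an augmented part when the target features drop out of the frame — can be sketched as follows. This is a minimal illustrative model, not the disclosed implementation: features are represented as string labels, and the walk-around-and-adjust-camera behaviour is reduced to examining the next frame.

```python
def locate_target(frame_features, target_features, extended_features):
    """Classify what the current frame shows: a target feature (e.g. the
    facial features), only an extended feature (skirt colour, body or back
    contour), or nothing recognizable."""
    for f in frame_features:
        if f in target_features:
            return "target", f
    for f in frame_features:
        if f in extended_features:
            return "extended", f
    return "none", None

def capture(frames, target_features, extended_features):
    """Scan successive frames; on an extended-feature match the robot
    would circle the augmented part and adjust the camera angle, modelled
    here simply as moving on to the next frame."""
    for frame in frames:
        kind, feat = locate_target(frame, target_features, extended_features)
        if kind == "target":
            return "captured", feat
    return "searching", None
```

In the pink-dress example, a frame showing only `"pink_dress"` matches the extended set, steering the search until a later frame yields the facial features.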
•   If the robot has captured the daughter's target feature information and its distance to her remains within the preset range, the robot does not start the walking device merely because the video image contours are temporarily unclear. Once the image becomes clear again and the images in the video stream are recognized, if the recognized extended feature information comes from the augmented part of the daughter's back, the walking device is started to move around that part and capture her facial feature information.
•   Preferably, the extended feature information is collected from moving scene images in which the image portions move together with the image portion corresponding to the target feature information in the video stream.
•   The target object is a dynamic image in the video stream relative to the rest of the scene, so when extended information is collected, what is collected are the portions of the video stream that move together with the target feature information — that is, the extended features on the target object itself. For example, after the robot's camera unit captures the target object, with facial feature information serving as the target feature information, the other line contours in partial or continuous frames that change together with the facial contour, and that can be enclosed with the facial contour in one closed outline, serve as augmented parts. Once an augmented part is determined, its contours, the relative positions of the contours, and the colours the contours enclose are collected.
•   For example, the daughter in the video stream is a dynamic object relative to the static items in the home, and her torso, clothes, hair and so on all move with her. After the daughter is identified from her facial feature information in the video stream, the robot collects the contours in partial or continuous frames that change together with the contour of her face, the relative positions of those contours, and the colours they enclose. Line contours that can be enclosed with the facial contour in one closed outline are attributed to her; from the position of each such contour relative to the facial contour, the robot determines which part of the body it belongs to. From whether the facial contour's position changes relative to other items across partial or continuous frames — with the positions of the other parts following the face — it determines whether the daughter is in motion: if so, she is moving; otherwise she is not.
•   For instance, when the facial contour in partial or continuous frames of the camera unit's video stream changes position relative to other items, the daughter is determined to be in a moving state; the contours of other parts that can be enclosed with the facial contour in one closed outline are attributed to her, and a contour is identified as, say, a leg according to its position relative to the face.
•   In a preferred embodiment, the analysis module S20 further includes a query unit S21: the target information and the target feature information are stored in a database in a mapping relationship, and the database is queried to determine the target feature information according to the target information. Specifically, the target information and the target feature information are stored in the database in one-to-one correspondence, so the query unit S21 can look up the target feature information through the target information. The database may be local to the robot or a cloud database connected to the robot. With a local database, the target feature information can be determined locally as soon as the target information is obtained; with a cloud database, the robot sends the target information to the cloud, which determines the target feature information corresponding to the target information and returns it to the robot.
•   For example, with Xiaohong as the target object, the mother stores Xiaohong's common names (as well as the common names of other family members) together with Xiaohong's facial feature information, and also the collected extended feature information, as shown in Table 2. Table 2 shows the storage relationship between the target information and the target feature information in the database.
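The one-to-one mapping of Table 2 can be sketched as a simple lookup. The record contents and field names below are illustrative assumptions mirroring the Xiaohong example; a cloud database would be queried over the network, while a local one is a direct lookup as shown.

```python
# Hypothetical database: target information (names/aliases) -> feature record.
FEATURE_DB = {
    "Xiaohong": {"target": "face_features_xiaohong",
                 "extended": ["pink_dress", "body_contour"]},
    "daughter": {"target": "face_features_xiaohong",
                 "extended": ["pink_dress", "body_contour"]},
}

def query_target_features(target_info, db=FEATURE_DB):
    """Resolve target information to (target feature info, extended feature
    info), or None when the homing instruction names an unknown target."""
    record = db.get(target_info)
    if record is None:
        return None
    return record["target"], record["extended"]
```

Storing several common names ("Xiaohong", "daughter") against the same feature record lets either form of the homing instruction resolve to the same person.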
•   In a preferred embodiment, the maintaining module S40 further includes a measuring unit S41: in the step of controlling the walking device to keep the local machine within a preset distance range of the target object after the target object is captured, the distance data between the local machine and the target object is obtained from the local distance sensor while the walking device operates. When the distance data exceeds the preset range, the walking device is controlled to start and move; otherwise the walking device is controlled to stop, pausing the movement. Specifically, while the distance between the robot and the target object is being maintained, the distance sensor of the measuring unit S41 stays in the measuring state, continuously measuring the distance between the robot itself and the target object. If the measured distance is outside the preset range, the robot automatically controls the walking device to start and move; if it is within the preset range, the robot automatically controls the walking device to stop so that the robot pauses.
•   For example, after the robot captures Xiaohong, the measuring unit S41 continuously measures the distance between the robot and Xiaohong. If the distance sensor shows that the distance exceeds the preset range, the robot automatically controls the walking device to start walking and move; if the distance is within the preset range, the robot automatically controls the walking device to stop so that the robot pauses. This makes the robot more intelligent.
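The distance-keeping rule above reduces to a small decision function per range reading. The numeric bounds below are assumed values for illustration; the disclosure only speaks of a "preset distance range".

```python
def walking_command(distance_m, preset_range=(1.0, 2.0)):
    """Return the walking-device command for one distance-sensor reading
    (metres). Bounds are illustrative assumptions."""
    lo, hi = preset_range
    if distance_m > hi:
        return "start_walking"   # too far: approach the target object
    if distance_m < lo:
        return "back_off"        # too close: widen the gap
    return "stop"                # within the preset range: pause movement
```

Calling this in a loop on each sensor reading reproduces the behaviour of the measuring unit S41: the robot walks only while the reading is outside the range.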
•   In a preferred embodiment, the target information is a name or an indicator of the target object. After the analysis module S20 receives the caller's homing instruction, it parses the target information carried in the instruction; the target information is the name or an indicator of the target object, such as a person's name or "computer". For example, from the character information "find computer" the analysis module S20 parses out "computer" and uses it as the indicator of the target object, i.e. the target information. If the daughter's name is Xiaohong, the mother's homing instruction is "Look for Xiaohong"; from this character information the analysis module S20 parses out "Xiaohong" and uses it as the indicator of the target object — the daughter's name, i.e. the target information. In addition, the indicator may also be generated from information about the target object stored on the calling party's terminal: the caller triggers the target object's information on the terminal, the terminal generates an indicator corresponding to the target object, and issues the indicator of the target object to the robot. For example, the mother, as the caller, stores information related to her daughter Xiaohong on her terminal, such as the name "Xiaohong" or an image representing Xiaohong; when the mother taps the daughter's name or image on the terminal, the terminal generates an indicator for finding Xiaohong and sends it to the robot.
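Extracting the indicator from the instruction's character information can be sketched as below. This is a deliberately simple token match over an assumed set of stored names; a deployed system would use speech recognition or NLP, which the disclosure does not detail.

```python
# Assumed set of names/indicators stored on the terminal or in the database.
KNOWN_INDICATORS = {"Xiaohong", "computer", "daughter"}

def parse_homing_instruction(text):
    """Scan a homing instruction such as 'Look for Xiaohong' for a stored
    name or indicator; the match becomes the target information."""
    for token in text.replace(",", " ").replace("!", " ").split():
        if token in KNOWN_INDICATORS:
            return token
    return None
```

The resolved token would then be fed to the database query of the query unit S21 to obtain the target feature information.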
•   In a preferred embodiment, the target feature information is facial feature information of the target object, entered in advance. Since changes in facial features best express a person's mood and state at the moment, facial features are preferably used as the target feature information. This allows the parent, during a subsequent video call or in captured videos and/or photos, to read the emotions of the children and/or other family members at home directly from their facial expressions.
•   In a preferred embodiment, the collecting unit S31 further includes a monitoring unit S312, which re-collects the extended feature information of the augmented parts after detecting that the target object's extended feature information has changed. The robot keeps the stored extended feature information of the target object until new extended feature information is collected: when the monitoring unit S312 detects that the stored extended feature information has changed, it re-acquires the new extended feature information, so that the next time the target object needs to be captured, its target feature information can be located quickly and the target object captured. For example, if the extended feature information stored in the database describes a pink dress but Xiaohong has changed into a white dress, the robot identifies the person wearing the white dress by capturing the target feature information; the monitoring unit S312 finds that the clothing extended feature information has changed and re-acquires the extended feature information of Xiaohong's body and the white dress.
•   In a preferred embodiment, the distance sensor is an ultrasonic sensor, an infrared sensor, or a binocular ranging imaging device including the imaging unit. The binocular ranging imaging device is convenient to use and can make an initial estimate of the distance between the robot and the target object; the ultrasonic sensor has a small error and good performance at long range; the infrared sensor likewise has a small ranging error and good performance. Combining them with each other optimizes the robot's distance-measurement error.
•   In a preferred embodiment, the extended feature information collected by the collecting unit S31 includes one or any combination of the following features: torso part features, clothing part features, facial contour features, hair contour features, or audio features.
•   In a preferred embodiment, the capturing module S30 includes a positioning unit S311 with a local audio and/or infrared positioning function: during capture of the target object, the audio and/or infrared positioning unit is turned on to acquire the target's position, and thereby determine the initial walking direction of the walking device. For example, after the robot determines that the target object is Xiaohong, and Xiaohong laughs in front of the robot, the robot acquires Xiaohong's audio through the audio positioning unit and locates her position as directly ahead; the robot then starts the walking device and moves forward. Alternatively, the robot can use the infrared light of the infrared positioning unit, as reflected by the surrounding people and environment, to determine Xiaohong's location; if her position is to the front right of the robot, the robot starts the walking device and moves to the front right to find Xiaohong.
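Turning a localized bearing into an initial walking direction can be sketched as below. The time-difference-of-arrival estimate for a single microphone pair is an illustrative assumption (the disclosure does not specify how the audio positioning unit localizes sound), as are the spacing and threshold values.

```python
import math

def bearing_to_command(angle_deg):
    """Map a bearing (degrees; 0 = straight ahead, positive = robot's
    right) to the walking device's initial direction. The 15-degree
    dead band is an assumed value."""
    if abs(angle_deg) <= 15:
        return "forward"
    return "turn_right" if angle_deg > 0 else "turn_left"

def audio_bearing(arrival_times, mic_spacing=0.1, speed_of_sound=343.0):
    """Rough bearing from the arrival-time difference at two microphones
    (seconds). Clamped so asin stays in its domain."""
    tdoa = arrival_times[1] - arrival_times[0]
    s = max(-1.0, min(1.0, tdoa * speed_of_sound / mic_spacing))
    return math.degrees(math.asin(s))
```

With Xiaohong laughing directly ahead, both microphones hear her at the same time, the bearing is 0 degrees, and the robot moves forward.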
•   In a preferred embodiment, the measuring unit S41 is further configured such that, when an obstacle is encountered while capturing the target object, the local machine measures the distance between itself and the obstacle through the distance sensor and controls the walking device to bypass and/or move away from the obstacle, continuing to capture the target object afterwards. In the process of searching for Xiaohong, the robot will inevitably meet obstacles such as stools and walls in the home; it measures the distance between itself and the obstacle through the distance sensor of the measuring unit S41 and, keeping its heading toward the target, controls the walking device to bypass and/or move away from the stool or wall and continue to capture Xiaohong.
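Combining obstacle avoidance with the distance-keeping behaviour gives a single per-step decision, where avoiding a nearby obstacle takes priority over closing the distance. All numeric thresholds here are illustrative assumptions, not values from the disclosure.

```python
def plan_step(target_distance, obstacle_distance,
              preset_range=(1.0, 2.0), safe=0.5):
    """One homing decision: sidestep a close obstacle (stool, wall) first,
    otherwise regulate the distance to the target object."""
    if obstacle_distance is not None and obstacle_distance < safe:
        return "bypass_obstacle"   # detour while keeping the target bearing
    lo, hi = preset_range
    if target_distance > lo and target_distance > hi:
        return "approach"
    if target_distance < lo:
        return "back_off"
    return "hold"
```

Because the obstacle check comes first, the robot detours around the stool even while it is still far outside the preset range of Xiaohong.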
•   In a preferred embodiment, the apparatus further includes a voice module S50, as shown in FIG. 6, for starting the voice reminding unit and issuing a voice reminder when the local machine moves to within the preset distance range of the target object. To ensure that the target object receives the caller's message in time when the caller initiates a video call, the robot starts the voice reminding unit and issues a voice reminder once it has captured the target object and moved to within the preset distance. For example, when the robot finds Xiaohong and moves to within the preset distance range of her, it issues a voice reminder such as "Mom is calling, please answer the call", which may be repeated several times.
•   In a preferred embodiment, the video module S10 further includes a shooting unit S11: after the calling party hangs up the video call, the robot's camera unit continues to collect videos of the target object. That is, the robot continuously collects home videos of the child through the shooting unit S11 — for example, videos of Xiaohong playing and learning at home. It further includes a transmission unit S12, through which the robot sends the collected video to the terminals connected to it, together with a text reminder and/or a voice reminder. After the shooting unit S11 finishes capturing a video, the robot sends it through the transmission unit S12 to the terminals of the family members connected to it — such as a mobile phone, a computer or an iPad — and/or to the cloud, and sends a text reminder and/or a voice reminder to the terminals, for example "a video of Xiaohong playing"; it then continues to capture the next video. If the robot is not connected to the cloud, the video is sent only to the terminals; otherwise it is sent to both the cloud and the terminals. When all terminals are turned off, the video is sent only to the cloud, and a reminder message is sent as soon as any terminal is turned on.
•   In a preferred embodiment, the apparatus further includes an activation unit 60 configured such that, while the robot is collecting video of the target object, the local machine starts the voice interaction unit and/or initiates a video call to a mobile terminal connected to it according to changes in the target object's facial features and/or audio features and/or an interaction instruction issued by the target object. For example, the robot detects changes in the target object's facial features through the camera unit during collection: if the child is crying, the activation unit 60 starts the local human-computer interaction unit to cheer the child up. According to changes in the audio features — say the child is losing his temper, which the robot determines from the audio features — the activation unit 60 starts the human-computer interaction unit to comfort the child. The robot also responds to interaction instructions issued by the target object: if a child asks the machine how to say "flower" in English, the robot answers the child's question, "flower"; and if the target child asks the robot to call Dad, the activation unit 60 starts the video module S10 to issue a video call request to the father's mobile terminal. As another example, if the robot determines from changes in Xiaohong's facial features and audio features that she is crying, the activation unit 60 starts the human-computer interaction unit to tell her a story or a joke to cheer her up; if Xiaohong instructs the robot to play a song, the activation unit 60 starts the song function and sings to her; and if Xiaohong says she wants to learn Tang poetry, the robot determines her stage of intellectual development from the questions she usually asks and gives her Tang poems suited to that stage.
•   In a preferred embodiment, the shooting unit S11 further includes a photographing function, so that during collection of the target object's video it takes pictures of the target object according to changes in the target object's facial features and/or audio features and/or an interaction instruction issued by the target object. For example, while the robot is collecting the target object's video, if the target object's facial features change — say the target object smiles brightly — the camera unit captures the target object's state at that moment; likewise if the target object is quietly talking to someone. And if the target object instructs the robot to "take a picture of me and the dog", the robot starts the photographing function of the camera unit according to the instruction and takes a picture of the target object and the dog.
•   In a preferred embodiment, the transmission unit S12 further includes a receiving unit S13, configured to receive the interaction instructions of the target object. The target feature information of the family members at home can be stored in the robot's local database and/or the cloud database connected to the robot, so members stored in the database can send interaction instructions to the robot; the robot receives the instruction of the current target object first. For example, Xiaohong's family includes Grandpa, Grandma, Dad, Mom and Xiaohong herself, and the database stores the target feature information — that is, the facial feature information — of all family members. If the members currently at home are Grandpa, Grandma and Xiaohong, and the currently identified target object is Xiaohong, then when several people send interaction instructions to the robot at the same time, only the instruction sent by Xiaohong is accepted.
•   In a preferred embodiment, the apparatus further includes an analyzing unit S14, configured to parse the interaction information contained in the interaction instruction and extract the indicators corresponding to the local functional units. After the robot receives the target object's interaction instruction, it parses the information the instruction contains and extracts the indicators corresponding to the local functional units, so as to start those units. For example, from the instruction "tell me the story of the little duck", the robot parses out "little duck story" and "tell": "little duck story" is turned into a search of the database or network so that the story can be retrieved, and "tell" becomes the indicator that starts the speech unit. The interaction instruction issued by the target object contains functional indications that can achieve the target object's purpose; according to the indicators parsed by the analyzing unit S14, the functional units that realize that purpose are started and the target object's instruction is carried out. For the instruction above, after parsing by the analyzing unit S14 the robot searches the database and/or the network for "the story of the little duck", retrieves it, and starts the voice function to tell Xiaohong the story. The database may be a local database or a cloud database; when searching, the database and the network can be searched at the same time, or only the local database when there is no network connection.
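The parse-and-dispatch behaviour of the analyzing unit S14 can be sketched as below. The function-unit names and the keyword rules are illustrative assumptions; the disclosure does not specify how indicators are matched to units.

```python
# Hypothetical table mapping parsed indicators to function units.
FUNCTION_UNITS = {
    "story": lambda topic: f"voice_unit: tell '{topic}'",
    "song":  lambda topic: f"song_unit: sing '{topic}'",
}

def dispatch(instruction):
    """Parse an interaction instruction such as 'tell me the story of the
    little duck' and start the matching function unit."""
    text = instruction.lower()
    if "story" in text:
        # Everything after 'story of' stands in for the database/network search term.
        topic = text.split("story of")[-1].strip(" .") or "unknown"
        return FUNCTION_UNITS["story"](topic)
    if "song" in text or "sing" in text:
        return FUNCTION_UNITS["song"]("requested song")
    return "no matching function unit"
```

Here the "tell" half of the instruction selects the voice unit, and the "little duck story" half becomes the search term, mirroring the two indicators extracted in the example above.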
•   In a preferred embodiment, the interaction instruction is a voice instruction issued by the target object and/or a press of a button on the local machine corresponding to a functional unit. The robot has a sensor for receiving voice, and physical function buttons for human-machine interaction are arranged on the robot; if the robot is provided with a touch screen, the function buttons can also be virtual touch buttons.
•   The present invention also provides a terminal, including a processor for executing a program that performs the steps of the robot video call control method. For example, the robot establishes video calls with the mother's mobile terminals, such as a mobile phone, a computer or an iPad. The mother's terminal downloads an application that connects to and controls the robot, and the robot transmits the home situation acquired by its camera unit to the mother's mobile terminal. Since the mother wants to see the current state of her daughter Xiaohong at home, and the video stream at that moment does not contain Xiaohong's image, the mother initiates a homing instruction "find daughter" to the robot on the mobile terminal. The robot receives the "find daughter" instruction and parses it locally: the information "daughter" is parsed and extracted, and from it the daughter's feature information is determined locally — her facial features, i.e. the contour of the whole face and the contours of the facial features — and the robot searches for the daughter at home based on this feature information. The robot's camera unit rotates 360 degrees, and image recognition is performed on the camera unit's video stream to capture the daughter's target features. If the daughter is not captured, the robot starts its own walking device and moves; while moving, it acquires the video stream through the camera unit and uses image recognition technology to check whether the daughter's facial feature information is present in the current video images. Once the robot finds the daughter according to her feature information, it measures the distance between itself and the daughter through its own measuring device. If the robot is far from the daughter, outside the preset distance range, it moves through the walking device to within the preset distance of her; and as the daughter walks around, the robot always keeps the preset distance range from her.
•   The processor of this embodiment may further implement the other steps of the method in the foregoing embodiments, which have been described above and are not repeated here.
•   Fig. 9 shows a mobile robot capable of video calls (hereinafter collectively referred to as the device) that can implement the robot video call control according to the present invention. The device conventionally includes a processor 1010 and a computer program product or computer-readable medium in the form of a memory 1020. The memory 1020 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. The memory 1020 has a storage space 1030 for program code 1031 for executing any of the above method steps. For example, the storage space 1030 for program code may include respective program code 1031 for implementing the various steps of the above methods. The program code can be read from or written to one or more computer program products. These computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards or floppy disks. Such a computer program product is typically a portable or fixed storage unit as described with reference to FIG. The storage unit may have storage segments or storage spaces arranged similarly to the memory 1020 in FIG. The program code can, for example, be compressed in an appropriate form. Typically, the storage unit comprises program code 1031' for performing the steps of the method according to the invention, i.e. code that can be read by a processor such as 1010, and which, when executed by the device, causes the device to perform the steps of the methods described above.
•   The steps, measures, and solutions in the various operations, methods, and flows discussed in the present invention may be alternated, changed, combined, or deleted. Further, other steps, measures, and solutions in the various operations, methods, and flows discussed in the present invention may be alternated, changed, rearranged, decomposed, combined, or deleted. Further, the steps, measures, and solutions in the prior art having the various operations, methods, and flows disclosed in the present invention may also be alternated, changed, rearranged, decomposed, combined, or deleted.

Abstract

The robot video call control method, device and terminal provided by the present invention mainly relate to an intelligent robot capable of carrying out a video call under remote control. The method comprises the following steps: establishing a video call with a calling party and transmitting the video stream acquired by the local machine to the calling party; receiving a homing instruction initiated by the calling party, parsing the target object information contained in it, and on this basis determining the corresponding target feature information; when the target object is not captured, starting a walking device to move the local machine, and performing image recognition on the video stream during the movement to determine an image containing the target feature information so as to capture the target object; and after the target object is captured, controlling the walking device to keep the local machine within a preset distance range of the target object. In the present invention the walking device cooperates with a camera unit to capture the target object quickly through image recognition technology during movement, carries out the video call, and reduces children's loneliness through a human-computer interaction function.

Description

Robot video call control method, device and terminal

Technical Field
The present invention relates to the field of automatic control technologies, and in particular to a robot video call control method, device and terminal.
Background Art
Nowadays people use the Internet in many aspects of daily life and work, and a variety of Internet-related intelligent products have emerged. In particular, intelligent robots, as one kind of intelligent product, can replace or assist humans in some work and are used in all walks of life. At present, intelligent robots have gradually entered thousands of households to handle daily housework in place of people. Although such robots have simple automatic control and movement functions, they cannot meet modern needs. In particular, modern young parents are immersed in work all year round; their daily care of their children covers little more than food and daily routine, they cannot often accompany their children, and they miss the opportunity to accompany their children's growth and intellectual development. Although existing robots have entered thousands of households, they often merely act as substitutes for housework; some robots do provide a simple child-care companionship function in the family, but as children grow and their entertainment and learning needs develop, such robots gradually lose their meaning as companions. Existing robots therefore cannot yet achieve integrated intelligent processing such as remote video communication, movement and human-computer interaction, and their low degree of intelligence and narrow, non-adjustable intelligence range make them inconvenient in actual use.
Summary of the Invention
In order to solve the above problems, the present invention provides a robot video call control method and a corresponding apparatus.
Accordingly, another object of the present invention is to provide a terminal for running a program implemented according to the method of the preceding object.
To achieve the above objects, the present invention adopts the following technical solutions:
In a first aspect, a robot video call control method of the present invention includes the following steps: establishing a video call with a calling party and transmitting to the calling party the video stream obtained by the local camera unit; receiving a seek instruction initiated by the calling party, parsing the target information contained in the seek instruction, and determining the target feature information of the corresponding target object according to the target information; when the target object is not captured, starting the walking device to move the robot, performing image recognition on the video stream of the camera unit during the movement, and determining an image containing the target feature information so as to capture the target object; and, after the target object is captured, controlling the walking device so that the robot stays within a preset distance range of the target object.
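The first-aspect method can be sketched as a small control loop. This is a minimal illustration only; the class, method names and the preset range values below are hypothetical assumptions, not part of the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    """Hypothetical sketch of the first-aspect control flow."""
    feature_db: dict                    # target info -> target feature information
    preset_range: tuple = (2.0, 4.0)    # allowed robot-to-target distance (metres)
    state: str = "idle"

    def handle_seek(self, instruction: str):
        """Parse a seek instruction such as 'seek:daughter' and look up features."""
        target = instruction.removeprefix("seek:")
        features = self.feature_db.get(target)
        if features is None:
            return None
        self.state = "searching"
        return features

    def step(self, frame_has_features: bool, distance: float) -> str:
        """One control tick: decide whether to drive the walking device."""
        if not frame_has_features:
            self.state = "searching"
            return "move"               # keep moving until the target is recognised
        lo, hi = self.preset_range
        self.state = "tracking"
        if distance < lo:
            return "back_off"
        if distance > hi:
            return "approach"
        return "hold"
```

A caller would first invoke `handle_seek` with the parsed instruction, then feed each video frame's recognition result and the measured distance into `step`.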
In a second aspect, the present invention further provides a robot video call control apparatus, including: at least one processor; and at least one memory communicably connected to the at least one processor, the at least one memory containing processor-executable instructions which, when executed by the at least one processor, cause the apparatus to perform at least the following operations: establishing a video call with a calling party and transmitting to the calling party the video stream obtained by the local camera unit; receiving a seek instruction initiated by the calling party, parsing the target information contained in the seek instruction, and determining the target feature information of the corresponding target object according to the target information; when the target object is not captured, starting the walking device to move the robot, performing image recognition on the video stream of the camera unit during the movement, and determining an image containing the target feature information so as to capture the target object; and, after the target object is captured, controlling the walking device so that the robot stays within a preset distance range of the target object.
In a third aspect, the present invention further provides a mobile robot capable of video calls, including a processor configured to run a program to perform the steps of the robot video call control method.
In a fourth aspect, the present invention further provides a computer program comprising computer readable code which, when run by the mobile robot capable of video calls, causes the robot video call control method to be executed.
In a fifth aspect, the present invention further provides a computer readable medium storing the computer program of the fourth aspect.
Compared with the prior art, the present invention has the following beneficial effects:
1. The robot video call control method, apparatus and terminal provided by the present invention use pre-stored images, relationships between persons, and contact information for establishing remote connections. When a person remotely initiates control, the robot establishes a video call with that person and transmits a video stream to him or her, uses image recognition technology to capture, from the dynamic video stream, the target object designated by the remote person, and realizes a video call between the remote person and the target object through the video camera unit.
2. In the process of capturing the target object, the present invention uses a pre-stored image of the target object as the feature information for recognition. After the target object is captured, extended feature information of the target object is collected, so that in subsequent captures the target object can be re-acquired quickly through the extended feature information.
3. The present invention provides a voice reminder, which ensures that the child receives video calls initiated by the parents in time and helps parents monitor the child's activity anytime and anywhere.
4. The method involves voice and/or infrared positioning, so that the robot can locate the child more quickly when searching for him or her, which greatly improves recognition speed and accuracy. After the child is captured, and throughout the interaction with the child, the ranging device keeps the robot within a preset distance range of the child at all times, ensuring that the child's information is received safely and clearly and that the child's state is captured over the widest possible range.
5. The present invention integrates video communication, mobility and human-computer interaction with entertainment and learning functions. Any person stored in the robot's database can issue an instruction to start the robot; by receiving and parsing the instruction, the robot completes and/or starts the tasks and functions it involves. The robot provided by the present invention can also, through observation and learning, offer the child human-computer interaction activities matched to the child's age and/or IQ, so that while accompanying the child the robot develops the child's intelligence as far as possible and reduces the child's loneliness, which gives it practical value in real life.
Brief Description of the Drawings
Figure 1 is a flowchart of a robot video call control method according to an embodiment of the present invention;
Figure 2 is a flowchart of a robot video call control method according to another embodiment of the present invention;
Figure 3 is a flowchart of a robot video call control method according to still another embodiment of the present invention;
Figure 4 is a flowchart of a robot video call control method according to yet another embodiment of the present invention;
Figure 5 is a schematic structural diagram of a robot video call control apparatus according to an embodiment of the present invention;
Figure 6 is a schematic diagram of a substructure of a robot video call control apparatus according to another embodiment of the present invention;
Figure 7 is a schematic structural diagram of a robot video call control method according to still another embodiment of the present invention;
Figure 8 is a schematic structural diagram of a robot video call control method according to yet another embodiment of the present invention;
Figure 9 is a block diagram of a mobile robot capable of video calls for performing the method according to the present invention; and
Figure 10 is a schematic diagram of a storage unit for holding or carrying program code implementing the method according to the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, in which the same or similar reference numerals denote, throughout, the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting it.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In addition, "connected" or "coupled" as used herein may include a wireless connection or a wireless coupling. The phrase "and/or" used herein includes all of, or any unit of, and all combinations of, one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have a meaning consistent with their meaning in the context of the prior art and, unless specifically defined as here, will not be interpreted with an idealized or overly formal meaning.
Those skilled in the art will understand that the "terminal" and "terminal device" used herein include both devices having a wireless signal receiver, that is, devices with only a wireless signal receiver and no transmitting capability, and devices having receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such devices may include: cellular or other communication devices with a single-line or multi-line display, or cellular or other communication devices without a multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, fax and/or data communication capabilities; PDAs (Personal Digital Assistants), which may include a radio frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio frequency receiver. The "terminal" and "terminal device" used herein may be portable, transportable, installed in a vehicle (air, sea and/or land), or suitable for and/or configured to run locally and/or to run, in distributed form, at any other location on the earth and/or in space. The "terminal" and "terminal device" used herein may also be a communication terminal, an Internet access terminal, or a music/video playback terminal, for example a PDA, a MID (Mobile Internet Device) and/or a mobile phone with a music/video playback function, or a device such as a smart TV or a set-top box.
The robot involved in the present invention can understand human language and converse with the operator in human language; through programming, an external environment in which it can "survive" is formed within its own "consciousness". It can analyze the situations that arise and adjust its own actions to meet the requirements the operator places on it. The robot can be programmed so that its intelligence reaches the level of a child. It can walk by itself, can "see" things and analyze what it sees, and can obey instructions and answer questions in human language. More importantly, it has the ability to "understand".
The robot video call control method of the present invention enables family members, from anywhere outside the home, to monitor the children at home anytime and anywhere through a communication terminal and to see what is happening at home; when the children at home miss their parents and/or other family members, they can likewise communicate with them in a timely manner through the robot. Meanwhile, in the process of capturing the target object, the robot promptly monitors changes in the extended features of the target object's extended parts and updates the stored extended feature information in time, so as to capture the target object, and in particular the target features of the target object, faster and more accurately. After a family member outside the home sends an instruction to the robot, the robot can receive the message in time and transmit video images of the situation at home. The robot of the present invention has a human-computer interaction function and can also play with the child, answer questions and help with learning.
The robot video call control method disclosed in the following embodiments, as shown in Figure 1, includes the following steps:
S100: Establish a video call with the calling party, and transmit the video stream obtained by the local camera unit to the calling party.
The information stored in the robot records the association between the robot and the family members, and connections are established with the family members' communication terminals. The calling party in step S100 is a family member: for example, the family members' mobile terminals such as mobile phones, computers and iPads all store an application connected to the robot, where the application may be an App that directly controls the robot, or a web page link for controlling the robot. To achieve real-time monitoring and child care, a family member may set up and enable the robot's video transmission function in advance, so that a video call with the family member's communication terminal is established directly, or so that the robot establishes a video call with the family member upon receiving a transmission instruction and transmits video images to the family member's terminal in real time. Alternatively, when the robot receives a call from a family member outside the home, made through the App that controls the robot or through the web application for controlling the robot, together with a video call request, it directly accepts the video call request and transmits the locally obtained video stream to the family member who initiated the video call, i.e. the calling party. The video stream transmitted by the robot is displayed on the calling party's mobile terminal in the App that controls the robot, or in the web application opened on the calling party's mobile terminal, thereby implementing a real-time video call.
S200: Receive a seek instruction initiated by the calling party, parse the target information contained in the seek instruction, and determine the target feature information of the corresponding target object according to the target information.
The robot accepts the call and sends the video stream back to the calling party, and the calling party can directly observe the situation at home through the video. If the calling party cannot see the desired target object in the video, the calling party can issue a seek instruction on his or her mobile terminal. The seek instruction contains information about the target, and this information is stored in the robot or in a cloud connected to the robot. After receiving the seek instruction, the robot parses, locally, the target information contained in the instruction, and then determines, according to the target information, the target feature information of the target object corresponding to the target. In the subsequent process, the robot searches for and identifies the target object on this basis.
Specifically, for example, the mother initiates a "find my daughter" seek instruction to the robot from her mobile terminal. The robot receives the instruction and parses its content locally; that is, it parses out and extracts the information "daughter", sends this information to the database that stores the target information "daughter" and its corresponding feature information, and determines the daughter's feature information through the stored target information. The storage relationship between specific target feature information and target information is detailed later. The feature information here consists of the daughter's facial features, such as the contour and position of the whole face and the facial organs. In the subsequent process, the robot searches for and identifies the daughter on this basis.
In another implementation of this step, after receiving the seek instruction, the robot sends it to the cloud; the cloud parses out the target information contained in the instruction and sends the target information back to the robot; the robot determines, according to the target information, the target feature information of the corresponding target object, and in the subsequent process searches for and identifies the target object on this basis.
Specifically, for example, the above-mentioned mother initiates a "find my daughter" seek instruction to the robot from her mobile terminal. After receiving the instruction, the robot forwards it to the cloud; the cloud parses the information in the instruction, that is, it parses out and extracts the information "daughter", and sends this information to the robot. The robot receives the parsed information, sends "daughter" to the database that stores the target information "daughter" and its corresponding feature information, and determines the daughter's feature information through the stored target information. In the subsequent process, the robot searches for and identifies the daughter on this basis.
In yet another implementation of this step, after receiving the seek instruction, the robot sends it to the cloud; the cloud parses out the target information contained in the instruction, queries a cloud database that stores target information and target feature information, determines in the cloud the target feature information of the target object corresponding to the target, and then sends the target feature information to the robot; in the subsequent process, the robot searches for and identifies the target object on this basis.
Specifically, for example, the above-mentioned mother initiates a "find my daughter" seek instruction to the robot from her mobile terminal. After receiving the instruction, the robot forwards it to the cloud; the cloud parses the information in the instruction, that is, it parses out and extracts the information "daughter", determines the daughter's feature information from this information, and sends the feature information to the robot. The robot directly receives the daughter's feature information and searches for and identifies the daughter on this basis.
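The parsing-and-lookup logic of S200 can be sketched as follows. This is an illustrative assumption: the instruction grammar, the `FEATURE_DB` contents and the optional cloud resolver are hypothetical names, not the claimed storage scheme.

```python
import re

# Hypothetical local database: target info -> target feature information.
FEATURE_DB = {
    "daughter": {"face_outline": "oval", "eye_positions": (102, 98)},
}

def parse_seek_instruction(instruction):
    """Extract the target info from an instruction such as 'find daughter'."""
    m = re.match(r"(?:find|seek)\s+(\w+)", instruction.strip(), re.IGNORECASE)
    return m.group(1).lower() if m else None

def resolve_features(target, cloud_lookup=None):
    """Prefer the local database; fall back to a cloud resolver if one is given."""
    if target in FEATURE_DB:
        return FEATURE_DB[target]
    if cloud_lookup is not None:
        return cloud_lookup(target)    # cloud-side embodiment of this step
    return None
```

The two cloud embodiments described above differ only in whether `parse_seek_instruction` or `resolve_features` runs on the robot or in the cloud.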
S300: When the target object is not captured, start the walking device to move the robot, perform image recognition on the video stream of the camera unit during the movement, and determine an image containing the target feature information so as to capture the target object.
After the target object and its feature information are determined in step S200, if image recognition determines that the acquired video stream contains no image with the target feature information, the robot starts its own walking device to move. During the movement, the robot uses image recognition technology on the video stream obtained by the camera unit to identify the images in the video, determines whether the video contains an image with the target feature information, and thereby captures the target object. The situations in which the object is not captured include the following: 1. The robot recognizes neither the target feature information nor the extended feature information corresponding to the target object in the images of the video stream, and after locating the target object by audio and/or infrared, measures that the distance to the target object is greater than the preset distance range between the robot and the target object; the walking device is started. 2. The robot recognizes neither the target feature information nor the extended feature information corresponding to the target object in the images of the video stream, and after locating the target object by audio and/or infrared, measures that the distance to the target object is smaller than the preset distance range; the robot starts the walking device. 3. While the robot has captured the target features of the target object and the distance to the target object remains within the preset range, if the contours in the video image from the camera unit become unclear, the robot does not start the walking device for the moment; once the video image is clear again, if neither the target feature information nor the extended feature information corresponding to the target object is recognized in the images of the video stream, the walking device is started after the robot locates the target object by audio and/or infrared. 4. While the robot has captured the target features of the target object and the distance remains within the preset range, if the target object suddenly moves away so that the robot can no longer recognize its target features, the walking device is started; if, after the target object moves away and before the robot locates its position, the robot finds through ranging that the target object is gradually approaching, the walking device is not started.
The walking device works as follows: the signal from the camera unit is converted into an electrical signal by a controller that controls the walking device and is electrically connected to it; the controller transfers the electrical signal to the drive device that starts the walking device, and the drive device starts the walking device to move the robot. The drive device may be a motor, and the walking device may be wheels, crawler tracks, a wheel-track combination, or the like.
Image recognition works as follows: a picture is first stored in the robot and used as a model. The robot's processor preprocesses this model and extracts the contours formed by its lines, the angles between lines, the relative positions of lines, the colors enclosed by the contours, and so on. After a video image is captured, in order to determine whether the current image contains the target object to be captured, the robot's processor preprocesses each frame of the image in turn, extracts the contours formed by lines, the angles between lines, the relative positions of lines, the colors enclosed by the contours, and so on, and compares and fits them against the picture model in the database. When the degree of fit reaches a set value, the captured target object is considered to be present in the video image.
Specifically, as in the case of the above-mentioned mother checking on her daughter through the robot: after the daughter's feature information is determined in step S200, the robot searches for the daughter at home based on this feature information. Starting from the position where it received the seek instruction, the robot first locates the daughter by audio and/or infrared. If the daughter is within the preset distance range but the robot cannot capture the daughter's target features in some frames or consecutive frames of the current video stream, the walking device is started and works with the camera unit to capture the daughter's target feature information. If, after locating the daughter by audio and/or infrared, the daughter is not within the preset distance range, the walking device is started and image recognition is performed on the video stream of the camera unit to capture the daughter's target features, such as the contour features of the daughter's face.
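The fit-score matching described for S300 can be sketched as follows. This is a deliberately simplified, pure-Python illustration under the assumption that each frame and the stored model have already been reduced to discrete feature descriptors (contours, angles, colors); a real system would extract these with a vision library.

```python
def fit_score(model, frame):
    """Fraction of model features (contour, angles, colors...) matched in a frame."""
    if not model:
        return 0.0
    hits = sum(1 for key, value in model.items() if frame.get(key) == value)
    return hits / len(model)

def find_target(frames, model, threshold=0.8):
    """Return the index of the first frame whose fit reaches the set value, else -1."""
    for i, frame in enumerate(frames):
        if fit_score(model, frame) >= threshold:
            return i
    return -1
```

When `find_target` returns -1 for the current video stream, the method above starts the walking device and keeps scanning new frames while moving.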
S400: After the target object is captured, control the walking device so that the robot stays within a preset distance range of the target object.
The robot may preset a distance range M between itself and the target object. After capturing the target object, the robot first measures the distance L to the target object with a measuring device mounted on it, such as an ultrasonic sensor. If M ≤ L, the robot is far from the target object and outside the preset distance range, so the robot moves via the walking device into the preset distance range of the target object, and while the target object moves, the robot always keeps within the preset distance range. If M ≥ L, the robot is already close to the target object, and at this point it only needs to maintain the preset distance range.
Specifically, as in the case of the above-mentioned mother looking for her daughter: in step S300, suppose the robot has found the daughter based on her feature information, and the preset distance between the robot and the daughter is 3 m. The robot measures, with its own measuring device, that the distance to the daughter is 6 m, which is greater than the preset distance of 3 m, so the robot moves via the walking device into the preset distance range of the daughter, and while the daughter walks, the robot always maintains the preset 3 m distance range.
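The distance-keeping decision of S400 can be sketched as a simple deadband controller. The function name and the tolerance parameter are illustrative assumptions; the numbers in the test mirror the example above (preset 3 m, measured 6 m).

```python
def distance_command(measured, preset, tolerance=0.5):
    """Return (action, magnitude in metres) to bring the robot near the preset distance.

    measured  -- distance L reported by the ranging sensor (e.g. ultrasonic)
    preset    -- preset distance M between robot and target object
    tolerance -- half-width of the acceptable band around the preset distance
    """
    error = measured - preset
    if error > tolerance:
        return ("approach", error)     # too far: close the gap
    if error < -tolerance:
        return ("retreat", -error)     # too close: back away
    return ("hold", 0.0)
```

Called on every ranging update, this keeps the robot in the band [preset - tolerance, preset + tolerance] while the target moves.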
S410: After the target object is captured, collect extended feature information of the target object beyond its target feature information; when the target feature information cannot be captured, locate the extended parts of the target object according to the extended feature information so as to keep the target object captured.
In order to quickly find and/or locate the target features of the target object later on, after the robot captures the target object it collects, from the video, extended feature information of the extended parts of the target object other than the target feature information. During the video call between the calling party and the target object through the robot, movement of the target object and/or of the robot may prevent the robot's video from capturing the target feature information of the target object; that is, in partial or consecutive frames of the current video stream the robot cannot clearly recognize the line contours corresponding to the target features, the angles between lines, the relative positions of lines, the colors enclosed by the contours, and so on. In that case the robot can use the extended feature information of the extended parts, beyond the target feature information, to quickly find and focus on the target feature information of the target object. For example, after the robot captures the target object, the target object may suddenly move away so that its target feature information can no longer be recognized. To recapture the target object quickly, the robot recognizes both the target feature information and the extended feature information of the target object in the current video stream; if an extended part of the target object is recognized from the extended feature information, the target object is captured according to that extended part, which is then used as the basis for locating the target features of the target object.
Specifically, after the robot captures the daughter as described above, the camera unit collects extended feature information of the daughter other than her facial feature information, such as the colors and styles of her clothes, pants and shoes, the color and shape of her hair, the color, shape and style of her hat, and the contours of her body, arms and legs. During the video call between the mother and the daughter through the robot, the daughter may alternately stand, sit and walk, which may prevent the robot from locating and capturing her facial feature information; the robot can then locate the target feature information and capture the target object through the extended feature information of the extended parts. For example, after the camera unit captures the daughter, when collecting features of her body it obtains the clothing colors of the body parts in partial or consecutive frames, then desaturates those frames at the ratio R:G:B = 3:6:1 into black-and-white images, and extracts the contour features of the body outline, which serve as extended feature information for identifying the target object in subsequent capture.
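The R:G:B = 3:6:1 desaturation mentioned above can be sketched as a per-pixel weighted sum. This is an illustrative sketch only, assuming each frame is a nested list of (R, G, B) tuples; integer arithmetic is used so the result is deterministic.

```python
def desaturate(frame):
    """Convert an RGB frame to grayscale with a 3:6:1 channel weighting."""
    return [
        [(3 * r + 6 * g + 1 * b) // 10 for (r, g, b) in row]
        for row in frame
    ]

# Pure red, green and blue pixels map to distinct gray levels (76, 153, 25),
# so differently colored clothing remains separable after decolorization.
gray = desaturate([[(255, 0, 0), (0, 255, 0), (0, 0, 255)]])
```

A contour extractor (e.g. edge detection) would then run on the grayscale frame to obtain the body-outline features described in the text.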
Further, after the extended parts of the target object are located according to the extended feature information, the walking device is activated to move around those extended parts and continue searching for the target feature information, and the target object is only considered captured once the target feature information has been located.
When the robot cannot capture the target feature information of the target object, it first compares the feature information captured by the camera unit with the previously collected extended feature information to locate the extended parts currently captured by the camera unit. The robot then activates the walking device and moves around the extended parts, continuing to search for the target feature information; only after the camera unit captures and locates the target feature information does the robot resume capturing the target object in the video call.
Specifically, during the video call between the mother and the daughter through the robot: while the daughter is sitting, the robot can capture her facial feature information, but once she stands up the robot can only capture part of her body. Suppose the daughter wears a pink dress in a round-neck, sleeveless tutu style. The robot can then use the previously collected extended feature information, namely the contour of the daughter's body and the color and style of the dress, to determine which part it is currently capturing, and from that part determine how to turn the camera unit. That is, from the collar and shoulder form of the dress and the feature information of the daughter's torso captured by the camera unit, the robot determines that the captured body part is the chest, and by tilting the camera unit upward it can capture the daughter's facial feature information. If the daughter stands up and turns around so that her back faces the robot's camera unit, the camera unit captures the extended feature information of the dress color and style on her torso and of her back contour; the robot then needs to activate the walking device to move around her torso while changing the camera unit's angle to search for the target feature information, and only after the camera unit finds the target feature information, the robot adjusts its distance to the daughter, and the target feature information is located, does the robot continue capturing the target object. As another example, while the robot has captured the daughter's target feature information and is keeping within the preset distance range, the contours in the camera unit's video image may become unclear; the robot then temporarily refrains from activating the walking device, and once the contours become clear again and image recognition on the video stream shows that the recognized extended feature information comes from the extended part of the daughter's back, it activates the walking device to move around that part and capture the daughter's facial feature information.
Preferably, the extended feature information is collected from the moving-scene image in the video stream that moves together with the image portion corresponding to the target feature information.
After the robot captures the target object, the target object is a dynamic image in the video stream relative to the other scenery. When collecting extended information, the robot collects extended features of the target object that move together with the target feature information in the video stream. For example, after the robot's camera unit captures the target object, with the facial feature information serving as the target feature information, the parts in partial or consecutive frames whose line contours change along with the line contours of the face, and which can be enclosed together with the face contour in one closed contour, are taken as extended parts; once the extended parts are determined, the robot collects their contours, the relative positions of the contours, and the colors enclosed by the contours.
Specifically, after the robot captures the daughter as described above, the daughter is a dynamic object in the video stream relative to the static items in the home; her torso, clothes, hair and so on all move together with her. When the daughter is identified from her facial feature information in the video stream, the robot collects, from partial or consecutive frames, the contours of the parts whose line contours change along with the contour of her face, the relative positions of those contours, and the colors they enclose; such line contours can be enclosed together with the face contour in one closed contour, and their positions relative to the face contour determine which part of the body they belong to. At the same time, whether the daughter is in motion is determined from whether the face contour changes position relative to other items in partial or consecutive frames, and whether the other parts follow the face's position changes relative to other items: if both hold, she is in motion; otherwise she is not. For example, while the daughter is walking, the robot reads partial or consecutive frames from the camera unit's video stream and finds that the contour features of her face change position relative to other items, and that other contours follow the face's position changes relative to other items; it therefore determines that the daughter is in motion. Those other contours can be wrapped together with the face contour in one closed contour, and from their position relative to the face contour they are determined to be the legs; the legs are therefore taken as an extended part, and their extended feature information is collected from partial or consecutive frames.
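The co-motion test described above can be sketched as a comparison of displacement vectors between frames: a region counts as an extended part only if its movement tracks the face's movement. This is a simplified sketch assuming hypothetical (x, y) contour centroids per frame; real input would come from the video stream.

```python
def moves_with_face(face_track, region_track, tol=2.0):
    """True if the region's frame-to-frame displacement follows the face's."""
    for (fx0, fy0), (fx1, fy1), (rx0, ry0), (rx1, ry1) in zip(
        face_track, face_track[1:], region_track, region_track[1:]
    ):
        # Compare displacement vectors of the face and the candidate region.
        dfx, dfy = fx1 - fx0, fy1 - fy0
        drx, dry = rx1 - rx0, ry1 - ry0
        if abs(dfx - drx) > tol or abs(dfy - dry) > tol:
            return False
    return True

face = [(0, 0), (5, 0), (10, 0)]        # face moves right across frames
legs = [(0, -20), (5, -20), (10, -20)]  # legs move with it -> extended part
lamp = [(30, 10), (30, 10), (30, 10)]   # static background item -> not
```

Here `moves_with_face(face, legs)` holds while `moves_with_face(face, lamp)` does not, mirroring how the daughter's legs, but not the home's static items, qualify as an extended part.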
Further, the target object information and the target feature information are stored in a database in a mapping relationship, and the target feature information is determined from the target object information by querying this database.
The target object information, target feature information and so on are stored in the database in one-to-one mapping relationships; once the target object information is determined, the corresponding target feature information can be looked up from it. The database may be a local database or a cloud database connected to the robot. With a local database, the target feature information can be determined locally once the target object information is obtained; with a cloud database, the robot sends the target object information to the cloud, and the cloud determines the corresponding target feature information and returns it to the robot.
Specifically, with Xiaohong as the target, at storage time the names commonly used for her, such as the mother's "daughter" (and the names other family members call her), are stored in correspondence with Xiaohong's facial feature information, together with the collected extended feature information, as shown in Table 1, which shows the storage relationship between target object information and target feature information in the database.
[Table 1: storage relationship between target object information and target feature information (image PCTCN2017116674-appb-000001 not reproduced)]
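The Table 1 mapping can be sketched as a simple lookup keyed on names and indicators. The record contents below are illustrative placeholders (not real feature vectors), and the schema is an assumption, since the table image is not reproduced.

```python
# Hypothetical mapping: names/indicators -> stored target and extended features.
TARGET_DB = {
    "Xiaohong": {
        "aliases": {"daughter", "Xiaohong"},
        "target_features": "xiaohong_face_contour",
        "extended_features": ["pink dress", "body contour"],
    }
}

def lookup_target_features(indicator: str):
    """Resolve a name/indicator from a seek instruction to stored features."""
    for record in TARGET_DB.values():
        if indicator in record["aliases"]:
            return record["target_features"]
    return None  # unknown target: the robot cannot resolve the instruction
```

With a cloud database, the same query would be issued remotely and the result returned to the robot, as described in the text.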
Further, in the step of controlling the walking device to keep the local machine within a preset distance range of the target object after the target object is captured, distance data between the local machine and the target object detected by the local distance sensor is acquired while the walking device runs; when the distance data exceeds the predetermined distance range, the walking device is controlled to start walking and move, and otherwise the walking device is controlled to stop walking and pause movement.
After the target object is captured, while the robot maintains its distance to the target object, its distance sensor stays in measurement mode, continuously measuring the distance between the robot and the target object. As the target object moves, when the distance between the robot and the target object exceeds the preset range, the robot automatically controls the walking device to start walking and move; when the distance sensor shows the distance is within the preset range, the robot automatically controls the walking device to stop, pausing the robot's movement.
Specifically, after the robot captures Xiaohong, while the walking device maintains the robot's distance to her, the distance sensor stays in measurement mode, continuously measuring the distance between the robot and the target object. As Xiaohong moves, when the distance between the robot and Xiaohong exceeds the preset range, the robot automatically controls the walking device to start walking and move; when the distance sensor shows the distance is within the preset range, the robot automatically stops the walking device and pauses. Starting to move only when the distance leaves the preset range makes the robot more intelligent.
The target object information is a name or an indicator of the target object.
After the instruction from the calling party is received in step S200, the target object information carried in the instruction is parsed out; the target object information is the name or an indicator of the target object, such as a person's name or "computer".
Specifically, suppose the seek instruction issued by the mother, as the calling party, at her terminal is "find the computer", containing the character information "find the computer". Step S200 parses out "computer" from "find the computer" and uses it as the indicator of the target object; the computer is the indicator, that is, the target object information. If the daughter's name is Xiaohong and the mother's seek instruction is "find Xiaohong", containing the character information "find Xiaohong", step S200 parses out "Xiaohong" and uses it as the indicator of the target object; the target object information "Xiaohong" is the name of the daughter as target object. The indicator may also be information about the target object stored on the calling party's terminal: when information about the target object is triggered on the terminal, the terminal generates an indicator corresponding to that target object and sends it to the robot. For example, the mother, as the calling party, stores on her own terminal information related to her daughter Xiaohong, such as the name "Xiaohong" or an image representing her; when the mother triggers the daughter's name or image on the terminal, the calling party's terminal generates an indicator for finding Xiaohong and sends it to the robot.
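The parsing in step S200 can be sketched for the text-instruction case. This assumes a fixed verb prefix ("find ..."), as in the "find Xiaohong" / "find the computer" examples; a real implementation would use speech recognition and more general natural-language parsing.

```python
def parse_seek_instruction(instruction: str):
    """Return the target indicator carried by a 'find ...' instruction."""
    prefix = "find "
    if not instruction.startswith(prefix):
        return None  # not a seek instruction
    target = instruction[len(prefix):].strip()
    # Drop a leading article so "find the computer" yields "computer".
    if target.startswith("the "):
        target = target[len("the "):]
    return target or None
```

`parse_seek_instruction("find Xiaohong")` yields `"Xiaohong"`, and `parse_seek_instruction("find the computer")` yields `"computer"`, matching the two examples above.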
Preferably, the target feature information is the facial feature information of the target object.
The target feature information of the target object is entered in advance. Changes in facial features best reflect a person's mood or current state, so at entry time facial features are preferred as the target feature information. This makes it easy for parents or family members who are away to read the emotions of the children and/or other family members at home from their facial expressions, whether during a video call or in recorded videos and/or photos.
In one embodiment, the method further includes the following step: after detecting that the extended feature information of the target object has changed, re-collect the extended feature information of the extended parts.
The robot may keep the extended feature information of the target object until it next changes; once the robot detects that the stored extended feature information of the target object has changed, it re-collects new extended feature information, so that after the change it can still quickly locate the target feature information and capture the target object when needed.
Specifically, if Xiaohong as described above wears a pink dress, the extended feature information stored in the database is likewise that of the pink dress. After Xiaohong changes into a white dress, the robot determines by capturing the target feature information that the person in the white dress is Xiaohong, finds that her clothing extended feature information has changed, and re-collects the extended feature information of her body and the white dress.
Preferably, the distance sensor is an ultrasonic sensor, an infrared sensor, or a binocular ranging camera device including the camera unit.
The distance sensor involved in step S400 is an ultrasonic sensor, an infrared sensor, or a binocular ranging camera device including the camera unit. The binocular ranging camera device is convenient to use and can give an initial estimate of the distance between the robot and the target object; ultrasonic ranging has small error and works well at long range, while infrared ranging has small error and works well at close range. By combining them, the present invention optimizes the robot's ranging error at both long and short range.
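The sensor combination described above can be sketched as regime selection: the binocular camera's rough estimate picks whichever rangefinder is accurate at that distance. The 1.0 m crossover below is an assumed value for illustration, not taken from the original text.

```python
def choose_sensor(rough_estimate_m: float) -> str:
    """Pick the ranging sensor suited to the roughly estimated distance."""
    # Infrared is accurate close up; ultrasonic is accurate farther out.
    return "infrared" if rough_estimate_m < 1.0 else "ultrasonic"
```

A fuller fusion scheme might weight both readings instead of switching hard, but the hard switch captures the complementarity the text relies on.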
Preferably, the extended feature information includes one or any combination of the following features: torso features, clothing features, facial contour features, hair contour feature information, or audio feature information.
In step S410, the extended feature information includes one or any combination of the following features: torso features, clothing features, facial contour features, hair contour feature information, or audio feature information.
Further, the local machine also includes an audio and/or infrared positioning unit; while capturing the target object, the local machine turns on the audio and/or infrared positioning unit to acquire the position of the target, so as to determine the initial heading of the walking device.
The robot also includes an audio and/or infrared positioning unit. While capturing the target object, it acquires the position of the target by turning on the audio and/or infrared positioning unit, and uses that position to determine the initial walking direction of the robot's walking device.
Specifically, after the target object is determined to be Xiaohong as described above, suppose Xiaohong is laughing directly in front of the robot; the robot picks up her audio through the audio positioning unit, localizes her directly ahead, and directly activates the walking device to move straight forward. As another example, the robot's infrared positioning unit emits infrared light and senses the infrared light reflected back by the surrounding scenery and environment; if it determines that Xiaohong is to the robot's front right, the robot activates the walking device and moves to the front right to look for her.
Further, while capturing the target object, when an obstacle is encountered, the local machine measures its distance to the obstacle with the distance sensor and controls the walking device to detour around and/or move away from the obstacle, continuing to capture the target object after doing so.
While looking for Xiaohong, the robot will inevitably encounter obstacles such as stools or walls in the home. It can likewise measure its distance to the obstacle with the distance sensor and, while keeping the bearing toward Xiaohong unchanged, control the walking device to detour around and/or move away from the stool or wall and continue capturing Xiaohong.
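The detour behavior can be sketched as a threshold rule on the obstacle range. The 0.3 m safety threshold and the command names are hypothetical, chosen only to illustrate the decision described above.

```python
def next_move(obstacle_dist_m: float, target_bearing: str,
              safety_m: float = 0.3) -> str:
    """Choose the walking-device command given the nearest obstacle range."""
    if obstacle_dist_m < safety_m:
        return "detour"                     # too close: go around the obstacle
    return "advance_" + target_bearing      # clear path: keep heading to target
```

The bearing toward the target stays fixed while the detour is executed, matching the text's requirement that the robot keeps its orientation toward Xiaohong.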
Further, the local machine also includes a voice reminder unit; when the local machine has moved within the distance range of the target object, the voice reminder unit is activated and a voice reminder is issued.
To ensure that the target object receives the message from the parent promptly when the calling party initiates a video call, once the robot captures the target object and moves within the preset distance range, it activates its voice reminder unit and issues a voice reminder.
Specifically, once the robot described above finds Xiaohong and moves within the preset distance range, it issues a voice reminder to her, for example: "Mom is calling, Mom is calling, Mom is calling; answer the phone, answer the phone, answer the phone."
In one embodiment, as shown in FIG. 3, the method further includes the following steps:
S500: After the calling party hangs up the video call, the robot's camera unit continues to collect video of the target object.
To let the calling party learn more about the child and the child's state at home, after the calling party hangs up, the robot continuously collects video of the child at home through the camera unit.
Specifically, when the mother, as the calling party, hangs up the video call with her daughter Xiaohong, the robot does not turn off the camera unit but keeps collecting video of Xiaohong playing, studying and so on at home.
S600: The robot sends the video to a terminal connected to it, and sends a text reminder and/or voice reminder to the terminal.
After a segment of video has been collected, the robot sends it to the terminals of the family members connected to it and/or to the cloud, sends a text reminder and/or voice reminder to the terminals, and then continues collecting the next segment.
Specifically, after recording a video of Xiaohong playing at home, the robot sends it to the family members' terminals, such as mobile phones, computers or iPads, and/or to the cloud connected to it, and once the video is sent successfully it sends the terminals a text reminder and/or voice reminder, such as "New video of Xiaohong playing". If the robot is not connected to the cloud, it sends only to the terminals; if it is connected, it sends to both the cloud and the terminals. If all terminals are off, it sends only to the cloud, and sends a reminder message when any terminal is turned on.
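The delivery rule above can be sketched with boolean flags standing in for real connectivity checks; the flag names are illustrative.

```python
def delivery_targets(cloud_connected: bool, any_terminal_on: bool):
    """Where a finished video segment should be sent."""
    if not cloud_connected:
        return ["terminals"]           # no cloud: terminals only
    if not any_terminal_on:
        return ["cloud"]               # all terminals off: cloud only
    return ["cloud", "terminals"]      # otherwise: both
```

The reminder message for the all-terminals-off case would be queued and delivered when a terminal comes back online, as the text describes.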
Preferably, while the robot is recording the target object, according to changes in the target object's facial features and/or audio features and/or an interaction instruction issued by the target object, the local machine activates a voice interaction unit and/or initiates a video call to a mobile terminal connected to the local machine.
While recording the target object through the camera unit, the robot reacts to changes in the target object's facial features: for example, if the child is crying, the robot activates its human-machine interaction unit to cheer the child up. It reacts to changes in the target object's audio features: for example, if the audio features indicate the child is having a tantrum, the robot activates its human-machine interaction unit to comfort the child. It also reacts to interaction instructions from the target object: for example, if the child asks the robot how to say "花朵" in English, the robot answers the child's question, "flower"; and if the child tells the robot to call Dad, the robot sends a video call request to Dad's mobile terminal.
Specifically, while recording Xiaohong's video, the robot determines from changes in her facial features and audio features that she is crying, and activates the human-machine interaction unit to tell her stories or jokes to cheer her up. If Xiaohong tells the robot she wants to listen to music, the robot sings to her; if she tells the robot she wants to learn Tang poetry, the robot determines her stage of intellectual development from the questions she usually asks, recites Tang poems suitable for that stage, and explains them.
Further, while collecting video of the target object, the camera unit also includes a photographing function, to photograph the target object according to changes in the target object's facial features and/or audio features and/or an interaction instruction issued by the target object.
The robot's camera unit also includes a photographing function. While the robot is recording the target object, when the target object's facial features change, for example when the target object is laughing happily, the camera unit snaps the target object's state at that moment; likewise, when the target object has been quietly talking to themselves alone, the camera unit snaps their state at that moment. If the target object tells the robot to take a photo of them with the dog, the robot activates the camera unit's photographing function according to the instruction and takes a photo of the target object with the dog.
In one embodiment, after the target object issues an interactive instruction, the method further includes the following steps:
S700: receive an interactive instruction from the target object.
The target feature information of the family members at home can be stored in the robot's local database and/or in a cloud database connected to the robot, so any member stored in a database can send interactive instructions to the robot. The robot first receives the interactive instruction issued by the current target object.
Specifically, suppose Xiaohong's family consists of Grandpa, Grandma, Dad, Mom, and Xiaohong herself, and the database stores the target feature information, i.e., facial feature information, of all family members. If Grandpa, Grandma, and Xiaohong are at home and the robot's currently identified target object is Xiaohong, then when several people send interactive instructions to the robot at the same time, only the instruction sent by Xiaohong is accepted.
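The acceptance rule above can be sketched as a small filter. The (speaker, instruction) data model and the member names are illustrative assumptions, not part of the claimed method:

```python
def filter_commands(current_target, commands):
    """Keep only the instructions issued by the currently identified target.

    commands: list of (speaker, instruction) pairs; when several family
    members speak to the robot at once, everything not coming from
    current_target is ignored.
    """
    return [instr for speaker, instr in commands if speaker == current_target]

# e.g. with Grandpa, Grandma and Xiaohong all speaking at once,
# only Xiaohong's instruction survives:
accepted = filter_commands("Xiaohong", [
    ("Grandpa", "play the news"),
    ("Grandma", "call my son"),
    ("Xiaohong", "tell me a story"),
])
```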
S800: parse the interaction information contained in the interactive instruction and extract an indicator corresponding to a functional unit of the robot.
After the robot acquires the target object's interactive instruction, it needs to parse the information contained in that instruction and extract the indicator corresponding to a local functional unit, so that the corresponding functional unit can be started.
Specifically, suppose the robot receives an interactive instruction from Xiaohong that reads "tell me the story of the little duck." The robot parses out "the story of the little duck" and "tell": the former becomes a search for "the story of the little duck" in the database or on the network, from which the story is retrieved, while "tell" is converted into the indicator that starts the speech unit.
S900: start the functional unit corresponding to the indicator.
In human-computer interaction, the interactive instruction issued by the target object contains a functional indication of what the target object wants. Based on the indicator parsed in step S800, the functional unit that fulfils the target object's purpose is started and the target object's instruction is executed.
Specifically, for Xiaohong's instruction "tell me the story of the little duck" above, after the parsing of step S800 the robot searches the database and/or the network for "the story of the little duck," retrieves it, and starts its speech function to tell Xiaohong the story. The database may be a local database or a cloud database; the robot may search the database and the network simultaneously, or search only the local database when there is no network connection.
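Steps S700 to S900 amount to a parse-and-dispatch loop. A minimal sketch, assuming a hypothetical verb-to-unit table (the patent does not fix any concrete vocabulary or unit names):

```python
# Hypothetical mapping from instruction verbs to functional-unit indicators.
UNIT_FOR_VERB = {
    "讲": "speech_unit",   # "tell (a story)" -> speech unit
    "唱": "speech_unit",   # "sing"           -> speech unit
    "拍": "camera_unit",   # "photograph"     -> camera unit
}

def parse_instruction(instruction):
    """S800: split an instruction into (indicator, content query)."""
    for verb, unit in UNIT_FOR_VERB.items():
        if verb in instruction:
            # Everything after the verb is the content to search for in the
            # local database and/or on the network.
            return unit, instruction.split(verb, 1)[1]
    return None, None

def dispatch(instruction, units):
    """S900: start the functional unit named by the parsed indicator."""
    indicator, query = parse_instruction(instruction)
    if indicator in units:
        return units[indicator](query)
    return None
```

With Xiaohong's example, `parse_instruction("给我讲小鸭子的故事")` yields the speech-unit indicator plus the query "小鸭子的故事" to look up.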
Further, the interactive instruction is a voice instruction issued by the target object and/or a press by the target object of a button on the robot corresponding to the functional unit.
The robot has a sensor for receiving voice, and physical function buttons for human-computer interaction are provided on the robot; if the robot has a touch screen, the function buttons may also be virtual touch keys.
As shown in FIG. 5, the present invention further provides a robot video call control apparatus, comprising the following modules:
S10: a video module, configured to establish a video call with a calling party and transmit the video stream acquired by the robot's camera unit to the calling party.
The information stored in the robot records the association between the robot and the family members, and connections are established with the family members' communication terminals; the calling party is a family member. For example, each family member's mobile terminal, such as a mobile phone, computer, or iPad, stores an application connected to the robot, which may be a dedicated robot-control app or a web link for controlling the robot. To enable real-time monitoring of a child, family members configure and enable the robot's video transmission function in advance: the video module S10 either directly establishes a video call with a family member's terminal, or establishes one upon receiving a transmission instruction from a family member, and transmits video images to the family member's terminal in real time. Alternatively, when a family member away from home places a call through the robot-control app or the control web application and issues a video call request, the video module S10 accepts the request directly and transmits the locally acquired video stream to that family member, i.e. the calling party. The video stream transmitted by the robot is displayed on the calling party's mobile terminal, either in the installed robot-control app or in the open web application, realizing a real-time video call.
S20: an analysis module, configured to receive a homing instruction initiated by the calling party, parse the target information contained in the homing instruction, and determine the target feature information of the corresponding target object according to the target information.
Once the video module S10 has established the video call and the robot is returning the video stream to the calling party, the calling party can observe the home directly through the video. If the calling party cannot see the desired target object in the video, the calling party can issue a homing instruction from his or her mobile terminal. The homing instruction contains information about the target, and that information is stored in the robot or in a cloud connected to the robot. After the analysis module S20 receives the homing instruction, it parses the target information contained in the instruction locally, then determines the target feature information of the target object corresponding to that target; in the subsequent process the robot searches for and identifies the target object on this basis.
Specifically, suppose the mother initiates a "find my daughter" homing instruction to the robot from her mobile terminal. The analysis module S20 receives the instruction and parses its information locally, i.e. extracts the item "daughter," sends it to the database that stores the target information "daughter" and its corresponding feature information, and through that stored target information determines that the daughter's feature information is her facial features, namely the outline and position of the whole face and its features. In the subsequent process the robot searches for and identifies the daughter on this basis.
In another implementation of this step, after receiving the homing instruction the analysis module S20 forwards it to the cloud; the cloud parses out the target information contained in the instruction and sends it back to the robot, which determines the target feature information of the corresponding target object from that target information. In the subsequent process the robot searches for and identifies the target object on this basis.
Specifically, for the mother's "find my daughter" instruction above, the analysis module S20 receives the instruction and sends it to the cloud; the cloud parses the instruction, i.e. extracts the item "daughter," and returns it to the robot. The robot receives the parsed information and sends "daughter" to the database storing that target information and its corresponding feature information (the storage relationship between target feature information and target information is detailed below), and determines the daughter's feature information from the stored target information. In the subsequent process the robot searches for and identifies the daughter on this basis.
In yet another implementation of this step, after receiving the homing instruction the analysis module S20 forwards it to the cloud; the cloud parses out the target information, sends it to the cloud database storing target information and target feature information, determines in the cloud the target feature information of the corresponding target object, and sends that feature information back to the robot. In the subsequent process the robot searches for and identifies the target object on this basis.
Specifically, for the mother's "find my daughter" instruction above, the analysis module S20 receives the instruction and sends it to the cloud; the cloud parses the instruction, i.e. extracts the item "daughter," determines the daughter's feature information from it, and sends that feature information to the robot. The robot thus receives the daughter's feature information directly, and searches for and identifies the daughter on this basis.
S30: a capture module, configured to, when the target object has not been captured, start the walking device to move the robot, perform image recognition on the camera unit's video stream during the movement, and identify images containing the target feature information so as to capture the target object.
After the analysis module S20 has determined the target object and its feature information, if image recognition finds no image carrying the target feature information in the acquired video stream, the robot starts its own walking device and moves. While moving, the robot applies image recognition to the video stream acquired by the camera unit, identifying the images in the video to determine whether any contains the target feature information, thereby capturing the target object. Here, "target not captured" covers the following situations: 1. the robot recognizes neither the target feature information nor the extended feature information of the target object in the video stream, and after locating the target object by audio and/or infrared it measures a distance to the target object greater than the preset robot-to-target distance range, so it starts the walking device; 2. the robot recognizes neither the target feature information nor the extended feature information in the video stream, and after locating the target object by audio and/or infrared it measures a distance smaller than the preset range, and it likewise starts the walking device; 3. while the robot has captured the target object's target features and is holding its distance within the preset range, the video image from the camera unit becomes blurred; the robot does not start the walking device for the moment, and only after the image is clear again, if neither the target feature information nor the extended feature information is recognized in the video stream and the target has been located by audio/infrared, does it start the walking device; 4. while the robot has captured the target features and is holding the preset distance, the target object suddenly moves away so that its target features can no longer be recognized, and the robot starts the walking device; if, however, after the target moves away and before the robot has located its position, ranging shows the target gradually approaching, the walking device is not started.
The robot's walking device operates on signals originating from the camera unit, which are converted into electrical signals by a controller that is electrically connected to and controls the walking device; the controller passes these signals to the drive that actuates the walking device, and the drive sets the walking device in motion. The drive may be a motor, and the walking device may use wheels, tracks, or a wheel-track hybrid. Image recognition works as follows: a picture is first stored in the robot as a model; the robot's processor preprocesses this model and extracts its line contours, the angles between lines, the relative positions of lines, the colors enclosed by the contours, and so on. After a video image is captured, to determine whether the current image contains the target object, the processor preprocesses each frame in turn, extracts the same kinds of features, and fits them against the picture model in the database; when the degree of fit reaches a set value, the captured target object is deemed present in the video image.
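The four "not captured" situations above reduce to a single start-or-stay decision. A sketch under assumed boolean inputs (the patent specifies no concrete interface for these signals):

```python
def should_start_walking(target_in_frame, image_clear, target_approaching):
    """Decide whether the robot starts its walking device.

    Mirrors situations 1-4 above: when the target's features are recognized
    in the frame, distance keeping takes over instead; a blurred image
    (situation 3) means wait; a target already closing in by itself
    (situation 4) means stay put; otherwise (situations 1 and 2, whatever
    the measured distance) go and search.
    """
    if target_in_frame:
        return False   # captured: hand over to the maintenance module S40
    if not image_clear:
        return False   # situation 3: wait until the video image is clear
    if target_approaching:
        return False   # situation 4: ranging shows the target coming closer
    return True        # situations 1 and 2: start the walking device
```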
Specifically, when the mother checks on her daughter through the robot as above, the analysis module S20 determines the daughter's feature information and the robot searches the home for the daughter on that basis. Starting from the position where it received the homing instruction, the robot locates the daughter by audio and/or infrared. If the daughter is within the preset distance range but the robot cannot capture her target features in some frames or consecutive frames of the current video stream, it starts the walking device and works with the camera unit to capture the daughter's target feature information. If audio and/or infrared localization shows the daughter is outside the preset distance range, the robot starts the walking device and performs image recognition on the camera unit's video stream to capture her target features, such as the contour features of her face.
S40: a maintenance module, configured to, once the target object has been captured, control the walking device so that a preset distance range is maintained between the robot and the target object.
A distance range between the robot and the target object can be preset. After the robot has found the target object, it first measures the distance to the target with a measuring device mounted on it. If the distance is large and outside the preset range, the robot moves via the walking device into the preset range, and as the target object moves, the robot keeps within the preset distance range at all times through the maintenance module S40.
Specifically, in the example of the mother looking for her daughter, once the capture module S30 has found the daughter from her feature information, the robot measures its distance to her with its own measuring device. If the distance is large and outside the preset range, the robot moves via the walking device into the preset range, and as the daughter walks, the robot keeps within the preset distance range at all times through the maintenance module S40.
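One control step of the maintenance module S40 can be sketched as follows; the step size, units, and signed-command convention are assumptions for illustration:

```python
def keep_distance(measured, band, step=0.2):
    """Return a signed move command in metres: positive drives toward the
    target, negative backs away, 0.0 holds position (already inside band)."""
    lo, hi = band
    if measured > hi:
        return min(step, measured - hi)    # too far: close the gap
    if measured < lo:
        return -min(step, lo - measured)   # too close: open the gap
    return 0.0
```

Called once per ranging measurement, this keeps the robot inside the preset band while the daughter moves about.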
Further, the capture module S30 further includes an acquisition unit S31, configured to, after the target object has been captured, collect extended feature information of the target object beyond its target feature information, and, when the target feature information cannot be captured, locate the augmented parts of the target object from the extended feature information so as to keep capturing the target object.
In order to quickly find and/or locate the target features of the target object later on, after the robot has captured the target object through the capture module S30, the acquisition unit S31 collects extended feature information of augmented parts of the target object beyond the target feature information. During the video call between the calling party and the target object through the robot, movement of the target object and/or of the robot may prevent the camera unit from capturing the target object's target feature information, i.e. in some frames or consecutive frames of the current video stream the robot cannot clearly recognize the line contours, the angles between lines, the relative line positions, the enclosed colors, and so on that correspond to the target features. The robot can then use the extended feature information of the augmented parts to quickly find and refocus on the target object's target feature information. For example, after the robot has captured the target object, the target object may suddenly move away so that its target feature information can no longer be recognized; to recapture it quickly, the robot looks for both the target feature information and the extended feature information in the current video stream. If an augmented part of the target object is recognized through the extended feature information, the target object is captured via that part, and its target features are then located on that basis.
Specifically, after the capture module S30 has captured the daughter as described above, the acquisition unit S31 collects extended feature information about her beyond the facial feature information, such as the color and style of her clothes, trousers, and shoes; the color and shape of her hair; the color, shape, and style of her hat; and the contours of her body, arms, and legs. While the mother and daughter talk through the robot's video call, the daughter may alternately stand, sit, and walk, which can prevent the robot from locating and capturing her facial feature information; the robot can then locate the target feature information and maintain capture through the extended feature information of the extended parts. For example, after the camera unit captures the daughter, when collecting features of her body parts it records the color of the clothing on the body in some frames or consecutive frames, desaturates those frames to black and white with an R:G:B ratio of 3:6:1, extracts the contour features of the body outline, and uses them as extended feature information to identify the target object in subsequent capture.
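The 3:6:1 desaturation mentioned above is a weighted luminance. A per-pixel sketch (the function name is an assumption; a real pipeline would operate on whole frames):

```python
def to_gray(r, g, b):
    """Convert one RGB pixel to gray with the R:G:B = 3:6:1 weighting cited
    above, i.e. gray = 0.3*R + 0.6*G + 0.1*B, rounded to an integer."""
    return round(0.3 * r + 0.6 * g + 0.1 * b)
```

These weights are close to the standard ITU-R BT.601 luma coefficients (0.299, 0.587, 0.114), so the result resembles an ordinary grayscale conversion.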
Further, the acquisition unit S31 further includes a positioning unit S311, configured to, after an augmented part of the target object has been located from the extended feature information, start the walking device to move around that augmented part and keep searching for the target feature information, resuming target object capture only once the target feature information has been located.
When the robot cannot capture the target object's target feature information, it first compares the feature information currently captured by the capture module S30 with the extended feature information previously collected by the acquisition unit S31, and through the positioning unit S311 locates the extended part now being captured. The robot then starts the walking device and moves around the augmented part, continuing to search for the target feature information; only after the camera unit has captured and located the target feature information does it continue capturing the target object in the video call.
Specifically, during the video call between mother and daughter, when the daughter is sitting, the capture module S30 can capture her facial feature information; after she stands up, it can capture only her body. Suppose she is wearing a pink dress with a round neck and a sleeveless puffed skirt. The robot can identify the currently captured part from the previously collected extended feature information, i.e. the contour of the daughter's body and the color and style of the dress, and from that part determine how to turn the camera unit: from the dress's collar and shoulders and the features of the daughter's torso it concludes that it is seeing her chest, and by tilting the camera unit upward it can capture her facial feature information. If instead the daughter stands up and turns around so that her back faces the robot's camera unit, what the camera captures is the extended feature information of her torso: the color and style of the dress from behind and the contour of her back. The robot must then start the walking device, move around the torso, and change the camera angle to search for the target feature information; only after the camera unit finds the target feature information does the robot adjust its distance to the daughter, and only after the positioning unit S311 has located the target feature information does the capture module S30 continue capturing her. As another example, while the robot has captured the daughter's target feature information and is keeping within the preset distance, the camera's video image may become blurred; the robot does not start the walking device for the moment, and once the image is clear again, if image recognition on the video stream yields extended feature information from the augmented part on the daughter's back, the walking device is started to move around that part and capture her facial feature information.
Preferably, the extended feature information is collected from moving-scene images in the video stream that move together with the image portion corresponding to the target feature information.
After the robot has captured the target object, the target object appears in the video stream as a moving image relative to the rest of the scene. When collecting extended information, the robot collects extended features on parts of the target object that move together with the target feature information. For example, after the camera unit has captured the target object with facial feature information as the target feature information, any part whose line contours change along with the face's line contours in some frames or consecutive frames, and which can be enclosed together with the facial contour within one closed outline, is treated as an augmented part; once an augmented part is determined, its contour, the relative positions of its contours, and the colors the contours enclose are collected.
Specifically, after the robot captures the daughter as above, the daughter is a dynamic object in the video stream relative to the static items in the home, and her torso, clothes, hair and so on all move together with her. When the daughter is identified as the target object from her facial feature information in the video stream, the robot collects, from partial or consecutive frames, the contours of those parts whose line contours change along with changes in the contour of the daughter's face, the relative positions of those contours, and the colors they enclose; such line contours can be enclosed together with the facial contour within a single closed contour, and the body part to which each contour belongs is determined from its position relative to the facial contour. At the same time, whether the daughter is in motion is determined from whether the position of the facial contour changes relative to other items across partial or consecutive frames, and whether the positions of the other parts change relative to other items in step with the face; if both hold, she is in motion, otherwise she is not. For example, while the daughter is walking, the robot reads partial or consecutive frames of the camera unit's video stream and finds that the contour features of the face change position relative to other items, and that other contours follow the changes in the facial contour's position relative to other items; the daughter is therefore determined to be in motion. A contour that can be enclosed together with the facial contour in a single closed contour, and whose position relative to the facial contour identifies it as a leg, is determined to be an augmentation site, and the extended feature information of that augmentation site is collected from the partial or consecutive frames.
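As an illustration only, the co-movement test described above can be sketched as follows. This is a toy model, not the patent's implementation: contour positions are reduced to one-dimensional coordinates per frame, and the tolerance value is an invented parameter. A part qualifies as an augmentation site when its frame-to-frame displacement tracks the face's displacement while the face itself is moving relative to the static background.

```python
# Toy sketch of the co-movement test: a part is an augmentation site when its
# contour moves across frames in step with the face contour. Positions are
# simplified to 1-D coordinates; `tolerance` is an illustrative parameter.

def is_augmentation_site(face_positions, part_positions, tolerance=0.2):
    """True when the part's frame-to-frame displacement tracks the face's,
    and the face is actually moving (otherwise nothing is 'co-moving')."""
    face_moves = [b - a for a, b in zip(face_positions, face_positions[1:])]
    part_moves = [b - a for a, b in zip(part_positions, part_positions[1:])]
    tracks_face = all(abs(f - p) <= tolerance
                      for f, p in zip(face_moves, part_moves))
    face_is_moving = any(abs(f) > 0 for f in face_moves)
    return tracks_face and face_is_moving
```

A leg contour moving with the face would pass this test; a static stool behind the daughter would fail it, since its displacement stays at zero while the face moves.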
Further, the analysis module S20 further includes a query unit S21. The target object information and the target feature information are stored in a database in a mapping relationship, and the query unit S21 is configured to query this database so as to determine the target feature information according to the target object information.
The target object information and the target feature information are stored in the database in a one-to-one mapping. After the target object information is determined, the query unit S21 can look up the target feature information through the target object information. The database may be a local database or a cloud database connected to the robot. With a local database, the target feature information can be determined directly on the robot once the target object information is obtained; with a cloud database, the robot sends the target object information to the cloud, and after the cloud determines the corresponding target feature information, it returns the target feature information to the robot.
Specifically, taking Xiaohong as the target object, at storage time the common name her mother uses for her, "daughter" (along with the common names used by other family members), is stored in correspondence with Xiaohong's facial feature information, together with the collected extended feature information, as shown in Table 2.
Table 2 shows the storage relationship between the target object information and the target feature information in the database.
Figure PCTCN2017116674-appb-000002
Further, the maintaining module S40 further includes a measuring unit S41, configured so that, in the step of controlling the walking device to keep the robot within a preset distance range of the target object after the target object has been captured, the robot obtains, while the walking device operates, the distance data between itself and the target object detected by its distance sensor; when the distance data exceeds the preset distance range, the walking device is controlled to start walking and move, and otherwise the walking device is controlled to stop walking and pause movement.
After the target object is captured, while the robot maintains its distance from the target object, the distance sensor of its measuring unit S41 remains in a measuring state, measuring the distance between the robot and the target object. As the target object moves, when the distance between the robot and the target object exceeds the preset range, the robot automatically controls the walking device to start walking and move; if the distance sensor shows that the distance between the robot and the target object is within the preset range, the robot automatically controls the walking device to stop so that the robot pauses.
Specifically, after the robot captures Xiaohong as above, while the walking device maintains the robot's distance from her, the distance sensor of the measuring unit S41 remains in a measuring state, measuring the distance between the robot and Xiaohong. As Xiaohong moves, when the distance between the robot and Xiaohong exceeds the preset range, the robot automatically controls the walking device to start walking and move; when the distance sensor shows that the distance is within the preset range, the robot automatically controls the walking device to stop so that the robot pauses; once the distance again exceeds the preset range, the robot automatically controls the walking device to start walking and move, making the robot more intelligent.
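The distance-keeping rule of measuring unit S41 reduces to a simple decision per sensor reading, sketched below. The numeric range is an assumed example; the patent only specifies a "preset distance range", not its values.

```python
# Minimal sketch of the S41 distance-keeping rule: move while the sensed
# distance lies outside the preset range, pause otherwise. The bounds are
# illustrative values, not from the patent.

PRESET_RANGE = (0.5, 2.0)  # metres, assumed

def walking_command(distance, preset_range=PRESET_RANGE):
    """Return 'move' when the distance falls outside the range, else 'pause'."""
    low, high = preset_range
    if distance > high or distance < low:
        return "move"
    return "pause"
```

Evaluating this on every sensor reading while the walking device runs yields the follow-and-pause behavior described in the example.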
Further, the target object information is a name or an indicator of the target object.
After the analysis module S20 receives the instruction from the calling party, it parses out the target object information carried in the instruction; the target object information is the name or an indicator of the target object, such as a person's name or "computer".
Specifically, suppose the mother, as the calling party, issues the seek instruction "find the computer" from her terminal, carrying the character information "find the computer"; the analysis module S20 parses out "computer" and takes it as the indicator of the target object, i.e. the target object information. Likewise, if the daughter's name is Xiaohong and the mother issues the seek instruction "find Xiaohong", carrying the character information "find Xiaohong", the analysis module S20 parses out "Xiaohong" and takes it as the name of the daughter as the target object, i.e. the target object information. In addition, the indicator may also be information about the target object stored on the calling party's terminal: when the information about the target object is triggered on the terminal, the terminal generates from that information an indicator corresponding to the target object and sends it to the robot. For example, the mother, as the calling party, stores on her terminal information related to her daughter Xiaohong, such as the name "Xiaohong" or an image representing her; when the mother triggers the name or the image on the terminal, the terminal generates an indicator for finding Xiaohong and sends that indicator to the robot.
Preferably, the target feature information is facial feature information of the target object.
The target feature information of the target object is entered in advance. Changes in facial features best reflect a person's mood or current state, so at entry time facial features are preferably used as the target feature information. This makes it convenient for parents or family members away from home, during a subsequent video call or from captured videos and/or photos, to first observe the emotions of the children and/or other family members at home through their facial expressions.
In one embodiment, the collecting unit S31 further includes a monitoring unit S312, configured to re-collect the extended feature information of the augmentation site after detecting that the extended feature information of the target object has changed.
The robot may retain the extended feature information of the target object until the next time the extended feature information changes. Once the monitoring unit S312 detects that the stored extended feature information of the target object has changed, new extended feature information is re-collected, so that when the target object needs to be captured after the change, the robot can quickly locate the target feature information of the target object and capture it.
Specifically, suppose Xiaohong as above is wearing a pink dress, and the extended feature information stored in the database likewise describes a pink dress. After Xiaohong changes into a white dress, the robot determines by capturing the target feature information that the person wearing the white dress is Xiaohong; the monitoring unit S312 then finds that her clothing extended feature information has changed, and re-collects the extended feature information of Xiaohong's body and the white dress.
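The behavior of monitoring unit S312 can be summarized as a compare-and-refresh step, sketched here. The record shape (a dict of clothing attributes) is an illustrative assumption.

```python
# Illustrative sketch of monitoring unit S312: compare the newly observed
# extended features against the stored record and re-collect when they differ.
# The record format is invented for the example.

def update_extended_features(stored, observed):
    """Return (record, recollected): the record to keep, and whether a
    re-collection happened because the features changed."""
    if stored != observed:
        return observed, True   # e.g. pink dress replaced by a white dress
    return stored, False
```

Running this whenever the target is re-identified keeps the database's extended features in step with what the camera currently sees.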
Preferably, the distance sensor is an ultrasonic sensor, an infrared sensor, or a binocular ranging camera device that includes the camera unit.
The distance sensor in the maintaining module S40 is an ultrasonic sensor, an infrared sensor, or a binocular ranging camera device that includes the camera unit. The binocular ranging camera device is convenient to use and can give a preliminary estimate of the distance between the robot and the target object; ultrasonic ranging has a small error and works well at long range, while the infrared sensor has a small error and works well at close range. By combining them, the present invention optimizes the robot's ranging error at both near and far distances.
Preferably, the extended feature information includes one or any combination of the following features: torso part features, clothing part features, facial contour features, hair contour feature information, or audio feature information.
The extended feature information collected by the collecting unit S31 includes one or any combination of the following features: torso part features, clothing part features, facial contour features, hair contour feature information, or audio feature information.
Further, the capturing module S30 includes a positioning unit S311, which uses the robot's audio and/or infrared positioning unit: in the course of capturing the target object, the robot turns on the audio and/or infrared positioning unit to obtain the position of the target object, so as to determine the initial heading of the walking device.
The robot further includes an audio and/or infrared positioning unit. In the course of capturing the target object, the robot obtains the position of the target object by turning on the audio and/or infrared positioning unit, and uses it to determine the initial walking direction of the robot's walking device.
Specifically, suppose that after the target object is determined to be Xiaohong as above, Xiaohong is laughing directly in front of the robot. The robot picks up Xiaohong's audio through the audio positioning unit and locates her position as directly ahead, whereupon the robot directly starts the walking device and moves straight ahead. As another example, if the robot determines, from the infrared light emitted by its infrared positioning unit and reflected back by the surrounding scene and environment, that Xiaohong's position is to its front right, the robot starts the walking device and moves to the front right to look for her.
Further, the measuring unit S41 is also configured so that, in the course of capturing the target object, when an obstacle is encountered, the robot measures its distance from the obstacle through the distance sensor, controls the walking device to go around and/or move away from the obstacle, and continues capturing the target object after doing so.
While looking for Xiaohong, the robot will inevitably encounter obstacles such as stools and walls in the home. It can likewise measure the distance between itself and the obstacle through the distance sensor of the measuring unit S41 and, with the bearing toward Xiaohong unchanged, control the walking device to go around and/or move away from the stool or wall and continue capturing Xiaohong.
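One reading of this obstacle rule is a per-step decision between pursuing the target and detouring, sketched below. The safety threshold is an invented parameter, and reducing "go around while keeping the bearing" to a returned action label is a deliberate simplification.

```python
# Hedged sketch of S41's obstacle handling: if the sensed obstacle distance
# drops below a safety threshold, detour around it while keeping the target
# bearing unchanged; otherwise keep pursuing. The threshold is assumed.

SAFETY_DISTANCE = 0.3  # metres, illustrative

def next_action(obstacle_distance, target_bearing):
    """Return (action, bearing): detour near obstacles, pursue otherwise.
    The target bearing is preserved in both cases."""
    if obstacle_distance < SAFETY_DISTANCE:
        return ("detour", target_bearing)
    return ("pursue", target_bearing)
```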
Further, after the maintaining module S40 there is also a voice module S50, as shown in Fig. 6, configured to start the voice reminding unit and issue a voice reminder when the robot has moved to within the distance range of the target object.
To ensure that the target object receives the message from the parents in time when the calling party initiates a video call, when the robot has captured the target object and moved to within the preset distance range of the target object, the robot's voice reminding unit is started and a voice reminder is issued.
Specifically, once the robot as above has found Xiaohong and moved to within the preset distance range of her, it issues a voice reminder to Xiaohong, such as: "Mom is calling, Mom is calling, Mom is calling; answer the phone, answer the phone, answer the phone."
Further, as shown in Fig. 7, the video module S10 further includes:
S11: a shooting unit, configured so that after the calling party hangs up the video call, the robot's camera unit continues to capture video of the target object;
To ensure that the calling party can learn more about the child and the child's state at home, after the calling party hangs up, the robot continues capturing video of the child at home through the shooting unit S11.
Specifically, suppose the mother as the calling party has hung up the video call with her daughter Xiaohong, but the robot has not turned off the camera unit; the shooting unit S11 keeps capturing video of Xiaohong playing, studying and so on at home.
S12: a transmission unit, by which the robot sends the video to a terminal connected to it and sends a text reminder and/or voice reminder to the terminal.
After the shooting unit S11 finishes capturing a segment of video, the robot sends the captured video through the transmission unit S12 to the terminals of the connected family members and/or to the cloud, and sends a text reminder and/or voice reminder to the terminals through the transmission unit S12; after one segment of video has been captured, capture of the next segment continues.
Specifically, after the robot's shooting unit S11 as above has captured a video of Xiaohong playing at home, it sends it through the transmission unit S12 to the terminals of family members, such as a mobile phone, computer or iPad, and/or to the cloud connected to the robot. After the video is sent successfully, a text reminder and/or voice reminder is sent to the family members' terminals through the transmission unit S12, for example: "There is a video of Xiaohong playing." If the robot is not connected to the cloud, the video is sent only to the terminals; if it is connected, the video is sent to both the cloud and the terminals; and if the terminals are all off, the video is sent only to the cloud, with the reminder message sent when any terminal is turned on.
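The routing rules in this example can be condensed into a small decision function, sketched below under one assumed reading: when all terminals are off but the cloud is reachable, the reminder is deferred until a terminal comes online.

```python
# Sketch of the S12 transmission rules (assumed reading): send to the cloud
# when connected and to terminals when any is online; when the cloud is
# reachable but every terminal is off, defer the reminder message.

def route_video(cloud_connected, terminals_online):
    """Return (targets, reminder_deferred) for one captured video segment."""
    targets = []
    if cloud_connected:
        targets.append("cloud")
    if terminals_online:
        targets.append("terminals")
    reminder_deferred = cloud_connected and not terminals_online
    return targets, reminder_deferred
```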
Preferably, a starting unit 60 is further included, configured so that while the robot is capturing the target object, according to changes in the facial features and/or audio features of the target object and/or an interaction instruction issued by the target object, the robot starts a voice interaction unit and/or initiates a video call to a mobile terminal connected to the robot.
While the robot is capturing video of the target object through the camera unit, according to changes in the target object's facial features, for example if the child is crying, the robot's starting unit 60 starts the robot's human-machine interaction unit to cheer the child up. According to changes in the target object's audio features, for example if the robot determines from the audio features that the child is throwing a tantrum, the starting unit 60 starts the human-machine interaction unit to comfort the child. According to an interaction instruction issued by the target object, for example if the child asks the machine how to say "flower" in English, the robot answers the child's question: the English for the flower is "flower". As another example, if the child as the target object asks the robot to call Dad, the starting unit 60 starts the video module S10 and issues a video call request to Dad's mobile terminal.
Specifically, while capturing video of Xiaohong as above, the robot determines from changes in her facial features and audio features that she is crying, and the starting unit 60 starts the human-machine interaction unit to tell Xiaohong a story or a joke to cheer her up. As another example, when Xiaohong gives the robot the instruction "I want to listen to a song", the starting unit 60 starts the song-playing function and sings to her; and when Xiaohong tells the robot "I want to learn Tang poetry", the robot determines Xiaohong's stage of intellectual development from her usual questions, recites Tang poems suited to her stage of learning, and explains them.
Further, the shooting unit S11 is also configured so that, in the course of capturing video of the target object, the camera unit further includes a photographing function, so as to photograph the target object according to changes in the target object's facial features and/or audio features and/or an interaction instruction issued by the target object.
The shooting unit S11 of the robot further includes a photographing function. While the robot captures video of the target object, when the target object's facial features change, for example when the target object is smiling happily, the camera unit snaps the target object's state at that moment; likewise, when the target object has been quietly talking to himself or herself alone, the camera unit also snaps the state at that moment. As another example, when the target object tells the robot "take a photo of me and the dog", the robot starts the photographing function of the camera unit according to the instruction and takes a photo of the target object together with the dog.
In one embodiment, as shown in Fig. 8, after the transmission unit S12 there are further included:
S13: a receiving unit, configured to receive an interaction instruction from the target object;
The target feature information of the family members at home can be stored in the robot's local database and/or a cloud database connected to the robot, so any member stored in the database can send interaction instructions to the robot; the robot first receives the interaction instruction issued by the current target object.
Specifically, suppose Xiaohong's family consists of Grandpa, Grandma, Dad, Mom and Xiaohong herself, and the database stores the target feature information, that is, the facial feature information, of all family members; the people currently at home are Grandpa, Grandma and Xiaohong. If the target object currently identified is Xiaohong, then when several people send interaction instructions to the robot at the same time, only the interaction instruction sent by Xiaohong is accepted.
S14: an analyzing unit, configured to parse the interaction information contained in the interaction instruction and extract an indicator corresponding to a functional unit of the robot;
After the robot obtains the target object's interaction instruction, it needs to parse the information contained in the instruction and extract from it the indicator corresponding to a functional unit of the robot, so as to start that functional unit.
Specifically, suppose the robot as above receives an interaction instruction sent by Xiaohong: "tell me the story of the little duck". The robot parses out "the story of the little duck" and "tell" from the instruction, turns "the story of the little duck" into a search of the database or network for "the story of the little duck" and extracts the result, and turns "tell" into the indicator that starts the speech unit.
S15: a starting unit, which starts the functional unit corresponding to the indicator.
In human-machine interaction, the interaction instruction issued by the target object contains a functional indication capable of achieving the target object's purpose. According to the indicator parsed out by the analyzing unit S14, the functional unit that achieves the target object's purpose is started, and the instruction issued by the target object is executed.
Specifically, for the interaction instruction "tell me the story of the little duck" issued by Xiaohong as above, after parsing by the analyzing unit S14, "the story of the little duck" is searched for through the database and/or the network and extracted, and the robot's speech function is started to tell Xiaohong "the story of the little duck". The database may be a local database or a cloud database; the search may query the database and the network at the same time, or only the local database when there is no network connection.
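The parse-then-dispatch pipeline of units S13 through S15 can be sketched as keyword extraction followed by a lookup into a table of functional units. The keyword list and unit names here are invented for the example; a real system would use speech recognition and richer intent parsing.

```python
# Illustrative sketch of units S13-S15: parse an interaction instruction,
# extract the indicator matching a functional unit, and dispatch to it.
# Keywords and unit names are invented for the example.

FUNCTION_UNITS = {
    "tell": "speech_unit",   # e.g. "tell me the story of the little duck"
    "sing": "music_unit",    # e.g. "sing me a song"
    "call": "video_module",  # e.g. "call Dad"
}

def handle_instruction(text):
    """Return (unit, payload) for the first recognised keyword, else None."""
    for keyword, unit in FUNCTION_UNITS.items():
        if keyword in text:
            payload = text.replace(keyword, "", 1).strip()
            return unit, payload
    return None
```

The payload, here "the story of the little duck", would then be the query sent to the local database and/or the network before the unit runs.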
Further, the interaction instruction is a voice instruction issued by the target object and/or a press by the target object of a key on the robot corresponding to the functional unit.
The robot has a sensor for receiving speech, and physical function keys for human-machine interaction are provided on the robot; if the robot is equipped with a touch screen, the function keys may also be virtual touch keys.
The present invention further provides a terminal, including a processor configured to run a program to execute the steps of the robot video call control method. For example: the robot establishes connections with the mother's mobile terminals, such as a mobile phone, computer or iPad, through a connection application, the terminal having downloaded an App for controlling and connecting to the robot, and the robot transmits to the mother's mobile terminal the video stream of the home situation obtained by its camera unit. The mother wants to see on the mobile terminal the current state of her daughter Xiaohong at home, but the video stream at this moment does not contain an image of Xiaohong, so the mother initiates a "find my daughter" seek instruction to the robot from the mobile terminal. The robot receives the "find my daughter" seek instruction and parses the information in it on the robot itself, that is, it parses out and extracts the information "daughter" and locally determines from it the daughter's feature information, which is the daughter's facial features, namely the contours and positions of the whole face and the facial organs. Based on this feature information the robot looks for the daughter in the home: first, at the position where it received the seek instruction, the robot's camera unit rotates 360 degrees and performs image recognition on the camera unit's video stream to capture the daughter's target features. If the daughter is not captured, the robot starts its walking device and moves, acquiring the video stream through the camera unit while moving and checking by image recognition whether the daughter's facial feature information is present in the current video image. If the robot finds the daughter according to her feature information, it measures its distance from her through its own measuring device; if the distance is large and outside the preset range, the robot moves via the walking device to within the preset distance range of the daughter, and as the daughter walks, the robot always maintains the preset distance range from Xiaohong.
Further, the processor of this embodiment can also implement the other steps of the methods of the above embodiments; for the specific functions and implementations of the processor, reference may be made to the embodiments in the method section above, which will not be repeated here.
Fig. 9 shows a video-call-capable mobile robot (hereinafter collectively referred to as the device) that can implement robot video call control according to the present invention. The device conventionally includes a processor 1010 and a computer program product or computer-readable medium in the form of a memory 1020. The memory 1020 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. The memory 1020 has a storage space 1030 for program code 1031 for executing any of the steps of the above methods. For example, the storage space 1030 for program code may include individual pieces of program code 1031 each implementing one of the various steps of the above methods. The program code may be read from or written to one or more computer program products. These computer program products include program code carriers such as a hard disk, a compact disc (CD), a memory card, or a floppy disk. Such a computer program product is typically a portable or fixed storage unit as described with reference to Fig. 10. The storage unit may have storage segments or storage spaces arranged similarly to the memory 1020 in Fig. 9. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit includes program code 1031' for executing the method steps according to the present invention, that is, code that can be read by a processor such as 1010, which, when run by the device, causes the device to execute the steps of the methods described above.
Those skilled in the art will appreciate that the technical solution is applicable not only to robots that keep watch at home and entertain and educate children, but also to robots of the monitoring, floor-sweeping and similar types capable of video and/or calls and/or human-machine interaction, as well as to machines imitating other creatures, such as robot dogs and robot cats. The target object in this solution may be a human being, or an animal in the home and/or other items such as a computer, a mobile phone or a switch. Computer program instructions may also be used to implement each block of these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks therein. Those skilled in the art will appreciate that these computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing method for implementation, so that the schemes specified in a block or blocks of the structural diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention are executed by the processor of the computer or other programmable data processing method.
本技术领域技术人员可以理解,本发明中已经讨论过的各种操作、方法、流程中的步骤、措施、方案可以被交替、更改、组合或删除。进一步地,本发明中已经讨论过的各种操作、方法、流程中的其他步骤、措施、方案也可以被交替、更改、重排、分解、组合或删除。进一步地,现有技术中与本发明中公开的各种操作、方法、流程中的步骤、措施、方案相同或等同的步骤、措施、方案,也可以被交替、更改、重排、分解、组合或删除。Those skilled in the art will appreciate that the steps, measures and schemes in the various operations, methods and flows discussed in the present invention can be alternated, altered, combined or deleted. Further, other steps, measures and schemes in the various operations, methods and flows discussed in the present invention can also be alternated, altered, rearranged, decomposed, combined or deleted. Further, steps, measures and schemes in the prior art that are the same as or equivalent to those in the various operations, methods and flows disclosed in the present invention can also be alternated, altered, rearranged, decomposed, combined or deleted.
以上所述仅是本发明的部分实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本发明原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本发明的保护范围。 The above are only some of the embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (41)

  1. 一种机器人视频通话控制方法,包括如下步骤:A robot video call control method includes the following steps:
    建立与呼叫方的视频通话,向呼叫方传输本机摄像单元获取的视频流;Establishing a video call with the calling party, and transmitting the video stream obtained by the local camera unit to the calling party;
    接收所述呼叫方发起的寻的指令,解析所述寻的指令所包含的目标物信息,依据目标物信息确定相应的目标对象的目标特征信息;Receiving a seek instruction initiated by the calling party, parsing the target object information contained in the seek instruction, and determining target feature information of the corresponding target object according to the target object information;
    在未捕捉到所述目标对象时,启动行走装置执行本机移动,在移动过程中对摄像单元的视频流进行图像识别,确定包含所述目标特征信息的图像,以捕捉目标对象;When the target object is not captured, the walking device is activated to move the robot; during the movement, image recognition is performed on the video stream of the camera unit to determine an image containing the target feature information, so as to capture the target object;
    当捕捉到所述目标对象后,控制行走装置使本机与目标对象之间保持预设距离范围。After capturing the target object, the walking device is controlled to maintain a preset distance range between the local machine and the target object.
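The seek-and-capture flow of claim 1 can be illustrated with the following minimal Python sketch. The frame representation (a string of visible feature labels) and substring matching stand in for real video frames and image recognition; all names are illustrative assumptions, not the patented implementation.

```python
def seek_target(frames, target_features):
    """Step through successive camera frames (one per movement step of the
    walking device) and return the index of the first frame in which the
    target feature information is recognized, or -1 if never captured."""
    for step, frame in enumerate(frames):
        if target_features in frame:  # stand-in for real image recognition
            return step               # target captured: stop searching
    return -1                         # target not found along the path

# The robot "moves" past three frames before the child's face appears.
frames = ["sofa", "doorway", "kitchen", "child_face kitchen"]
assert seek_target(frames, "child_face") == 3
assert seek_target(frames, "cat") == -1
```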
  2. 根据权利要求1所述的控制方法,其特征在于,还包括步骤:The control method according to claim 1, further comprising the steps of:
    当捕捉到所述目标对象后,采集所述目标对象的属于其目标特征信息之外的扩展特征信息,在不能捕捉所述目标特征信息时,依据所述扩展特征信息定位所述目标对象的扩增部位实现目标对象捕捉。After the target object is captured, extended feature information of the target object other than its target feature information is collected; when the target feature information cannot be captured, the augmented part of the target object is located according to the extended feature information to achieve capture of the target object.
  3. 根据权利要求2所述的控制方法,其特征在于,依据所述扩展特征信息定位所述目标对象的扩增部位之后,启动行走装置环绕该扩增部位继续搜寻所述目标特征信息,直到定位到该目标特征信息之后方才实现目标对象捕捉。The control method according to claim 2, wherein after the augmented part of the target object is located according to the extended feature information, the walking device is activated to circle the augmented part and continue searching for the target feature information, and capture of the target object is achieved only after the target feature information is located.
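Claims 2-3 describe a fallback: when the primary (e.g. face) feature is not visible, the robot locates the target through extended features (torso, clothing, hair) and then circles that part to re-acquire the primary feature. A hedged sketch of this decision, assuming recognized features arrive as a plain Python set; the names are illustrative only:

```python
def capture_state(visible, target_feature, extended_features):
    """Classify how the target is currently tracked: by its primary
    feature, by an extended feature as a fallback, or not at all.
    `visible` is the set of features recognized in the current frame."""
    if target_feature in visible:
        return ("captured", target_feature)
    for ext in extended_features:
        if ext in visible:
            # Locate the augmented part; per claim 3 the robot would then
            # circle it until the primary feature is found again.
            return ("fallback", ext)
    return ("lost", None)

assert capture_state({"face", "torso"}, "face", ["torso"]) == ("captured", "face")
assert capture_state({"torso"}, "face", ["torso", "clothing"]) == ("fallback", "torso")
assert capture_state(set(), "face", ["torso"]) == ("lost", None)
```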
  4. 根据权利要求2所述的控制方法,所述扩展特征信息采集自所述视频流中与所述目标特征信息相对应的图像部分共同运动的动景图像。The control method according to claim 2, wherein the extended feature information is collected from the moving-scene image in the video stream that moves together with the image portion corresponding to the target feature information.
  5. 根据权利要求1所述的控制方法,其特征在于,所述目标物信息与目标特征信息之间以映射关系存储于数据库中,通过查询该数据库而实现依据目标物信息确定目标特征信息。The control method according to claim 1, wherein the target object information and the target feature information are stored in a database in a mapping relationship, and the target feature information is determined from the target object information by querying the database.
  6. 根据权利要求1所述的控制方法,其特征在于,当捕捉到所述目标对象后,控制行走装置使本机与目标对象之间保持预设距离范围的步骤中,伴随所述行走装置的运行,获取本机距离传感器侦测的与目标对象之间的距离数据,当该距离数据超过所述预定距离范围时,控制行走装置开始行走而执行移动,否则控制行走装置停止行走而暂停移动。The control method according to claim 1, wherein in the step of controlling the walking device to maintain a preset distance range between the robot and the target object after the target object is captured, distance data between the robot and the target object detected by the local distance sensor is acquired while the walking device runs; when the distance data exceeds the predetermined distance range, the walking device is controlled to start walking and move; otherwise the walking device is controlled to stop walking and pause the movement.
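The distance-keeping rule of claim 6 amounts to a simple bang-bang controller: walk when the measured distance leaves the preset range, pause when it is back inside. A minimal sketch; the numeric range and the "backward" branch for the too-close case are assumptions for illustration (the claim only distinguishes walking from pausing):

```python
def walk_command(distance, min_dist=0.5, max_dist=1.5):
    """Map one distance-sensor reading (meters) to a walking-device
    command. The default range is an illustrative value, not from the
    patent."""
    if distance > max_dist:
        return "forward"   # too far: start walking to close the gap
    if distance < min_dist:
        return "backward"  # too close: back off (assumed behavior)
    return "stop"          # within range: stop walking, pause movement

assert walk_command(2.0) == "forward"
assert walk_command(1.0) == "stop"
assert walk_command(0.2) == "backward"
```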
  7. 根据权利要求1所述的控制方法,其特征在于,所述目标物信息为目标对象的名称或者指示符。The control method according to claim 1, wherein the target information is a name or an indicator of the target object.
  8. 根据权利要求1所述的控制方法,其特征在于,所述目标特征信息为所述目标对象的脸部特征信息。The control method according to claim 1, wherein the target feature information is face feature information of the target object.
  9. 根据权利要求2所述的控制方法,其特征在于,还包括如下步骤:监测到所述目标对象的扩展特征信息发生变化后,重新采集所述扩增部位的所述扩展特征信息。The control method according to claim 2, further comprising the step of: after it is monitored that the extended feature information of the target object has changed, re-collecting the extended feature information of the augmented part.
  10. 根据权利要求6所述的控制方法,其特征在于,所述距离传感器为超声波传感器、红外线传感器或者包含所述摄像单元在内的双目测距摄像装置。The control method according to claim 6, wherein the distance sensor is an ultrasonic sensor, an infrared sensor, or a binocular ranging imaging device including the imaging unit.
  11. 根据权利要求2所述的控制方法,其特征在于,所述扩展特征信息包括一个或任意多个以下特征:躯干部位特征、服装部位特征、脸部轮廓特征、毛发轮廓特征信息或音频特征信息。The control method according to claim 2, wherein the extended feature information comprises one or any of a plurality of features: a torso part feature, a clothing part feature, a face outline feature, a hair outline feature information, or audio feature information.
  12. 根据权利要求1所述的控制方法,其特征在于,本机还包括音频和/或红外定位单元,在捕捉所述目标对象过程中,本机开启音频和/或红外定位单元获取所述目标物位置,以确定行走装置的起始走向。The control method according to claim 1, wherein the robot further comprises an audio and/or infrared positioning unit; in the process of capturing the target object, the robot activates the audio and/or infrared positioning unit to acquire the position of the target object so as to determine the starting direction of the walking device.
  13. 根据权利要求1所述的控制方法,其特征在于,在捕捉所述目标对象过程中,在遇到障碍物时,本机通过距离传感器测量本机与所述障碍物的距离,控制行走装置绕行和/或远离所述障碍物,在绕行和/或远离所述障碍物后继续捕捉所述目标对象。The control method according to claim 1, wherein in the process of capturing the target object, when an obstacle is encountered, the robot measures the distance between itself and the obstacle with the distance sensor, controls the walking device to detour around and/or move away from the obstacle, and continues to capture the target object after detouring around and/or moving away from the obstacle.
  14. 根据权利要求1所述的控制方法,其特征在于,所述本机还包括语音提醒单元,当本机移动到与所述目标对象的距离范围内时,启动所述语音提醒单元,并发出语音提醒。The control method according to claim 1, wherein the robot further comprises a voice reminder unit; when the robot moves within the distance range of the target object, the voice reminder unit is activated and a voice reminder is issued.
  15. 根据权利要求1所述的控制方法,其特征在于,还包括如下步骤:The control method according to claim 1, further comprising the steps of:
    所述呼叫方挂断所述视频通话后,所述机器人摄像单元持续采集所述目标对象的视频;After the calling party hangs up the video call, the robot camera unit continuously collects the video of the target object;
    所述机器人将所述视频发送到与其连接的终端,并向所述终端发送文字提醒和/或语音提醒。The robot sends the video to a terminal connected to it, and sends a text reminder and/or a voice reminder to the terminal.
  16. 根据权利要求15所述的控制方法,其特征在于,所述机器人在采集所述目标对象时,根据所述目标对象脸部特征和/或音频特征的变化和/或所述目标对象发出的交互指令,所述本机启动语音交互单元和/或向与所述本机连接的移动终端发起视频通话。The control method according to claim 15, wherein when the robot is collecting video of the target object, the robot activates the voice interaction unit and/or initiates a video call to the mobile terminal connected to the robot according to a change in the facial features and/or audio features of the target object and/or an interaction instruction issued by the target object.
  17. 根据权利要求15所述的控制方法,其特征在于,在采集所述目标对象视频过程中,所述摄像单元还包括拍照功能,以根据所述目标对象的脸部特征和/或音频特征的变化和/或所述目标对象发出的交互指令对所述目标对象进行拍照。The control method according to claim 15, wherein in the process of collecting video of the target object, the camera unit further comprises a photographing function, so as to photograph the target object according to a change in the facial features and/or audio features of the target object and/or an interaction instruction issued by the target object.
  18. 根据权利要求15所述的控制方法,其特征在于,所述目标对象发出交互指令后,还包括如下步骤:The control method according to claim 15, wherein after the target object issues an interactive instruction, the method further includes the following steps:
    接收所述目标对象的交互指令;Receiving an interactive instruction of the target object;
    解析所述交互指令中包含的交互信息,提取与本机功能单元相对应的指示符;Parsing the interaction information included in the interaction instruction, and extracting an indicator corresponding to the local function unit;
    启动与所述指示符相对应的功能单元。A functional unit corresponding to the indicator is activated.
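The three steps of claim 18 (parse the interaction information, extract the indicator matching a local functional unit, start that unit) can be sketched as a dictionary dispatch. The indicator-to-unit table and all names are invented for illustration, not taken from the patent:

```python
# Hypothetical indicator table; real indicators would come from the
# robot's configured functional units.
UNITS = {"photo": "camera_unit", "song": "music_unit", "call": "video_call_unit"}

def dispatch(instruction):
    """Extract the first indicator found in the instruction text (spoken
    command or clicked key label) and return the corresponding functional
    unit to start, or None if no known indicator is present."""
    for indicator, unit in UNITS.items():
        if indicator in instruction:
            return unit
    return None

assert dispatch("take a photo of me") == "camera_unit"
assert dispatch("hello there") is None
```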
  19. 根据权利要求18所述的控制方法,其特征在于,所述交互指令为所述目标物发出的语音指令和/或所述目标物在本机上点击的与所述功能单元对应的按键。The control method according to claim 18, wherein the interaction instruction is a voice instruction issued by the target object and/or a key, corresponding to the functional unit, clicked by the target object on the robot.
  20. 一种机器人视频通话控制装置,包括:A robot video call control device includes:
    至少一个处理器;At least one processor;
    以及,至少一个存储器,其与所述至少一个处理器可通信地连接;所述至少一个存储器包括处理器可执行的指令,当所述处理器可执行的指令由所述至少一个处理器执行时,致使所述装置执行至少以下操作:And at least one memory communicatively connected to the at least one processor; the at least one memory comprises processor-executable instructions which, when executed by the at least one processor, cause the apparatus to perform at least the following operations:
    建立与呼叫方的视频通话,向呼叫方传输本机摄像单元获取的视频流;Establishing a video call with the calling party, and transmitting the video stream obtained by the local camera unit to the calling party;
    接收所述呼叫方发起的寻的指令,解析所述寻的指令所包含的目标物信息,依据目标物信息确定相应的目标对象的目标特征信息;Receiving a seek instruction initiated by the calling party, parsing the target object information contained in the seek instruction, and determining target feature information of the corresponding target object according to the target object information;
    在未捕捉到所述目标对象时,启动行走装置执行本机移动,在移动过程中对本机摄像单元的视频流进行图像识别,确定包含所述目标特征信息的图像,以捕捉目标对象;When the target object is not captured, the walking device is activated to move the robot; during the movement, image recognition is performed on the video stream of the local camera unit to determine an image containing the target feature information, so as to capture the target object;
    当捕捉到所述目标对象后,控制行走装置使本机与目标对象之间保持预设距离范围。 After capturing the target object, the walking device is controlled to maintain a preset distance range between the local machine and the target object.
  21. 根据权利要求20所述的控制装置,其特征在于,所述操作还包括:在捕捉到所述目标对象后,采集所述目标对象的属于其目标特征信息之外的扩展特征信息,在不能捕捉所述目标特征信息时,依据所述扩展特征信息定位所述目标对象的扩增部位实现目标对象捕捉。The control device according to claim 20, wherein the operations further comprise: after the target object is captured, collecting extended feature information of the target object other than its target feature information; when the target feature information cannot be captured, locating the augmented part of the target object according to the extended feature information to achieve capture of the target object.
  22. 根据权利要求21所述的控制装置,其特征在于,所述操作还包括:依据所述扩展特征信息定位所述目标对象的扩增部位之后,启动行走装置环绕该扩增部位继续搜寻所述目标特征信息,直到定位到该目标特征信息之后方才实现目标对象捕捉。The control device according to claim 21, wherein the operations further comprise: after the augmented part of the target object is located according to the extended feature information, activating the walking device to circle the augmented part and continue searching for the target feature information, and achieving capture of the target object only after the target feature information is located.
  23. 根据权利要求21所述的控制装置,所述扩展特征信息采集自所述视频流中与所述目标特征信息相对应的图像部分共同运动的动景图像。The control device according to claim 21, wherein the extended feature information is collected from the moving-scene image in the video stream that moves together with the image portion corresponding to the target feature information.
  24. 根据权利要求20所述的控制装置,其特征在于,所述目标物信息与目标特征信息之间以映射关系存储于数据库中,所述操作还包括:查询该数据库而实现依据目标物信息确定目标特征信息。The control device according to claim 20, wherein the target object information and the target feature information are stored in a database in a mapping relationship, and the operations further comprise: querying the database to determine the target feature information from the target object information.
  25. 根据权利要求20所述的控制装置,其特征在于,所述操作还包括:当捕捉到所述目标对象后,控制行走装置使本机与目标对象之间保持预设距离范围的步骤中,伴随所述行走装置的运行,获取本机距离传感器侦测的与目标对象之间的距离数据,当该距离数据超过所述预定距离范围时,控制行走装置开始行走而执行移动,否则控制行走装置停止行走而暂停移动。The control device according to claim 20, wherein the operations further comprise: in the step of controlling the walking device to maintain a preset distance range between the robot and the target object after the target object is captured, acquiring distance data between the robot and the target object detected by the local distance sensor while the walking device runs; when the distance data exceeds the predetermined distance range, controlling the walking device to start walking and move; otherwise controlling the walking device to stop walking and pause the movement.
  26. 根据权利要求20所述的控制装置,其特征在于,所述目标物信息为目标对象的名称或者指示符。The control device according to claim 20, wherein the target information is a name or an indicator of the target object.
  27. 根据权利要求20所述的控制装置,其特征在于,所述目标特征信息为所述目标对象的脸部特征信息。The control device according to claim 20, wherein the target feature information is face feature information of the target object.
  28. 根据权利要求21所述的控制装置,其特征在于,所述操作还包括:监测到所述目标对象的扩展特征信息发生变化后,重新采集所述扩展部位的所述扩展特征信息。The control device according to claim 21, wherein the operation further comprises: after detecting that the extended feature information of the target object changes, re-acquiring the extended feature information of the extended portion.
  29. 根据权利要求25所述的控制装置,其特征在于,所述距离传感器为超声波传感器、红外线传感器或者包含所述摄像单元在内的双目测距摄像装置。 The control device according to claim 25, wherein the distance sensor is an ultrasonic sensor, an infrared sensor, or a binocular ranging imaging device including the imaging unit.
  30. 根据权利要求21所述的控制装置,其特征在于,所述扩展特征信息包括一个或任意多个以下特征:躯干部位特征、服装部位特征、脸部轮廓特征、毛发轮廓特征信息或音频特征信息。The control apparatus according to claim 21, wherein said extended feature information comprises one or any of a plurality of features: a trunk part feature, a clothing part feature, a face outline feature, hair outline feature information, or audio feature information.
  31. 根据权利要求20所述的控制装置,其特征在于,所述操作还包括:在捕捉所述目标对象的过程中,开启本机音频和/或红外定位单元获取所述目标物位置,以确定行走装置的起始走向。The control device according to claim 20, wherein the operations further comprise: while capturing the target object, activating the local audio and/or infrared positioning unit to acquire the position of the target object so as to determine the starting direction of the walking device.
  32. 根据权利要求20所述的控制装置,其特征在于,所述操作还包括:在捕捉所述目标对象过程中,在遇到障碍物时,本机通过距离传感器测量本机与所述障碍物的距离,控制行走装置绕行和/或远离所述障碍物,在绕行和/或远离所述障碍物后继续捕捉所述目标对象。The control device according to claim 20, wherein the operations further comprise: in the process of capturing the target object, when an obstacle is encountered, measuring the distance between the robot and the obstacle with the distance sensor, controlling the walking device to detour around and/or move away from the obstacle, and continuing to capture the target object after detouring around and/or moving away from the obstacle.
  33. 根据权利要求20所述的控制装置,其特征在于,所述操作还包括:在本机移动到与所述目标对象的距离范围内时,启动所述语音提醒单元,并发出语音提醒。The control device according to claim 20, wherein the operations further comprise: when the robot moves within the distance range of the target object, activating the voice reminder unit and issuing a voice reminder.
  34. 根据权利要求20所述的控制装置,其特征在于,所述操作还包括:所述呼叫方挂断所述视频通话后,所述本机摄像单元持续采集所述目标对象的视频;The control device according to claim 20, wherein the operation further comprises: after the calling party hangs up the video call, the local camera unit continuously collects a video of the target object;
    所述机器人将所述视频发送到与其连接的终端,并向所述终端发送文字提醒和/或语音提醒。The robot sends the video to a terminal connected to it, and sends a text reminder and/or a voice reminder to the terminal.
  35. 根据权利要求34所述的控制装置,其特征在于,还包括:The control device according to claim 34, further comprising:
    所述机器人在采集所述目标对象时,根据所述目标对象脸部特征和/或音频特征的变化和/或所述目标对象发出的交互指令,所述本机启动语音交互单元和/或向与所述本机连接的移动终端发起视频通话。When the robot is collecting video of the target object, the robot activates the voice interaction unit and/or initiates a video call to the mobile terminal connected to the robot according to a change in the facial features and/or audio features of the target object and/or an interaction instruction issued by the target object.
  36. 根据权利要求34所述的控制装置,其特征在于,所述操作还包括:在采集所述目标对象视频过程中,所述本机摄像单元还包括拍照功能,根据所述目标对象的脸部特征和/或音频特征的变化和/或所述目标对象发出的交互指令对所述目标对象进行拍照。The control device according to claim 34, wherein the operations further comprise: in the process of collecting video of the target object, the local camera unit further comprises a photographing function, so as to photograph the target object according to a change in the facial features and/or audio features of the target object and/or an interaction instruction issued by the target object.
  37. 根据权利要求34所述的控制装置,其特征在于,所述操作还包括:The control device according to claim 34, wherein the operation further comprises:
    接收所述目标对象的交互指令; Receiving an interactive instruction of the target object;
    解析所述交互指令中包含的交互信息,提取与本机功能单元相对应的指示符;Parsing the interaction information included in the interaction instruction, and extracting an indicator corresponding to the local function unit;
    启动与所述指示符相对应的功能单元。A functional unit corresponding to the indicator is activated.
  38. 根据权利要求37所述的控制装置,其特征在于,所述交互指令为所述目标物发出的语音指令和/或所述目标物在本机上点击的按键。The control device according to claim 37, wherein the interaction instruction is a voice instruction issued by the target object and/or a key clicked by the target object on the device.
  39. 一种可视频通话移动式机器人,包括处理器,该处理器用于执行权利要求1-19中任意一项所述的机器人视频通话控制方法。A video callable mobile robot comprising a processor for performing the robot video call control method according to any one of claims 1-19.
  40. 一种计算机程序,包括计算机可读代码,当可视频通话移动式机器人运行所述计算机可读代码时,导致权利要求1-19中的任一项权利要求所述的方法被执行。A computer program comprising computer-readable code which, when run by a video-call-capable mobile robot, causes the method according to any one of claims 1-19 to be performed.
  41. 一种计算机可读介质,其中存储了如权利要求40所述的计算机程序。 A computer readable medium storing the computer program of claim 40.
PCT/CN2017/116674 2016-12-15 2017-12-15 Robot video call control method, device and terminal WO2018108176A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611157928.6 2016-12-15
CN201611157928.6A CN106791565A (en) 2016-12-15 2016-12-15 Robot video calling control method, device and terminal

Publications (1)

Publication Number Publication Date
WO2018108176A1 true WO2018108176A1 (en) 2018-06-21

Family

ID=58888280

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/116674 WO2018108176A1 (en) 2016-12-15 2017-12-15 Robot video call control method, device and terminal

Country Status (2)

Country Link
CN (1) CN106791565A (en)
WO (1) WO2018108176A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110221602A (en) * 2019-05-06 2019-09-10 上海秒针网络科技有限公司 Target object method for catching and device, storage medium and electronic device
CN114079696A (en) * 2020-08-21 2022-02-22 海能达通信股份有限公司 Terminal calling method and device and electronic equipment

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791565A (en) * 2016-12-15 2017-05-31 北京奇虎科技有限公司 Robot video calling control method, device and terminal
CN107659608A (en) * 2017-07-24 2018-02-02 北京小豆儿机器人科技有限公司 A kind of emotional affection based on endowment robot shows loving care for system
CN107516367A (en) * 2017-08-10 2017-12-26 芜湖德海机器人科技有限公司 A kind of seat robot control method that personal identification is lined up based on hospital
CN107825428B (en) * 2017-12-08 2020-12-11 子歌教育机器人(深圳)有限公司 Operating system of intelligent robot and intelligent robot
CN108073112B (en) * 2018-01-19 2024-02-20 冠捷电子科技(福建)有限公司 Intelligent service type robot with role playing function
CN110191300B (en) * 2019-04-26 2020-02-14 特斯联(北京)科技有限公司 Visual call equipment and system of unmanned parking lot

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040190754A1 (en) * 2003-03-31 2004-09-30 Honda Motor Co., Ltd. Image transmission system for a mobile robot
US20060147001A1 (en) * 2004-12-07 2006-07-06 Young-Guk Ha System and method for service-oriented automatic remote control, remote server, and remote control agent
CN103718125A (en) * 2011-08-02 2014-04-09 微软公司 Finding a called party
CN103926912A (en) * 2014-05-07 2014-07-16 桂林赛普电子科技有限公司 Smart home monitoring system based on home service robot
CN104800950A (en) * 2015-04-22 2015-07-29 中国科学院自动化研究所 Robot and system for assisting autistic child therapy
CN105856260A (en) * 2016-06-24 2016-08-17 深圳市鑫益嘉科技股份有限公司 On-call robot
CN106162037A (en) * 2016-08-08 2016-11-23 北京奇虎科技有限公司 A kind of method and apparatus carrying out interaction during video calling
CN106791565A (en) * 2016-12-15 2017-05-31 北京奇虎科技有限公司 Robot video calling control method, device and terminal

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6925357B2 (en) * 2002-07-25 2005-08-02 Intouch Health, Inc. Medical tele-robotic system
CN102025964A (en) * 2010-05-07 2011-04-20 中兴通讯股份有限公司 Video message leaving method and terminal
CN102176222B (en) * 2011-03-18 2013-05-01 北京科技大学 Multi-sensor information collection analyzing system and autism children monitoring auxiliary system
CN104656653A (en) * 2015-01-15 2015-05-27 长源动力(北京)科技有限公司 Interactive system and method based on robot
CN105301997B (en) * 2015-10-22 2019-04-19 深圳创想未来机器人有限公司 Intelligent prompt method and system based on mobile robot
CN105182983A (en) * 2015-10-22 2015-12-23 深圳创想未来机器人有限公司 Face real-time tracking method and face real-time tracking system based on mobile robot


Also Published As

Publication number Publication date
CN106791565A (en) 2017-05-31


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17881594

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17881594

Country of ref document: EP

Kind code of ref document: A1