CN114603557A - Robot projection method and robot - Google Patents

Robot projection method and robot

Info

Publication number
CN114603557A
CN114603557A
Authority
CN
China
Prior art keywords
projection
space
robot
dimensional
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210225542.3A
Other languages
Chinese (zh)
Other versions
CN114603557B (en)
Inventor
王嘉晋
张飞刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pengxing Intelligent Research Co Ltd
Original Assignee
Shenzhen Pengxing Intelligent Research Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pengxing Intelligent Research Co Ltd filed Critical Shenzhen Pengxing Intelligent Research Co Ltd
Priority to CN202210225542.3A priority Critical patent/CN114603557B/en
Publication of CN114603557A publication Critical patent/CN114603557A/en
Application granted granted Critical
Publication of CN114603557B publication Critical patent/CN114603557B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/08Projecting images onto non-planar surfaces, e.g. geodetic screens
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a robot projection method and a robot, and relates to the technical field of robots. The projection method of an embodiment of the application comprises the following steps: performing mapping and recognition on the surrounding environment to identify a plurality of three-dimensional spaces, and marking the coordinates of each three-dimensional space in a map; calculating the area of each three-dimensional space, and calibrating the capacity of each three-dimensional space according to its area; acquiring a projection instruction, and judging whether the projection instruction includes viewer count information; if the projection instruction includes viewer count information, determining the projection space according to the viewer count information and the capacity of each three-dimensional space; if the projection instruction does not include viewer count information, determining the three-dimensional space where the robot is currently located as the projection space; navigating to the projection space according to the coordinates of the three-dimensional space in the map; performing environment recognition on the projection space to determine a projection area; adjusting projection parameters according to the projection area to determine a projection posture; and completing the projection operation according to the projection posture.

Description

Robot projection method and robot
Technical Field
The application relates to the technical field of robots, in particular to a robot projection method and a robot.
Background
More and more intelligent mobile robots are beginning to have projection interaction functionality. In the moving process of the robot, how to quickly and accurately find a proper projection plane and project a clear and stable picture becomes an important problem influencing the projection interaction function of the robot.
Disclosure of Invention
In view of this, the present application provides a robot projection method and a robot, so as to improve the projection interaction function of the robot.
The application provides a robot projection method in a first aspect, the projection method comprising the following steps: performing mapping and recognition on the surrounding environment to identify a plurality of three-dimensional spaces, and marking the coordinates of each three-dimensional space in a map; calculating the area of each three-dimensional space, and calibrating the capacity of each three-dimensional space according to its area, wherein the capacity refers to the number of viewers the three-dimensional space can accommodate; acquiring a projection instruction, and judging whether the projection instruction includes viewer count information; if the projection instruction includes viewer count information, determining the projection space according to the viewer count information and the capacity of each three-dimensional space; if the projection instruction does not include viewer count information, determining the three-dimensional space where the robot is currently located as the projection space; navigating to the projection space according to the coordinates of the three-dimensional space in the map; performing environment recognition on the projection space to determine a projection area; adjusting projection parameters according to the projection area to determine a projection posture; and completing the projection operation according to the projection posture.
A second aspect of the application provides a robot comprising a processor and a memory for storing a computer program or code which, when executed by the processor, implements the projection method of an embodiment of the application.
By performing mapping and recognition on the surrounding environment, then recognizing the environment of the projection space and determining the projection posture by adjusting the projection parameters, the embodiments of the application can quickly and accurately find a suitable projection plane, thereby ensuring the quality of the projection plane and the projection effect.
Drawings
Fig. 1 is a flowchart of a projection method according to an embodiment of the present application.
Fig. 2 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 3 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 4 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 5 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 6 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 7 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 8 is a schematic view of an application scenario according to an embodiment of the present application.
Fig. 9 is a schematic configuration diagram of a robot according to an embodiment of the present application.
Fig. 10 is a schematic structural view of the multi-legged robot according to the embodiment of the present application.
Fig. 11 is an external view schematically showing the multi-legged robot according to the embodiment of the present application.
Description of the main elements
Robot 100
Processor 110
Memory 120
Wall surface 200
Multi-legged robot 300
Mechanical unit 301
Communication unit 302
Sensing unit 303
Interface unit 304
Storage unit 305
Display unit 306
Input unit 307
Control module 308
Power supply 309
Drive plate 3011
Motor 3012
Mechanical structure 3013
Fuselage main body 3014
Leg 3015
Foot 3016
Head structure 3017
Tail structure 3018
Object carrying structure 3019
Saddle structure 3020
Camera structure 3021
Display panel 3061
Touch panel 3071
Input device 3072
Touch detection device 3073
Touch controller 3074
Detailed Description
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist simultaneously, or B exists alone, wherein A and B can be singular or plural. The terms "first," "second," "third," "fourth," and the like in the description and in the claims and drawings of the present application, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
It should be further noted that the methods disclosed in the embodiments of the present application or the methods shown in the flowcharts include one or more steps for implementing the methods, and the execution orders of the steps may be interchanged with each other, and some steps may be deleted without departing from the scope of the claims.
Some terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
1. Three-dimensional space: three-dimensional spaces in various forms, such as offices and meeting rooms in office buildings, classrooms in teaching buildings, and rooms in apartments or houses (e.g., living rooms, studies, kitchens).
2. Projection space: the three-dimensional space screened out from the three-dimensional spaces for robot projection.
3. Projection plane: a plane determined in the projection space for the robot to project onto, including, for example, a wall surface, a floor, or a ceiling in the projection space.
4. Projection area: an area on the projection plane that serves as the projection screen.
5. Projection posture: the posture in which the robot performs projection. The state of the robot can be detected by sensors so that the posture of the robot can be adjusted and controlled. The sensors may include position, attitude, pressure, and acceleration sensors, among others.
6. 3D camera: from the data acquired by a 3D camera, the distance from each point in a 2D image to the camera can be detected; combining each point's coordinates in the 2D image with its corresponding distance yields the three-dimensional space coordinates of that point. The 3D camera can be used for face recognition, gesture recognition, human skeleton recognition, three-dimensional measurement, environment perception, three-dimensional map reconstruction, and the like. Herein, the robot is configured with a camera, including a 3D camera.
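As an illustration of how a depth reading can be combined with 2D pixel coordinates to obtain a 3D point, the following is a minimal sketch assuming a standard pinhole camera model; the intrinsic parameters fx, fy, cx, cy and the example values are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a depth value into camera coordinates.

    Assumes a pinhole model; fx, fy (focal lengths) and cx, cy (principal point)
    are camera intrinsics that the patent does not specify.
    """
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example: a pixel at (320, 240) measured 2.5 m away by the 3D camera
point = pixel_to_3d(320, 240, 2.5, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```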
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application.
Fig. 1 is a flowchart of a projection method according to an embodiment of the present application.
Referring to fig. 1, the projection method is applied to a robot, and the robot is provided with a camera and a projector. The projection method may include the steps of:
S101, performing mapping and recognition on the surrounding environment to identify a plurality of three-dimensional spaces.
In some embodiments, the robot may sense the surrounding environment with a lidar or a camera (e.g., a 3D camera) and map the surrounding environment using a Simultaneous Localization and Mapping (SLAM) technique. The robot starts moving from an unknown position in an unknown environment, localizes itself during movement according to its position estimate and the map, and simultaneously builds an incremental map on the basis of this self-localization, thereby realizing autonomous localization and navigation of the robot.
For example, when the robot is in a house, the robot performs mapping recognition on the environment in the house, and can recognize various three-dimensional spaces in the house, such as a living room, a bedroom, a study room and the like.
And S102, marking the coordinates of each three-dimensional space in the map.
In this embodiment, in the process of map building and identification, the robot can mark the coordinates of each three-dimensional space on the map.
S103, calculating the area of each three-dimensional space, and calibrating the capacity of each three-dimensional space according to the area of each three-dimensional space.
Wherein, the capacity refers to the number of viewers the three-dimensional space can accommodate.
In this embodiment, the robot may calculate the area of each three-dimensional space according to the mapping and recognition result, and then calibrate the correspondence between the area of each three-dimensional space and the number of viewers it can accommodate. For example, in some embodiments, the area of a three-dimensional space and the number of viewers it can accommodate satisfy the following formula:
N m² ≤ S < (N + 1) m²
wherein S is the area of the three-dimensional space, N is the number of viewers the three-dimensional space can accommodate, and N is a positive integer. For example, when the area S of the three-dimensional space satisfies 8 m² ≤ S < 9 m², the three-dimensional space can accommodate 8 viewers. For another example, when the area S satisfies 12 m² ≤ S < 13 m², the three-dimensional space can accommodate 12 viewers. For another example, when the area S satisfies 20 m² ≤ S < 21 m², the three-dimensional space can accommodate 20 viewers.
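A minimal sketch of this calibration rule, assuming areas are given in square meters; the function name and its handling of areas below 1 m² are illustrative, not taken from the patent.

```python
import math

def calibrate_capacity(area_m2: float) -> int:
    """Return the viewer capacity N such that N <= area < N + 1 (in square meters).

    This simply takes the integer part of the area, matching the rule
    N m^2 <= S < (N + 1) m^2 described above.
    """
    if area_m2 < 1:
        return 0  # too small to accommodate any viewer (assumption)
    return math.floor(area_m2)

# Examples: 8.6 m^2 -> 8 viewers, 12.3 m^2 -> 12 viewers, 20.9 m^2 -> 20 viewers
assert calibrate_capacity(8.6) == 8
assert calibrate_capacity(12.3) == 12
```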
And S104, acquiring a projection instruction.
The projection instruction is used for informing the robot to start a projection mode so as to search a projection space.
In some embodiments, the robot may acquire the projection instruction by recognizing a voice/text input, a touch operation, or a gesture of the user, and may also receive the projection instruction from the terminal application.
S105, determining whether the projection instruction includes viewer count information.
In step S105, if the projection instruction includes viewer count information, step S106 is executed. If not, step S107 is executed.
And S106, determining a projection space according to the viewer count information and the capacity of each three-dimensional space.
It is understood that, in step S105, when it is determined that the projection instruction includes viewer count information, step S106 acquires the viewer count information from the projection instruction and determines the projection space according to the acquired viewer count information and the capacity of each three-dimensional space.
For example, when the projection instruction received by the robot includes viewer count information, such as the instruction "help me find a projection space of 8 people", the number of viewers, i.e., "8 people", may be extracted from the projection instruction through keyword or semantic analysis.
If the touch module of the robot is provided with a control for triggering a projection instruction, then when the user triggers the control through a touch operation, the robot prompts the user to input the viewer count information; after the user inputs it, the robot can directly acquire or extract the viewer count information.
The camera of the robot can also acquire viewer count information by recognizing the user's gesture actions. For example, the user may draw "projection" and "10" with gestures; the robot can recognize the projection instruction "find a projection space for 10 people" from the user's gestures through a fuzzy recognition algorithm, and then extract the viewer count information, namely "10 people", from the projection instruction through keyword or semantic analysis.
If an application program for controlling the robot is installed on a terminal, the user may input the projection instruction "find a projection space of 20 people" in the application program. When the robot receives the projection instruction from the application program, the viewer count information, namely "20 people", can be extracted from the projection instruction through keyword or semantic analysis.
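As a rough illustration of the keyword-based extraction described above (not the patent's actual parser), a simple pattern match could look like the sketch below; a real system would rely on semantic analysis as well.

```python
import re
from typing import Optional

def extract_viewer_count(instruction: str) -> Optional[int]:
    """Try to pull a viewer count out of a projection instruction.

    A toy keyword-based extractor: it looks for a number followed by the word
    "people" (e.g., "find a projection space of 8 people"). Returns None when
    the instruction carries no viewer count.
    """
    match = re.search(r"(\d+)\s*people", instruction)
    return int(match.group(1)) if match else None

assert extract_viewer_count("help me find a projection space of 8 people") == 8
assert extract_viewer_count("please project") is None  # no viewer count given
```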
It is understood that after step S106 is performed, step S108 is performed.
And S107, determining the three-dimensional space where the robot is currently located as the projection space.
For example, when the projection instruction received by the robot is, for example, "please project", no viewer count can be extracted from the instruction, so it is determined that the projection instruction does not include viewer count information. In this case, the three-dimensional space where the robot is currently located can be directly determined as the projection space.
And S108, navigating to a projection space according to the coordinates of the three-dimensional space in the map.
In the present embodiment, when the robot determines to use a stereoscopic space as the projection space, it can navigate to the position of the stereoscopic space (i.e., the projection space) according to the coordinates of the stereoscopic space.
And S109, carrying out environment recognition on the projection space to determine a projection area.
In this embodiment, when the robot determines that the projection space is not occupied, the camera may be used to perform environment recognition on the projection space to find and determine a suitable projection area.
And S110, adjusting projection parameters according to the projection area to determine a projection posture.
The projection parameters at least comprise projection height, projection distance and projection angle.
In some embodiments, after the robot determines the projection area, the projection parameters may be adjusted according to the features corresponding to the identified object. For example, when the identified object is a sofa, the robot may assume that the audience is located at the middle of the area in front of the sofa, adjust the projection distance with the audience's viewing distance as a reference, and adjust the projection height with the audience's viewing height as a reference, until a projection posture suitable for viewing is found.
And S111, acquiring the environmental parameters.
The environmental parameters may include, among others, brightness and noise. The robot can test the light intensity of the current environment through the light sensor to acquire the brightness value. The robot may test the noise of the current environment by opening the microphone to obtain a noise value.
And S112, determining whether the environmental parameters are smaller than preset thresholds. If the environmental parameters are smaller than the preset thresholds, step S113 is performed. If not, the process returns to step S110.
Wherein the preset threshold is determined according to the attribute of the projector. For example, the brightness threshold is the maximum light intensity supported by the projector and the noise threshold is the maximum noise supported by the projector.
And when the environmental parameter is smaller than the preset threshold, the robot supports the projection operation in the current environment.
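A schematic check of the environmental parameters against the projector-determined thresholds, as described in steps S111 to S112; the numeric values below are placeholders, not taken from the patent.

```python
def environment_supports_projection(brightness: float, noise: float,
                                    brightness_threshold: float,
                                    noise_threshold: float) -> bool:
    """Return True if both brightness and noise are below the projector's limits.

    The thresholds are determined by the projector's attributes: the maximum
    ambient light intensity and the maximum noise level it supports.
    """
    return brightness < brightness_threshold and noise < noise_threshold

# Placeholder values for illustration only
if environment_supports_projection(brightness=150.0, noise=45.0,
                                   brightness_threshold=300.0, noise_threshold=60.0):
    print("Proceed with projection (step S113)")
else:
    print("Readjust projection parameters (return to step S110)")
```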
And S113, completing the projection operation according to the projection posture.
Wherein the projecting operation may include turning on the projector and starting to project the content.
And S114, responding to an operation instruction of a user, and adjusting at least one of the projection posture, the projection brightness and the volume.
In some embodiments, after the robot completes the projection operation, the user may trigger the operation instruction. The robot may acquire the operation instruction by recognizing a voice/text input, a touch operation, or a gesture action of the user, and may also receive the operation instruction from the terminal application.
For example, the user may click on the head of the robot with a finger to turn the projection brightness down. For example, the projection brightness decreases by 10% for each click of the robot head by the user. The user can press the head of the robot with the palm to increase the projection brightness. For example, the projection brightness increases by 10% for each time the user presses the head of the robot. The user may click on the tail of the robot with a finger to turn down the volume. For example, the volume is reduced by 10% per click of the tail of the robot by the user. The user can press the tail of the robot with the palm to turn up the volume. For example, the volume increases by 10% for each time the user presses the tail of the robot. The user can slide left/right on the head of the robot with fingers to control the robot to move left/right, thereby adjusting the position of the robot.
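The click/press interactions above could be mapped to adjustments roughly as follows; the gesture names and the 10% step come from the example, while the dispatch structure itself is purely illustrative.

```python
# Illustrative mapping of touch gestures to adjustments (10% steps as in the example)
GESTURE_ACTIONS = {
    ("head", "click"): ("brightness", -0.10),
    ("head", "press"): ("brightness", +0.10),
    ("tail", "click"): ("volume", -0.10),
    ("tail", "press"): ("volume", +0.10),
}

def apply_gesture(state: dict, location: str, gesture: str) -> dict:
    """Adjust projection brightness or volume in response to a touch gesture."""
    target, delta = GESTURE_ACTIONS.get((location, gesture), (None, 0.0))
    if target is not None:
        # Clamp the adjusted value to the range [0, 1]
        state[target] = min(1.0, max(0.0, state[target] + delta))
    return state

state = {"brightness": 0.8, "volume": 0.5}
apply_gesture(state, "head", "click")  # brightness decreases by 10 percentage points
```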
The user can also trigger an operation instruction by voice. For example, the user can control the robot to find a projection plane again with a voice command such as "help me find a projection plane again". The user can control the robot to adjust the projection brightness with "turn up the brightness" or "turn down the brightness", and can control the robot to adjust the volume with "turn up the volume" or "turn down the volume".
Referring to fig. 1 and fig. 2, after the step S106 is executed, the projection method may further include the following steps:
S201, determining whether the projection space is occupied.
In step S201, if the projection space is occupied, step S202 is performed. If not, the steps S108 to S114 in fig. 1 are executed in sequence.
In some embodiments, when the robot reaches the position of the projection space, it is possible to view or determine whether there is a person inside the projection space through the camera. When a person is inside the projection space, the robot determines that the projection space is occupied. Otherwise, the robot determines that the projection space is unoccupied.
In other embodiments, if the projection space is a meeting room, the robot may query or determine whether the projection space is occupied by accessing a meeting room reservation system.
And S202, determining whether other three-dimensional spaces meeting the people number condition exist.
In step S202, if it is determined that there is another three-dimensional space satisfying the people number condition, step S203 is executed. If not, go to step S204.
S203, determining a projection space according to the distance from the position of the other three-dimensional space to the current position of the robot, and navigating to the projection space according to the coordinates of the other three-dimensional space in the map.
For example, the robot may query the distances from other stereo spaces to the current position of the robot, and determine a stereo space closest to the current position of the robot as the projection space.
In other embodiments, the robot may also determine the projection space from the history of other stereo spaces. For example, the robot may query the number of times it has used other three-dimensional spaces, determine one three-dimensional space that has been used the most as a projection space, and update the history again after the three-dimensional space has been used this time.
It is understood that the specific implementation of step S203 is substantially the same as step S108, and the detailed description thereof is omitted here.
It is understood that after step S203 is performed, steps S109 to S114 in fig. 1 are performed in sequence.
And S204, stopping the projection work and feeding back the result to the user.
For example, when the robot determines that there is no other three-dimensional space meeting the condition of the number of people, the projection work is stopped, and the user is prompted by voice that "no other projection space suitable for 8 people is found".
Referring to fig. 1 and fig. 3 together, fig. 3 is a sub-flowchart of step S106 in fig. 1. As shown in fig. 3, step S106 may include the following sub-steps:
S301, responding to the projection instruction, and querying the three-dimensional spaces that meet the people number condition.
Wherein, the people number condition means that the number of viewers the three-dimensional space can accommodate is greater than or equal to the number of viewers.
For example, when the three-dimensional space can accommodate 8 viewers and the number of viewers obtained from the projection instruction is 5, the three-dimensional space meets the people number condition. When the three-dimensional space can accommodate 8 viewers and the number of viewers obtained from the projection instruction is 10, the three-dimensional space does not meet the people number condition.
S302, determining a projection space according to the query result.
In some embodiments, when the robot does not find a three-dimensional space that meets the people number condition, it can issue a prompt to inform the user that no such space currently exists. For example, the robot may prompt the user by voice: "No suitable projection space for 8 people was found".
It is understood that, in step S201 in fig. 2, when the robot determines that the projection space is occupied, it may be determined whether there is another stereoscopic space that meets the condition of the number of people according to the query result of step S302.
Referring to fig. 3 and fig. 4 together, fig. 4 is a sub-flowchart of step S302 in fig. 3. As shown in fig. 4, when all the stereo spaces meeting the condition of the number of people are queried, step S302 may include the following sub-steps:
S401, obtaining the coordinates in the map of all three-dimensional spaces meeting the people number condition and the current position coordinates of the robot.
In this embodiment, during the process of map creation and identification, the robot may update the current position coordinates periodically or in real time.
S402, determining the distance from each three-dimensional space meeting the people number condition to the current position of the robot according to the coordinates of each three-dimensional space meeting the people number condition and the coordinates of the current position of the robot.
In this embodiment, the robot calculates the distance between two points (i.e., the three-dimensional space and the robot) on the map by using a plane geometry method according to the coordinates of the two points.
And S403, determining a projection space according to the distance or the historical records of all the stereoscopic spaces meeting the people number condition.
In some embodiments, the robot determines the projection space based on the distance from the three-dimensional space to the current location of the robot. For example, the robot may select a stereo space closest to the current position of the robot as the projection space.
In other embodiments, the robot determines the projection space based on a history of the volumetric space. For example, the robot may select one of the stereo spaces in the history as the projection space of this time. Wherein, the history refers to a history that the stereoscopic space has been used as a projection space. The history may be stored in an internal memory of the robot or in an external memory that can be called by the robot.
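A sketch of the distance-based selection in steps S402 to S403, using the straight-line (Euclidean) distance between map coordinates; the data layout and space names are assumed for illustration only.

```python
import math

def nearest_space(robot_xy, candidate_spaces):
    """Pick the candidate three-dimensional space closest to the robot.

    robot_xy: (x, y) map coordinates of the robot's current position.
    candidate_spaces: dict mapping space name -> (x, y) map coordinates,
    already filtered to those meeting the people number condition.
    """
    def distance(space_xy):
        return math.hypot(space_xy[0] - robot_xy[0], space_xy[1] - robot_xy[1])

    return min(candidate_spaces, key=lambda name: distance(candidate_spaces[name]))

spaces = {"meeting room 2": (4.0, 1.5), "meeting room 3": (10.0, 6.0)}
assert nearest_space((3.0, 2.0), spaces) == "meeting room 2"
```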
Referring to fig. 1 and 5 together, fig. 5 is a sub-flowchart of step S109 in fig. 1. As shown in fig. 5, step S109 may include the following sub-steps:
S501, performing environment recognition on the projection space to determine the projection direction.
In this embodiment, the robot performs environment recognition on the projection space, and can recognize a plane for projection, a projection direction, and an obstacle in the projection direction. The obstacle is an object located between the robot and a plane on which projection is possible, such as a table, a chair, a sofa, and the like.
For example, the robot identifies a piece of wall surface for projection, the direction from the robot to the wall surface is the projection direction, and the robot can also identify a sofa or a seat in the projection direction.
And S502, determining whether a projection plane with a size larger than a preset size exists in the projection direction.
In step S502, if it is determined that a projection plane larger than a preset size exists in the projection direction, step S503 is performed. If not, go to step S504.
Wherein the preset size is determined according to the properties of the projector. For example, the robot determines whether there is a projection plane larger than 120 × 70 square centimeters (cm²) in the projection direction.
S503, determining a projection area on the projection plane.
The size of the projection area is a multiple of the preset size. For example, when the length of the preset size is a centimeter (cm) and the width is b centimeters, the length of the projection area is n × a centimeters, the width of the projection area is n × b centimeters, and n is larger than or equal to 1.
In this embodiment, when the robot determines that a projection plane having a size larger than a preset size exists in the projection direction, a region having a size not smaller than the preset size is divided on the projection plane to serve as a projection region.
And S504, adjusting the rotation angle of the robot to adjust the projection direction.
In this embodiment, when the robot determines that there is no projection plane larger than the preset size in the projection direction, the body may be controlled to rotate to drive the projector to face other directions. Alternatively, the body of the robot is stationary and the projector is controlled to rotate so as to point in the other direction.
In some embodiments, the robot can find all projection planes in the projection space that meet the size requirement by adjusting the rotation angle. Wherein, satisfying the size requirement means that there is a projection plane larger than a preset size in the projection direction.
It is to be understood that adjusting the angle of rotation may include rotating clockwise or counterclockwise in the horizontal direction and/or the vertical direction.
For example, if no projection plane meeting the size requirement exists at the robot's current viewing angle, the robot may rotate 90 degrees clockwise and recognize the current environment again, until a projection plane meeting the size requirement is found or the robot has rotated through 360 degrees. If no projection plane meeting the size requirement is found after rotating 360 degrees, the robot can prompt the user by voice: "No suitable projection area". The 360-degree rotation may cover both the horizontal and vertical directions: the robot may first rotate 360 degrees horizontally and then 360 degrees vertically, or first rotate 360 degrees vertically and then 360 degrees horizontally.
After all projection planes meeting the size requirement have been acquired, the robot may randomly select one of them, or prompt the user to select one. For example, the robot may voice prompt the user: "Please select the projection plane".
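A schematic outline of the rotate-and-search behaviour in steps S502 to S504; the 90-degree step follows the example above, and robot.detect_planes(), robot.rotate(), and robot.say() are hypothetical placeholders standing in for the robot's camera-based recognition and motion interfaces, not a real API.

```python
def find_projection_planes(robot, preset_size, step_degrees=90):
    """Rotate the robot in fixed steps and collect planes that meet the size requirement.

    preset_size: (min_width_cm, min_height_cm) determined by the projector's properties.
    robot.detect_planes() is assumed to return planes with width/height attributes in cm.
    """
    suitable = []
    for _ in range(0, 360, step_degrees):
        for plane in robot.detect_planes():
            if plane.width >= preset_size[0] and plane.height >= preset_size[1]:
                suitable.append(plane)
        robot.rotate(step_degrees)  # e.g. clockwise in the horizontal direction
    if not suitable:
        robot.say("No suitable projection area was found")
    return suitable
```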
Referring to fig. 5 and fig. 6 together, fig. 6 is a sub-flowchart of step S501 in fig. 5. As shown in fig. 6, step S501 may include the following sub-steps:
S601, when a preset identification object exists in the projection space, acquiring the features corresponding to the identification object.
In some embodiments, the robot queries the history to count the obstacle objects that have appeared in previously used projection spaces, and marks the obstacle objects that appear more often than a preset threshold as identification objects. In other words, an identification object is an obstacle object that frequently appears in the projection space.
The robot may record the identification object and its corresponding characteristics. For example, features of a sofa or seat include a seat cushion and a backrest, the side of the backrest facing the seat cushion being the front of the sofa or seat.
S602, determining the projection direction according to the characteristics corresponding to the identification object.
For example, if the identification object is a sofa, when the plane available for projection is a plurality of walls, the robot may use the direction of the front of the sofa facing the wall as the projection direction.
Referring to fig. 1 and 7 together, fig. 7 is a sub-flowchart of step S112 in fig. 1. As shown in fig. 7, when the robot determines the projection pose, step S112 may include the following sub-steps:
S701, determining whether the brightness is smaller than a preset brightness threshold. If the brightness is smaller than the preset brightness threshold, step S702 is executed. If not, the process returns to step S110 in fig. 1.
S702, determining whether the noise is smaller than a preset noise threshold value. If the noise is determined to be less than the preset noise threshold, step S113 in fig. 1 is executed. If not, the process returns to step S110 in fig. 1.
In other embodiments, the robot may also determine whether the noise is smaller than a preset noise threshold, and then determine whether the brightness is smaller than a preset brightness threshold. And when the brightness is smaller than the brightness threshold value and the noise is smaller than the noise threshold value, the robot finishes the projection operation according to the projection posture. Otherwise, the robot readjusts the projection parameters to adjust the projection attitude.
The projection method in the embodiment of the present application is described below with reference to one of the application scenarios.
For example, please refer to fig. 8, and fig. 8 is a schematic view of a scene in which the projection method is applied to an office area according to the embodiment of the present application. In fig. 8, the solid arrow lines indicate the movement locus of the robot 100, and the dotted arrow lines indicate the projected line of the robot 100. The robot 100 is provided with a camera (not shown) and a projector (not shown).
As shown in fig. 8, when the robot 100 is in an office area, there are a plurality of conference rooms (e.g., conference rooms 1 to 4) in the office area. First, the robot 100 performs mapping and recognition on the office area using a camera to identify the conference rooms, and marks the coordinates of each conference room in a map. Then, the robot determines the number of viewers each conference room can accommodate according to its area; for example, when the area of a conference room satisfies 8 m² ≤ S < 9 m², that conference room is determined to accommodate 8 viewers. After the robot 100 receives the voice command "help me find a projection space of 8 people", it looks for a conference room that can accommodate no fewer than 8 viewers according to the viewer count information in the voice command. When the robot finds such a conference room (e.g., conference room 2), it navigates to conference room 2 according to the coordinates of that conference room. Once in conference room 2, the camera recognizes the environment inside the room, finds a suitable projection plane, such as the wall surface 200 in conference room 2, and determines the projection area on that plane. When the robot takes an area on the wall surface 200 as the projection area, it adjusts the projection parameters of the projector to determine the projection posture and completes the projection operation according to that posture. For example, the robot adjusts the projection height, projection distance, and projection angle of the projector so that the projector projects a clear and stable image on the wall surface 200.
It can be understood that the projection method provided by this embodiment identifies a plurality of stereo spaces and obtains coordinates of the stereo spaces by mapping and identifying the surrounding environment, determines a projection space from the plurality of stereo spaces according to a projection instruction, and navigates to the projection space according to the coordinates. And then, carrying out environment recognition on the projection space to determine a projection area available for projection, and adjusting projection parameters according to the projection area to determine a projection posture. And finally, completing the projection operation according to the projection posture. Therefore, the appropriate projection plane can be quickly and accurately found, and the quality of the projection plane is ensured, so that the projection effect is ensured.
Fig. 9 is a schematic configuration diagram of a robot 100 according to an embodiment of the present application.
Referring to fig. 9, the robot 100 includes a processor 110 and a memory 120. The memory 120 is used for storing a computer program or code, and the processor 110 can call the computer program or code stored in the memory 120 to execute: performing mapping and recognition on the surrounding environment to identify a plurality of three-dimensional spaces, and marking the coordinates of each three-dimensional space in a map; calculating the area of each three-dimensional space, and calibrating the capacity of each three-dimensional space according to its area, wherein the capacity refers to the number of viewers the three-dimensional space can accommodate; acquiring a projection instruction, and judging whether the projection instruction includes viewer count information; if the projection instruction includes viewer count information, determining the projection space according to the viewer count information and the capacity of each three-dimensional space; if the projection instruction does not include viewer count information, determining the three-dimensional space where the robot is currently located as the projection space; navigating to the projection space according to the coordinates of the three-dimensional space in the map; performing environment recognition on the projection space to determine a projection area; adjusting projection parameters according to the projection area to determine a projection posture; and completing the projection operation according to the projection posture.
It is understood that the robot 100 is capable of implementing all the method steps of the above method embodiments, and the description of the same method steps and advantages is omitted here.
The configuration illustrated in the embodiment of the present application is not intended to specifically limit the robot. In other embodiments of the present application, the robot may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components.
For example, referring to fig. 10, fig. 10 is a schematic diagram of the hardware structure of a multi-legged robot 300 according to an embodiment of the present application. In the embodiment shown in fig. 10, the multi-legged robot 300 includes a mechanical unit 301, a communication unit 302, a sensing unit 303, an interface unit 304, a storage unit 305, a display unit 306, an input unit 307, a control module 308, and a power supply 309. The various components of the multi-legged robot 300 can be connected in any manner, including wired or wireless connections, and the like.
It is to be understood that the specific structure of the multi-legged robot 300 shown in fig. 10 does not constitute a limitation to the multi-legged robot 300, the multi-legged robot 300 may include more or less components than those shown, some components do not belong to the essential constitution of the multi-legged robot 300, and some components may be omitted or combined as necessary within the scope of not changing the essence of the application.
The various components of the multi-legged robot 300 are described in detail below with reference to fig. 10:
The mechanical unit 301 is the hardware of the multi-legged robot 300. As shown in fig. 10, the mechanical unit 301 may include a driving plate 3011, a motor 3012, and a mechanical structure 3013. As shown in fig. 11, fig. 11 is an external view of the multi-legged robot 300. The mechanical structure 3013 may include a fuselage main body 3014, extendable legs 3015, and feet 3016; in other embodiments, the mechanical structure 3013 may further include an extendable robotic arm (not shown), a rotatable head structure 3017, a swingable tail structure 3018, an object carrying structure 3019, a saddle structure 3020, a camera structure 3021, and so on. It should be noted that each component module of the mechanical unit 301 may be one or multiple and may be configured according to the specific situation; for example, the number of legs 3015 may be 4, with each leg 3015 configured with 3 motors 3012, so that the number of corresponding motors 3012 is 12.
The communication unit 302 can be used for receiving and transmitting signals, and can also communicate with other devices through a network, for example, to receive command information sent by a remote controller or other multi-legged robot 300 to move in a specific direction at a specific speed according to a specific gait, and transmit the command information to the control module 308 for processing. The communication unit 302 includes, for example, a WiFi module, a 4G module, a 5G module, a bluetooth module, an infrared module, etc.
The sensing unit 303 is used for acquiring information data of the environment around the multi-legged robot 300 and monitoring parameter data of each component inside the multi-legged robot 300, and sending the information data to the control module 308. The sensing unit 303 includes various sensors such as a sensor for acquiring surrounding environment information: laser radar (for long-range object detection, distance determination, and/or velocity value determination), millimeter wave radar (for short-range object detection, distance determination, and/or velocity value determination), a camera, an infrared camera, a Global Navigation Satellite System (GNSS), and the like. Such as sensors monitoring the various components inside the multi-legged robot 300: an Inertial Measurement Unit (IMU) (for measuring values of velocity, acceleration and angular velocity values), a sole sensor (for monitoring sole impact point position, sole attitude, ground contact force magnitude and direction), a temperature sensor (for detecting component temperature). As for the other sensors such as the load sensor, the touch sensor, the motor angle sensor, and the torque sensor, which can be configured in the multi-legged robot 300, the detailed description thereof is omitted.
The interface unit 304 can be used to receive inputs from external devices (e.g., data information, power, etc.) and transmit the received inputs to one or more components within the multi-legged robot 300, or it can be used to output information (e.g., data information, power, etc.) to external devices. The interface unit 304 may include a power port, a data port (e.g., a USB port), a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, and the like.
The storage unit 305 is used to store software programs and various data. The storage unit 305 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system program, a motion control program, an application program (such as a text editor), and the like; the data storage area may store data generated by the multi-legged robot 300 in use (such as various sensing data acquired by the sensing unit 303, log file data), and the like. Further, the storage unit 305 may include high speed random access memory, and may also include non-volatile memory, such as disk memory, flash memory, or other volatile solid state memory.
The display unit 306 is used to display information input by the user or information provided to the user. The Display unit 306 may include a Display panel 3061, and the Display panel 3061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The input unit 307 may be used to receive input numeric or character information. Specifically, the input unit 307 may include a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on or near the touch panel 3071 using a palm, a finger, or a suitable accessory), and drive the corresponding connection device according to a preset program. The touch panel 3071 may include two parts of a touch detection device 3073 and a touch controller 3074. The touch detection device 3073 detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller 3074; the touch controller 3074 receives touch information from the touch sensing device 3073, converts it to touch point coordinates, and sends the touch point coordinates to the control module 308, and can receive and execute commands from the control module 308. The input unit 307 may include other input devices 3072 in addition to the touch panel 3071. In particular, other input devices 3072 may include, but are not limited to, one or more of a remote control handle or the like, and are not limited thereto.
Further, the touch panel 3071 can cover the display panel 3061, and when the touch panel 3071 detects a touch operation on or near the touch panel, the touch operation is transmitted to the control module 308 to determine the type of the touch event, and then the control module 308 provides a corresponding visual output on the display panel 3061 according to the type of the touch event. Although in fig. 10, the touch panel 3071 and the display panel 3061 are implemented as two independent components to implement the input and output functions, respectively, in some embodiments, the touch panel 3071 and the display panel 3061 may be integrated to implement the input and output functions, which is not limited herein.
The control module 308 is a control center of the multi-legged robot 300, connects the respective components of the entire multi-legged robot 300 by various interfaces and lines, and performs overall control of the multi-legged robot 300 by operating or executing a software program stored in the storage unit 305 and calling up data stored in the storage unit 305.
The power supply 309 is used to supply power to the various components, and the power supply 309 may include a battery and a power supply control board for controlling battery charging, discharging, and power consumption management functions. In the embodiment shown in fig. 10, the power source 309 is electrically connected to the control module 308, and in other embodiments, the power source 309 may be electrically connected to the sensing unit 303 (such as a camera, a radar, a sound box, etc.) and the motor 3012, respectively. It should be noted that each component may be individually connected to a different power source 309 or powered by the same power source 309.
On the basis of the above embodiments, in some embodiments, specifically, the communication connection with the multi-legged robot 300 can be performed through a terminal device, when the terminal device communicates with the multi-legged robot 300, the terminal device can transmit instruction information to the multi-legged robot 300, the multi-legged robot 300 can receive the instruction information through the communication unit 302, and in case of receiving the instruction information, the instruction information can be transmitted to the control module 308, so that the control module 308 can process the target velocity value according to the instruction information. Terminal devices include, but are not limited to: the mobile phone, the tablet computer, the server, the personal computer, the wearable intelligent device and other electrical equipment with the image shooting function.
The instruction information may be determined according to a preset condition. In one embodiment, the multi-legged robot 300 can include a sensing unit 303, and the sensing unit 303 can generate instruction information according to the current environment in which the multi-legged robot 300 is located. The control module 308 can determine whether the current velocity value of the multi-legged robot 300 satisfies the corresponding preset condition according to the instruction information. If yes, the current speed value and the current gait movement of the multi-legged robot 300 are maintained; if not, the target velocity value and the corresponding target gait are determined according to the corresponding preset conditions, so that the multi-legged robot 300 can be controlled to move at the target velocity value and the corresponding target gait. The environmental sensors may include temperature sensors, air pressure sensors, visual sensors, sound sensors. The instruction information may include temperature information, air pressure information, image information, and sound information. The communication mode between the environmental sensor and the control module 308 may be wired communication or wireless communication. The manner of wireless communication includes, but is not limited to: wireless network, mobile communication network (3G, 4G, 5G, etc.), bluetooth, infrared.
It is understood that the multi-legged robot 300 is capable of implementing all the method steps of the above-described method embodiments, and the same method steps and advantages will not be described herein again.
The embodiments of the present application have been described in detail with reference to the drawings, but the present application is not limited to the embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present application.

Claims (11)

1. A robotic projection method, the method comprising:
carrying out map building and identification on the surrounding environment so as to identify a plurality of three-dimensional spaces and marking the coordinates of each three-dimensional space in a map;
calculating the area of each three-dimensional space, and calibrating the capacity of each three-dimensional space according to the area of the three-dimensional space, wherein the capacity refers to the number of viewers the three-dimensional space can accommodate;
acquiring a projection instruction, and judging whether the projection instruction comprises viewer count information; if the projection instruction comprises the viewer count information, determining a projection space according to the viewer count information and the capacity of each three-dimensional space; if the projection instruction does not comprise the viewer count information, determining the three-dimensional space where the robot is currently located as the projection space;
navigating to the projection space according to the coordinate of the three-dimensional space in a map;
performing environment identification on the projection space to determine a projection area;
adjusting projection parameters according to the projection area to determine a projection posture;
and finishing the projection operation according to the projection posture.
2. A robotic projection method as claimed in claim 1, wherein said determining a projection space according to said viewer count information and the capacity of each of said three-dimensional spaces comprises:
responding to the projection instruction, and querying three-dimensional spaces meeting a people number condition, wherein the people number condition is that the number of viewers a three-dimensional space can accommodate is greater than or equal to the number of viewers;
and determining the projection space according to the query result.
3. A robotic projection method as claimed in claim 2, wherein said determining the projection space from the query result comprises:
when all the three-dimensional spaces meeting the number condition are inquired, acquiring coordinates of all the three-dimensional spaces meeting the number condition in a map and current position coordinates of the robot;
determining the distance from each three-dimensional space meeting the people number condition to the current position of the robot according to the coordinates of each three-dimensional space meeting the people number condition and the coordinates of the current position of the robot;
and determining the projection space according to the distance or the historical records of all the three-dimensional spaces meeting the people number condition.
4. A robotic projection method as claimed in claim 2, wherein after said determining a projection space according to said viewer count information and the capacity of each of said three-dimensional spaces, the method further comprises:
determining whether the projection space is occupied;
and when the projection space is not occupied, navigating to the projection space according to the coordinate of the three-dimensional space in the map.
5. A robotic projection method as claimed in claim 4, wherein after said determining a projection space according to said viewer count information and the capacity of each of said three-dimensional spaces, the method further comprises:
when the projection space is occupied, judging whether other three-dimensional spaces meeting the number of people exist or not;
if other three-dimensional spaces meeting the number of people exist, determining the projection space according to the distance from the positions of the other three-dimensional spaces to the current position of the robot, and navigating to the projection space according to the coordinates of the other three-dimensional spaces in a map;
and if no other three-dimensional space meeting the number condition exists, stopping the projection work and feeding back the result to the user.
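Claims 4 and 5 add an occupancy check with a fallback: navigate only to a free space, otherwise try the next qualifying space, and stop with feedback when none remains. A sketch under the assumption that occupancy sensing and user feedback are available as callbacks (is_occupied, notify_user are hypothetical names):

```python
def select_unoccupied_space(candidates, robot_coords, is_occupied, notify_user):
    """is_occupied(space) and notify_user(msg) are assumed callbacks."""
    by_distance = sorted(
        candidates,
        key=lambda s: (s.coords[0] - robot_coords[0]) ** 2
                    + (s.coords[1] - robot_coords[1]) ** 2,
    )
    for space in by_distance:
        if not is_occupied(space):
            return space  # caller navigates to this space's map coordinates
    notify_user("No unoccupied space meets the viewer count; projection stopped.")
    return None
```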
6. A robot projection method as claimed in claim 1, wherein said performing environment recognition on the projection space to determine a projection area comprises:
performing environment recognition on the projection space to determine a projection direction;
determining whether a projection plane larger than a preset size exists in the projection direction;
when a projection plane larger than the preset size exists in the projection direction, determining the projection area on the projection plane;
and when no projection plane larger than the preset size exists in the projection direction, adjusting the rotation angle of the robot to adjust the projection direction.
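Claim 6 describes a scan: look for a projection plane larger than a preset size in the current direction, and rotate the robot when none is found. A sketch assuming a perception call detect_plane(heading) that returns the plane's width and height (or None) and a motion call rotate(step_deg); both names and the size values are assumptions:

```python
MIN_PLANE_W, MIN_PLANE_H = 1.0, 0.6   # assumed preset size, in metres
ROTATION_STEP_DEG = 30                # assumed rotation increment

def find_projection_area(detect_plane, rotate, max_turns=12):
    """Return (heading, plane) once a large-enough plane is found, else (None, None)."""
    heading = 0
    for _ in range(max_turns):
        plane = detect_plane(heading)          # e.g. (width_m, height_m) or None
        if plane and plane[0] >= MIN_PLANE_W and plane[1] >= MIN_PLANE_H:
            return heading, plane              # projection area lies on this plane
        heading = (heading + ROTATION_STEP_DEG) % 360
        rotate(ROTATION_STEP_DEG)              # adjust the projection direction
    return None, None
```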
7. A robot projection method as claimed in claim 6, wherein said performing environment recognition on the projection space to determine a projection direction comprises:
when a preset identification object exists in the projection space, acquiring the feature corresponding to the identification object;
and determining the projection direction according to the feature corresponding to the identification object.
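Claim 7 lets a recognised landmark decide the projection direction before any rotation scan. The mapping from object to direction below is purely illustrative; the patent does not specify which objects or directions are preset.

```python
PREFERRED_DIRECTION = {
    "sofa": "wall_facing_sofa",
    "bed": "ceiling",
    "tv_cabinet": "wall_behind_cabinet",
}

def direction_from_landmark(detected_objects):
    """Return a projection direction implied by the first known landmark."""
    for obj in detected_objects:               # e.g. ["sofa", "lamp"]
        if obj in PREFERRED_DIRECTION:
            return PREFERRED_DIRECTION[obj]
    return None                                # fall back to the rotation scan
```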
8. A robot projection method as claimed in claim 1, wherein the projection parameters comprise at least a projection height, a projection distance and a projection angle.
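For reference, the projection parameters listed in claim 8 can be grouped into one small structure; the field names and units are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ProjectionParams:
    height_m: float      # projection height
    distance_m: float    # projection distance to the projection plane
    angle_deg: float     # projection angle (pitch of the optical axis)
```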
9. A robot projection method as claimed in claim 1, wherein after said adjusting projection parameters according to the projection area to determine a projection pose, the method further comprises:
acquiring environmental parameters, wherein the environmental parameters comprise brightness and noise;
determining whether the brightness is less than a preset brightness threshold;
determining whether the noise is less than a preset noise threshold;
and when the brightness is less than the brightness threshold and the noise is less than the noise threshold, performing the projection operation according to the projection pose.
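Claim 9 gates the projection on ambient brightness and noise measured after the pose is set. A minimal check, with threshold values chosen only for illustration:

```python
BRIGHTNESS_THRESHOLD_LUX = 150   # assumed preset brightness threshold
NOISE_THRESHOLD_DB = 55          # assumed preset noise threshold

def ready_to_project(brightness_lux, noise_db):
    """Project only when the room is both dark enough and quiet enough."""
    return brightness_lux < BRIGHTNESS_THRESHOLD_LUX and noise_db < NOISE_THRESHOLD_DB
```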
10. A robot projection method as claimed in claim 1, wherein after said performing the projection operation according to the projection pose, the method further comprises:
in response to an operation instruction from a user, adjusting at least one of the projection pose, the projection brightness and the volume.
11. A robot, characterized in that the robot comprises a processor and a memory,
wherein the memory is configured to store a computer program or code that, when executed by the processor, implements the robot projection method of any one of claims 1 to 10.
CN202210225542.3A 2022-03-09 2022-03-09 Robot projection method and robot Active CN114603557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210225542.3A CN114603557B (en) 2022-03-09 2022-03-09 Robot projection method and robot

Publications (2)

Publication Number Publication Date
CN114603557A true CN114603557A (en) 2022-06-10
CN114603557B CN114603557B (en) 2024-03-12

Family

ID=81861373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210225542.3A Active CN114603557B (en) 2022-03-09 2022-03-09 Robot projection method and robot

Country Status (1)

Country Link
CN (1) CN114603557B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024149018A1 (en) * 2023-01-13 2024-07-18 美的集团(上海)有限公司 Game projection method, game projection apparatus, readable storage medium and robot

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1413324A (en) * 1972-04-20 1975-11-12 Captain Int Ind Ltd Apparatus and methods for monitoring the availability status of guest rooms in hotels and the like
JP2002099045A (en) * 2000-09-26 2002-04-05 Minolta Co Ltd Display device and method
JP2008009136A (en) * 2006-06-29 2008-01-17 Ricoh Co Ltd Image projection device
KR20090000637A (en) * 2007-03-13 2009-01-08 주식회사 유진로봇 Mobile intelligent robot having function of contents provision and location guidance
CN104915903A (en) * 2015-05-29 2015-09-16 深圳走天下科技有限公司 Intelligent automatic room distribution device and method
US20170330495A1 (en) * 2015-02-03 2017-11-16 Sony Corporation Information processing apparatus, information processing method, and program
KR20180003269A (en) * 2016-06-30 2018-01-09 엘지전자 주식회사 Beam projector and operating method thereof
CN109996050A (en) * 2017-12-29 2019-07-09 深圳市优必选科技有限公司 Control method and control device of projection robot
CN210955065U (en) * 2019-11-18 2020-07-07 南京菲尔德物联网有限公司 Intelligent hotel box recommendation device
CN111476839A (en) * 2020-03-06 2020-07-31 珠海格力电器股份有限公司 Method, device and equipment for determining projection area and storage medium
US20200404232A1 (en) * 2019-06-20 2020-12-24 Lg Electronics Inc. Method for projecting image and robot implementing the same

Also Published As

Publication number Publication date
CN114603557B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
US11961285B2 (en) System for spot cleaning by a mobile robot
US11126257B2 (en) System and method for detecting human gaze and gesture in unconstrained environments
US20240118700A1 (en) Mobile robot and control method of mobile robot
US20230280743A1 (en) Mobile Robot Cleaning System
CN108885459B (en) Navigation method, navigation system, mobile control system and mobile robot
US20210260773A1 (en) Systems and methods to control an autonomous mobile robot
CN114847803B (en) Positioning method and device of robot, electronic equipment and storage medium
US9552056B1 (en) Gesture enabled telepresence robot and system
WO2019144541A1 (en) Cleaning robot
US20200088524A1 (en) Airport guide robot and operation method therefor
EP3527935B1 (en) Context-based depth sensor control
US20190351558A1 (en) Airport robot and operation method therefor
KR20180038879A (en) Robot for airport and method thereof
JP2007017414A (en) Position management system and position management program
CN113116224A (en) Robot and control method thereof
CN114603557B (en) Robot projection method and robot
CN114800535B (en) Robot control method, mechanical arm control method, robot and control terminal
US10889001B2 (en) Service provision system
KR20180040907A (en) Airport robot
JP2021151694A (en) Systems for measuring location using robots with deformable sensors
US11009887B2 (en) Systems and methods for remote visual inspection of a closed space
JP2021162607A (en) Display system, information processing apparatus, and display control method for display system
CN115731349A (en) Method and device for displaying house type graph, electronic equipment and storage medium
JP7354528B2 (en) Autonomous mobile device, method and program for detecting dirt on lenses of autonomous mobile device
JP7492756B2 (en) Radio wave source terminal position detection system and radio wave source terminal position detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant