Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
In the description of the present invention, unless explicitly stated or limited otherwise, the terms "connected," "coupled," and "fixed" are to be construed broadly: they may, for example, denote a fixed connection, a detachable connection, or an integral formation; a mechanical connection or an electrical connection; a direct connection or an indirect connection through an intervening medium; or a communication or interaction relationship between two elements. The specific meanings of the above terms in the present invention will be understood by those of ordinary skill in the art on a case-by-case basis.
In the present invention, unless expressly stated or limited otherwise, a first feature "above" or "below" a second feature may include the first and second features being in direct contact, as well as the first and second features not being in direct contact but contacting each other through additional features therebetween. Moreover, a first feature being "above," "over," or "on" a second feature includes the first feature being directly above or obliquely above the second feature, or simply indicates that the first feature is at a higher level than the second feature. A first feature being "under," "below," or "beneath" a second feature includes the first feature being directly under or obliquely below the second feature, or simply indicates that the first feature is at a lower level than the second feature.
In the description of the present embodiment, the terms "upper," "lower," "right," and other indications of orientation or positional relationship are based on the orientation or positional relationship shown in the drawings, and are merely for convenience of description and simplicity of operation; they do not indicate or imply that the apparatus or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the invention. Furthermore, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be understood as indicating or implying relative importance.
The crown-grabbing service robot takes the form of a humanoid athlete in a high-platform skiing crown-grabbing scene, intended to inspire the user to build up courage and strive in the present moment. Colors can be sprayed on the surfaces of the athlete to form athletes' clothing of different countries, so that the product is closer to the user's emotions; the need for individualized product forms is thereby reduced, which facilitates standardization and cost reduction of the product.
Fig. 1 is a schematic structural diagram of a crown-grabbing service robot according to some embodiments of the present invention. Fig. 2 is a schematic structural view of a connection assembly of the crown-grabbing service robot according to some embodiments of the present invention. Referring to Figs. 1 and 2, the crown-grabbing service robot includes a head 100, a body 200, a connection assembly 300, a camera rotation module 400, a rotating arm lamp assembly 500, a leg assembly 600, and a functional base 700. The head 100 incorporates an illumination lamp and/or a micro-projection module, and the body 200 incorporates a radar sensor to sense the space as well as human bodies and behavior. The head 100 is detachably coupled to the body 200 by the connection assembly 300. The camera rotation module 400 is rotatably connected to the body 200 to perform video recognition or monitoring of the environment, people, or things. The rotating arm lamp assembly 500 is rotatably connected to the body 200 and provides illumination according to the environment and the user's gestures, behavior, or physical signs; the illumination can be divided into downward-output health lighting and upward-output emergency, night, scene, or atmosphere lighting. The leg assembly 600 is internally provided with sensors for sensing the environment and the user's gestures or physical signs. The leg assembly 600 is disposed on the underside of the body 200, the functional base 700 is disposed on the underside of the leg assembly 600, and the functional base 700 is provided with a sled 620 corresponding to the leg assembly 600.
Fig. 3 is a schematic diagram illustrating an exploded structure of the head of a crown-grabbing service robot in some embodiments of the invention. Referring to Fig. 3, in particular, the head 100 may have a cubic, elliptical, or irregular shape as a whole; goggles 110 may be provided on its outer surface to simulate the face of a skier, and a helmet light 120 may be provided on its top to evoke the skiing sport. The back of the head 100 may also be detachably connected with a sealing plate 130 to facilitate maintenance. The left and right sides of the head 100 and the sealing plate 130 may be provided with a plurality of heat dissipation holes to ensure heat dissipation efficiency; the holes may be strip-shaped, circular, or of other forms, and the sides of the head 100 may further carry corresponding decorative patterns. The specific heat dissipation holes may be designed according to actual user requirements, and the present invention is not limited thereto. Because the head 100 and the body 200 are detachably connected, the requirements of different users for different grades and scenes can be met, and the user's interactive experience can be improved. The intelligent LED lamp at the front of the head provides, besides normal scene lighting, auxiliary lighting for camera video recognition and monitoring, solving the problem of space lighting during multi-scene interactions such as live broadcasting, video interaction, and illegal-intrusion monitoring. The head 100 adopts a "barrel-shaped" base-and-cover-plate design, which reduces polygonal splicing and helps improve the attractiveness, firmness, and standardization of the product.
Fig. 4 is a schematic diagram illustrating an exploded structure of the body of a crown-grabbing service robot in some embodiments of the invention. Referring to Fig. 4, the body 200 may be generally in a "1" shape, with the connection assembly 300 provided at its top for mounting the head 100, the rotating arm lamp assembly 500 provided at its sides, the leg assembly 600 provided at its bottom, and a "7"-shaped mounting plate 210 provided at its rear. The built-in radar sensor is used for sensing the space, human bodies, and behavior; this design is the key sensor arrangement distinguishing the robot from traditional hardware. By recognizing the behavior and demands of users in the space, the robot actively outputs functions and services to them, so that adding a single piece of hardware satisfies more user needs and greatly increases the robot's usage frequency. Combined with the robot's complete functions and content, this improves user stickiness and lays a foundation for mining user value and for platform operation services in community digital operation. The camera rotation module 400 may be disposed at the front side of the body 200, near the end adjacent to the head 100. The camera rotation module 400 may be rotated up or down to recognize or monitor different environments, scenes, people, or things, and the camera module signal may be directly physically cut off through a privacy switch to protect the user's privacy. The body 200 may consist of a "U"-shaped body and a back cover that together form a hollow interior cavity.
The internal cavity of the body 200 may house built-in sensors, which may include a radar sensor, a magnetic sensor, magnets, a wireless communication antenna, and the like. The built-in wireless antenna helps improve the wireless communication coverage, and the magnetic sensor senses the folded state of the rotating arm lamp assembly 500 and the state of the camera rotation module 400 and outputs corresponding control instructions. The body 200 may further be provided with corresponding slots serving as heat dissipation slots and heat dissipation holes; the heat dissipation slots may be disposed near the radar sensor, and the heat dissipation holes may be disposed on the two sides and the back of the body 200, so as to facilitate heat dissipation of the radar sensor and temperature isolation between the radar sensor and the leg assembly 600.
The rotating arm lamp assembly 500, the leg assembly 600, and the functional base 700 can be provided with corresponding functions as required. For example, a plurality of groups of LED lamps, such as a night lamp, a blue-light-free and flicker-free health lamp, a color-changing lamp, and an emergency lamp, are arranged in the rotating arm lamp assembly 500; lighting is automatically turned on or off based on the data of the sensors in the body 200, the leg assembly 600, the connection assembly 300, and the functional base 700, combined with factors such as time, environment, and space, when the user's behavior, gestures, habits, falls, state, or presence is sensed in the space. The functional base 700 and the leg assembly 600 are provided with physical-sign sensors; when the user's behavior, presence, or state is sensed in the space, physical-sign detection actively reminds or cares for the user in combination with factors such as time, environment, and health.
The crown-grabbing service robot takes the crown-grabbing scene of a humanoid athlete's high-platform movement as its product form, giving the product an inspirational meaning that motivates the user. Meanwhile, through structural innovation, it achieves the fusion and integration of multiple intelligent modules and sensors, so that it can sense the environment, physical signs, states, and user behaviors, habits, and demands, and realize multi-scene output on its own. A single-product robot can thus address many user pain points, such as inspiration; physical-sign, environment, behavior, habit, and state sensing; home safety; fall and emergency alarming; visual intercom; video monitoring and OCR (Optical Character Recognition) identification; health files; machine-side and remote inquiry; health supervision and management; home control; intelligent projection and screen casting; intelligent speaker; intelligent companionship; entertainment interaction; somatosensory interaction; intelligent illumination; eye-protection health; supervised learning; native-language-environment interactive learning; correction of bad learning habits; communication coverage; privacy protection; and data security. It also provides command-free active intelligent service, solving the problems of traditional intelligent systems in which user needs are many and varied, installation and retrofitting are difficult, and active, high-frequency intelligence is hard to realize. Meanwhile, business-model innovation is carried out based on the robot, so that users experience the services of the intelligent-community digital operation platform and continuously generate repeat consumption; once operators have stable income, they make more investment and perfect the services, thus cyclically serving the user's community, life, O2O, housekeeping, safety, health, entertainment, property, and other services. As the services mature and become perfect, operators no longer invest and serve without return, so the ecological cycle of the intelligent-community industry can develop, and the life, work, study, entertainment, and home life of home users can become easier, safer, and more intelligent.
The crown-grabbing service robot further comprises an AI core processor; a storage and expansion storage unit, an input unit, an output unit, and a communication unit, all communicatively connected with the AI core processor; and a power supply unit electrically connected with the AI core processor, the storage and expansion storage unit, the input unit, the output unit, and the communication unit, respectively. The input unit includes, but is not limited to, the radar sensor and the built-in sensors.
Specifically, the input unit includes, but is not limited to, geomagnetic, triaxial gyroscope, health, body temperature, air environment, distance, gesture, radar, camera, microphone array, privacy switch, status, tamper switch, magnetic force, touch input, and other sensors. With multiple built-in sensors, the robot can comprehensively collect the physical signs, environment, and state data of a household as well as the behavior, habits, and demands of people in the indoor space, so that it can actively provide corresponding functions and services for users, which helps increase the robot's usage frequency and user stickiness. Meanwhile, the robot can promptly judge its own state, creating a basis for providing services to users accurately.
The communication unit includes, but is not limited to, dual-band Wi-Fi, Bluetooth, dual LAN, RF infrared, an expandable carrier PLC module, an expandable Zigbee or data module, an expandable LoRa module, an expandable 4G/5G module, and the like. The multi-network communication module design, combined with the robot's structure, conveniently lets the robot serve as a home communication gateway. The dual-LAN-interface and dual-band-Wi-Fi design makes full use of the fact that a desktop scene is close to the home's wired communication interface to provide wireless coverage for the room, saving users the cost of wireless coverage, avoiding the unstable or absent signal of Wi-Fi relays in large households, and solving the communication connection of multiple desktop communication devices. With the robot's active intelligence, intelligent management of the wireless coverage can also be realized, correcting the user's usage habits and protecting network security. Meanwhile, the expandable communication module design allows users to customize modules according to their own needs, saving cost, meeting personalized communication coverage requirements, and reducing the robot's selling price and production cost.
The output unit includes, but is not limited to, DO signals, a display screen, a miniature projector head, multi-channel LED lamps, a speaker, output interfaces, and the like. The multi-scene output design helps address users' multiple needs without other matched system outputs, realizing matching-free installation and helping the system functions and the robot land in a standardized way. At the same time, multi-scene output increases the user's usage frequency and stickiness and helps improve the robot's entertainment interactivity and user experience.
The power supply unit includes, but is not limited to, a power adapter and charging, electricity-storage, and overcharge-protection modules, and is electrically connected with the communication unit, the processing unit, and the input unit, respectively.
Through the above structure, the robot performs spatial perception by capturing the user's behavior, and the input/output units it shares with traditional intelligent hardware become more intelligent and more interactive, realizing true intelligence of the home service robot and making the user's life, work, study, entertainment, and home easier, safer, healthier, and smarter.
For example, when a user is perceived to enter a room, the environment sensor actively detects indoor environment data and actively reports the environmental conditions to the user, without the user having to query the robot. For another example, when a user is perceived to enter the room while the system is in the armed state, the robot actively reminds the user to confirm his or her identity (when the camera is in the closed state or the downward monitoring state, identity must be confirmed through other means); if identity is not confirmed and/or movement continues beyond a user-defined time, the system sends an alarm.
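As a sketch of how the arming-state flow above might be implemented, the following minimal Python class models the remind-then-alarm logic; the class name `ArmedMonitor`, the 30-second default timeout, and the returned strings are illustrative assumptions, not part of the patent.

```python
class ArmedMonitor:
    """Illustrative sketch of the arming-state logic described above.
    Names, default timeout, and return strings are hypothetical."""

    def __init__(self, confirm_timeout_s=30):
        self.confirm_timeout_s = confirm_timeout_s  # user-defined time
        self.entry_time = None
        self.identity_confirmed = False

    def on_entry(self, t):
        # Radar senses someone entering the room while the system is armed;
        # the robot actively reminds the person to confirm identity.
        self.entry_time = t
        self.identity_confirmed = False
        return "remind: confirm identity"

    def on_identity_confirmed(self):
        self.identity_confirmed = True

    def check(self, t, movement_detected):
        # Alarm if movement continues past the user-defined time
        # without identity confirmation.
        if self.entry_time is None or self.identity_confirmed:
            return None
        if movement_detected and (t - self.entry_time) > self.confirm_timeout_s:
            return "alarm"
        return None
```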
For example, when a child plays indoors on a weekend, the robot automatically plays the child's favorite foreign-language videos, music, poems, and the like, and, based on the playing content and the user's behavior in the space, actively initiates dialogue interaction with the child in the foreign language, creating a foreign-language learning environment as if actually living abroad and realizing immersive, environment-based foreign-language interactive learning.
For example, when the robot senses that the user has fallen or actively seeks help, the system automatically issues a pre-alarm; if the user does not cancel the alarm within the user-defined time, the system automatically sends the alarm to the relatives' mobile terminals, the property service center platform, the service operation platform, and the like. When the user is perceived to return home from the hospital, the robot actively reminds the user to place the doctor's diagnosis sheet in front of the robot for scanning and entry into the health file. When the user is perceived to come home at night, the robot's built-in lamps are automatically turned on for different scene lighting, and/or projection is started, and/or music is played, creating a warm and comfortable home environment. No matched sensing hardware or system is needed; the scene functions are realized independently by a single product, solving the problem that elderly home users are unwilling to carry out intelligent retrofitting or find it difficult to do so.
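The fall/help alarm staging described above can be sketched as a small state function; the helper name `fall_alarm_stage`, the 60-second default timeout, and the stage labels are hypothetical, chosen only to illustrate the pre-alarm-then-escalate flow.

```python
def fall_alarm_stage(fall_detected, cancelled, elapsed_s, timeout_s=60):
    # Hypothetical staging of the fall/help alarm flow described above:
    # a pre-alarm fires immediately; if not cancelled within the
    # user-defined time, the alarm is sent onward.
    if not fall_detected:
        return "idle"
    if cancelled:
        return "cancelled"
    if elapsed_s <= timeout_s:
        return "pre-alarm"
    return "alarm: relatives, property service center, service platform"
```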
The invention provides a crown-grabbing service robot, which is an innovative product based on the desk lamp. It inherits the desk lamp's advantages of a certain height, multiple application scenes, a relatively fixed position, portability, and installation-free use, and addresses multiple user pain points such as fall alarming, physical-sign perception, health files, active health supervision and management, supervised learning, native-language-environment foreign-language learning, community service, privacy protection, and data security. Through innovation in structure, system, method, and algorithm, and by utilizing the robot's active sensing, algorithmic, computing, communication, and data-storage capabilities, it solves ever more user pain points, so that users trust the functions and services provided by the robot and come to depend on it with high-frequency use. Based on such a home service robot with high frequency, strong stickiness, wide applicability, and good experience, the digital operation of the intelligent community has both a foundation and value; robot-based business-model innovation can promote the virtuous-cycle development of the industrial ecology and is of great significance to the development of intelligent families and intelligent communities.
Fig. 5 is a schematic diagram illustrating a cover plate of the crown-grabbing service robot according to some embodiments of the present application. Referring to Figs. 2 and 5, in some embodiments of the present application, the connection assembly 300 includes a cover plate 310, a rotating seat 320, a first rotating shaft 330, and a second rotating shaft 331. The first rotating shaft 330 is disposed on the rotating seat 320 and fixedly connected to the head 100 through its shaft center, so that the head 100 can rotate in a second direction. The cover plate 310 is arranged on the top of the body 200, the rotating seat 320 is arranged on the cover plate 310, and the second rotating shaft 331 is rotatably connected with the cover plate 310, so that the head 100 can rotate in a first direction. A projection module for video interaction with the user is arranged on the head 100. The projection module comprises a projector and a projection lens. The projector is disposed within the head 100. The projection lens is disposed on the head 100 and rotates with the rotation of the head 100.
Specifically, the cover plate 310 may have a square cross section, and corresponding grooves, such as a front groove, may be formed on its edges to engage with the body 200. The top wall of the cover plate 310 is provided with a rotary groove 311, and the second rotating shaft 331 at the bottom of the rotating seat 320 is rotatably connected with the rotary groove 311. The first rotating shaft 330 is arranged on the rotating seat 320; its middle portion is provided with an enlarged shaft section and a through hole penetrating the enlarged section and the rotating seat 320 for various cables to pass through. This design makes the connection assembly more closely resemble an athlete's neck and enlarges the threading through hole for convenient cable routing. The first direction is the circumferential direction of the second rotating shaft 331, which can drive the projection lens to rotate in the horizontal plane by an angle of at least 180 degrees. The second direction is the circumferential direction of the first rotating shaft 330, which can drive the projection lens to rotate up and down, that is, vertically; the downward rotation angle may be 60 degrees and the upward rotation angle may be 30 degrees. It should be understood that the specific rotation angles may be designed according to practical situations, and the present invention is not limited thereto.
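Purely for illustration, the rotation limits quoted above can be expressed as a clamping helper; the symmetric -90/+90 split of the "at least 180 degree" horizontal range is an assumption, since the text does not state where the pan range is centered.

```python
def clamp_head_angles(pan_deg, tilt_deg):
    # Clamp a requested head pose to the ranges stated in the text:
    # horizontal rotation of at least 180 degrees (modeled here as a
    # symmetric -90..+90 span, which is an assumption), and vertical
    # rotation of 60 degrees downward and 30 degrees upward.
    pan = max(-90.0, min(90.0, float(pan_deg)))
    tilt = max(-60.0, min(30.0, float(tilt_deg)))
    return pan, tilt
```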
The head 100 is internally provided with the projection module; by combining rotation in the first direction and the second direction of the connection assembly 300, the head 100 can project toward different angles and, in combination with the input/output modules of the robot, realize video and audio interaction, somatosensory interaction, or content interaction with the user. The connection of the cover plate 310 with a locking member makes the connection between the head 100 and the body 200 more stable, which suits users who prioritize equipment safety and user experience.
Fig. 6 is a schematic structural diagram of a magnetic attraction base and a magnetic attraction block serving as the connection assembly of the service robot according to some embodiments of the present invention. Referring to Fig. 6, in some embodiments of the invention, the connection assembly 300 includes a magnetic attraction base and a magnetic attraction block 340. The magnetic attraction base is provided on the head 100, and the magnetic attraction block 340 is provided on the top of the body 200 to be attracted to the magnetic attraction base. Specifically, the magnetic attraction base is arranged at the bottom of the head 100, a male contact-type communication interface is arranged in the middle of the magnetic attraction base, a corresponding female contact-type communication interface is arranged in the middle of the magnetic attraction block 340, and a bracket 341 is arranged below the magnetic attraction block 340. The bracket 341 is snap-connected with a fastening groove formed on the top of the body 200, and is then fixed on the body 200 with a screw fastener.
The magnetic connection between the head and the body solves the problem of quick yet firm attachment of the head, and suits scenes pursuing high flexibility so as to adapt to users' multi-scene application requirements. The fixed, rotatable connection between the head and the body solves the problem of head rotation for the built-in projection module, meeting users' experience requirements for multi-scene projection output. The two modes exist independently, and the corresponding modules can be purchased to convert between them and upgrade the experience without replacing the main equipment, so that users can meet ever-changing experience requirements at low cost.
Referring to Fig. 4, in some embodiments of the present invention, the camera rotation module 400 includes a mounting seat 410, a rotating part 420, a camera 430, and built-in magnets. The mounting seat 410 is provided on the body 200, and the rotating part 420 is rotatably coupled to the mounting seat 410. The camera 430 is disposed on the rotating part 420. The built-in magnets are provided on the rotating part 420 and rotate with it. A detection module determines the rotation angle of the rotating part 420 by detecting the rotation angle of the rotary magnets and, when sensing the user's behavior and demands in the space, determines from the state and angle of the rotating part 420 whether to remind or care for the user in a specific scene application. For example, when a child is perceived approaching the desk to start learning, the robot actively reminds the user, in a foreign language, to rotate the camera downward to the bottom (if the camera is already in the downward monitoring state, no reminder is given), so that the camera can identify book content or supervise the child's learning while the foreign language is practiced, increasing the user's interactive experience.
Specifically, the side of the body 200 close to the head 100 protrudes outward to form the mounting seat 410, which is formed with a rotating cavity 411 for mounting the rotating part 420. The rotating part 420 is generally cylindrical and is rotatably connected in the rotating cavity 411 through a rotating shaft. The camera 430 is disposed on the side of the rotating part 420, and a through hole communicating with the body 200 is disposed at the bottom of the inner side of the rotating cavity 411 for the communication cable to pass through.
The rotary magnets are arranged on the rotating part 420, one above the other. The detection module includes a magnetic sensor provided at a position where the states of the two rotary magnets can be detected, so as to judge the position of the rotating part 420. This design can meet the requirements of video recognition or monitoring of environments, people, or things, adapting to users' different scene experiences; meanwhile, the rotation module is designed to be rotated manually, which protects user privacy, reduces product cost, and increases the interactive operating experience between the user and the device. The camera 430 is provided with a status sensor and, combined with the robot's other built-in sensors, can sense the user's demands, usage habits, personal preferences, and the like. It should be understood that the camera rotation module 400 may be disposed at other positions to meet user requirements, and the present invention is not particularly limited in this respect.
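One possible way to read the two stacked rotary magnets into a position estimate is sketched below; the mapping of magnet states to positions is an illustrative assumption, since the text only states that a magnetic sensor judges the position of the rotating part 420 from the magnet states.

```python
def camera_position(upper_magnet_sensed, lower_magnet_sensed):
    # Infer the manually rotated camera part's position from the magnetic
    # sensor's readings of the two stacked rotary magnets. This mapping is
    # an illustrative assumption, not taken from the patent.
    if upper_magnet_sensed and not lower_magnet_sensed:
        return "forward"          # lens facing the user/scene
    if lower_magnet_sensed and not upper_magnet_sensed:
        return "down-monitoring"  # lens rotated down, privacy preserved
    return "intermediate"
```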
Fig. 7 is a schematic top view of a rotating arm lamp assembly of a crown-grabbing service robot in accordance with some embodiments of the invention. Fig. 8 is a schematic diagram illustrating the bottom structure of the rotating arm lamp assembly of the crown-grabbing service robot in some embodiments of the invention. Referring to Figs. 7 and 8, in some embodiments of the present invention, the rotating arm lamp assembly 500 includes an arm lever 510, an upper arc lamp 520, a lower slope lamp 530, a connecting magnet, and an adsorption part 540. The upper arc lamp 520 is provided on the top wall of the arm lever 510 for emergency, night, color-changing scene, and atmosphere lighting. The lower slope lamp 530 is provided on the bottom wall of the arm lever 510 for illumination with a blue-light-free, flicker-free health lamp. The connecting magnet is provided on the body 200. The adsorption part 540 is provided on the inner vertical surface of the arm lever 510 so as to be attracted to the connecting magnet after the arm lever 510 is rotated.
Specifically, the arm levers 510 and the body 200 form an upper body; the left and right arm levers 510 are rotatable to realize the unfolding and folding of the rotating arm lamp assembly 500. The middle part of each arm lever 510 may further be provided with an elbow-guard shape and the front end with a glove shape, so that the arm lever 510 more closely resembles an arm. The upper and lower double-sided illumination design of the rotating arm lamp increases the robot's output scenes and meets users' experience requirements in different scenes; the lower slope lamp design helps increase the illumination intensity in the user's working area, and the shading-edge design helps protect the user's eyes by avoiding direct view of the light source. The inner-side-wall adsorption design favors stable, durable fixing when the arm lever is unfolded, and can also reduce the difficulty and requirements of designing and producing the arm lever mounting seat, so that the rotating arm lever function can be realized at low cost. The simple and attractive design of the arm lever 510 increases the product's interactive experience, reduces the robot's footprint, and adapts to more scene applications; together with the head and leg assembly 600, it forms a strip-shaped trophy, which, combined with the product form of the skier's high-platform crown-grabbing scene, inspires the user: only by building up courage and striving can one win the trophy in one's own field.
The upper arc lamp 520 and the lower slope lamp 530 can be combined with the robot's built-in radar sensor, gesture sensor, distance sensor, and the like to realize intelligent lighting in which the robot senses indoor user behavior and actively switches the lamps on and off. For example, the lower slope lamp is automatically turned on when the user is perceived to sit at the desk; the upper arc lamp 520 is automatically turned on when the user is perceived to open the door and enter the room; all lamps are automatically turned off when the user is perceived to fall asleep; night lighting is automatically turned on when the user is perceived to get out of bed at night; lighting stays off when no user presence is perceived during holidays; the red atmosphere lighting of the upper arc lamp 520 is automatically turned on at night when user presence is perceived at home; and emergency lighting is automatically turned on when the user is at home and the power is cut off. The emergency lamp is powered by the robot's built-in energy-storage battery and adopts an energy-saving, low-power LED lamp with appropriate brightness to prolong the illumination time.
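The sensing-to-lighting examples above amount to a small rule table; a minimal sketch follows, in which the event names and lighting-action strings are paraphrases of the examples rather than terms from the patent.

```python
def select_lighting(event, at_home=True, mains_power=True):
    # Rule table paraphrasing the sensing examples above. Event names and
    # return strings are illustrative, not part of the patent text.
    if not mains_power:
        # Power cut: battery-backed, low-power emergency LED if someone is home.
        return "emergency light (built-in battery)" if at_home else "all off"
    rules = {
        "sit_at_desk": "lower slope health lamp on",
        "enter_room": "upper arc lamp 520 on",
        "fall_asleep": "all lamps off",
        "up_at_night": "night light on",
        "away_on_holiday": "all off",
    }
    return rules.get(event, "no change")
```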
The cambered surface design helps expand the irradiation range of the built-in lamp, the lower inclined surface design helps project the light source onto the desktop working area, and the bilaterally symmetric light source design avoids illumination shadows. A correspondingly shaped shade edge 560 can also be provided on the arm 510 to conceal the light source so that it does not shine directly into the user's eyes at typical viewing heights. Various decorative patterns may also be provided on the upper arc surface lamp 520 to make the arm 510 more hand-like and integrate it with the athlete's body shape.
The adsorption part 540 may be formed of a metal sheet provided on the inner side surface of the arm 510. The vertical metal sheet on the inner side surface cooperates with a connecting magnet built into the robot: when the left and right arms 510 rotate upward into place, they are attracted and held, improving the durability of the fixation of the rotary arm lamp assembly 500.
In some embodiments of the present invention, the body 200 is provided with a receiving notch, an arm mounting bin, and an arm mounting seat. The arm mounting seat is rotatably connected with the arm 510. A connecting magnet is provided in the arm mounting bin to attract and fix the adsorption part 540 after the arm 510 is rotated upward. The receiving notch receives the arm 510 when it is folded downward. When the folded arm 510 fills the receiving notch, the arm 510, the body 200, the head 100 and the leg assembly 600 together form a simple cylindrical whole, which is attractive in appearance, saves space, helps the robot sit stably on a desktop, and meets the robot's multi-scene application requirements. Together with the functional base 700, it can also form a cylindrical trophy shape, continuing the scene of the skier winning the championship and obtaining the trophy, and inspiring the user: only by striving hard can one surely win the trophy in one's own field.
In some embodiments of the present invention, the connection assembly 300, the camera rotation module 400, the rotary arm lamp assembly 500, the leg assembly 600, and the function base 700 are provided with a state detection module for detecting an operation state and a module state of the corresponding structure.
The state detection module can comprise a processing unit and corresponding sensors, and the crown-grabbing service robot can judge the states of the components and/or modules and/or the whole machine according to the sensing data of the sensors, so that the robot can actively control itself, produce multi-modal output, actively remind, alarm or provide care, realize scene interaction, protect the equipment, protect user privacy, and the like.
Referring to fig. 4, in some embodiments of the invention, the leg assembly 600 includes a leg 610. The leg 610 is squat-shaped, is arranged on the lower side of the body 200, and is connected with the skateboard 620 of the functional base 700. The leg 610 is provided with heat dissipation and ventilation holes, and sensors for detecting the air environment, body temperature, human presence, gestures and the like are arranged inside it. A heat insulation protection module is arranged on the side of the body close to the leg 610 and on the side of the functional base 700 close to the leg 610.
Specifically, the leg 610 includes a thigh portion and a shank portion connected at a bend, so that the leg 610 is squat-shaped to simulate a skiing posture, and a skateboard 620 is provided between the bottom of the leg assembly 600 and the functional base 700. The leg 610 is hollow, and an environmental sensor, a body temperature sensor, a distance sensor, a gesture sensor, etc. are arranged inside it to detect temperature and humidity, air quality, smoke, etc. in the environment. Corresponding heat dissipation holes can be formed in the side surfaces of the leg 610 and matched with ventilation holes on the left, right and rear sides, so that the temperature-sensitive sensors integrated in the leg 610 are not affected by the heat of the body 200 and the functional base, ensuring sensing accuracy. Through the leg assembly 600, room environment sensing, human body temperature sensing, and user gesture sensing can be achieved. Combined with the sensing capabilities of the built-in radar sensor and the short-distance human body sensor, functions such as actively reporting environmental conditions, actively reminding the user to measure body temperature, and sensing a hand approaching for body temperature measurement can conveniently be realized.
No heat dissipation or ventilation holes are provided on the front of any part of the robot. This mainly protects user privacy by preventing hidden sensors from being installed through front-facing holes to the detriment of the user's rights and interests, and also preserves the appearance of the product. The built-in sensor module comprises sensors for the air environment, body temperature, human presence, gestures and the like, all arranged in the leg 610. Meanwhile, the collected comprehensive data can be used to analyze the user's usage habits and profile, laying a foundation for the robot to serve the user better.
Fig. 9 is a schematic diagram illustrating a base structure of a crown-grabbing service robot according to some embodiments of the present invention. Referring to figs. 1, 4 and 9, in some embodiments of the present invention, the functional base 700 includes a function table 701, a display screen 710, and a base cover 720. The function table 701 is arranged obliquely and its top end is connected to the leg 610. The display screen 710 is provided on the top surface of the function table 701 for displaying dynamic pictures. The base cover 720 is disposed on the bottom surface of the function table 701, and a plurality of functional communication ports are formed in the base cover 720. In addition, the base cover 720 and the 7-shaped back plate cover of the body can be designed as one connected piece, or further divided into finer parts, which can reduce cost and allow the structure to be realized with simpler molds.
Specifically, the function table 701 is formed by cutting an inclined cylinder: the left side surface, the right side surface, the front side surface and the bottom surface of the irregular cylinder are cut into planes, and the top surface forms an inclined angle with the horizontal plane. The display screen 710 is arranged on the inclined top surface of the function table 701, which is convenient for the user to operate, watch and experience the functions, services and contents provided by the robot. This design also makes it convenient to arrange input and output signal interfaces on the left, right and front side surfaces of the function table 701, reduces the space occupied by the inclined cylinder, and the horizontal cut bottom surface helps the robot to be placed or installed stably. It further helps expand the internal space of the ski-high-platform functional base, allows other sensors to be installed above the display screen, and improves the overall aesthetics of the robot. The round side surface can be painted as the round shape of the earth and the cylindrical top surface painted snow-white, echoing the Olympic spirit of the Winter Games.
A microphone array 726 is provided on the front side of the function table 701, a physical sign sensor 740 on the upper part of the top surface, and a plurality of functional communication ports on the left and right sides, such as a 3.5 mm earphone socket, an SPDIF optical fiber audio interface 721, an HDMI interface 722, a USB interface 723, a volume adjusting switch 725, a privacy switch, a power switch, and the like. The multifunctional communication port and physical sign sensor design helps improve and expand the interactive experience of the robot, protect user privacy, meet multi-scene experience requirements, and improve user stickiness; it also allows the robot to analyze the user's habits and profile according to how often and in what scenes the interfaces and sensors are used, so that the robot can provide functions and services more proactively. Besides frequent daily use, the physical sign sensor on the top surface of the cylindrical functional part is designed, more importantly, to attract the user to input data from other household medical sign sensors to the robot in a multi-modal way, so that the robot collects more comprehensive data to establish a health file for the user, which is used for targeted health management. In addition, the built-in physical sign sensor can reduce the user's consumption cost, improve the stickiness and usage frequency of the robot, and lay a foundation for innovative business models of digital operation of smart homes and communities.
A strip-shaped skateboard 620 is provided on the top surface of the function table 701 below the leg assembly 600, extending downward to the edge of the top surface of the cylindrical functional part, with the display screen 710 in the middle. The display screen 710 can be a touch display screen. This design allows the physical strip-shaped skateboard 620 and a virtual skateboard displayed on the display screen 710 to form one complete skateboard 620; combined with different snow scenes and skiing actions shown on the screen, various scene effects of the virtual skateboard sliding in snow can be realized. Meanwhile, a projection module can also project the scene of a skier winning the championship, realizing a user-inspiring scene combining reality and virtuality, increasing the entertainment interactivity of the robot, and improving user stickiness and usage frequency.
The base cover 720 is shaped like an excavator bucket. Its back surface is provided with dual RJ45 network interfaces, a Type-C interface, an emergency button 727, a distance sensing position, a sound outlet, heat dissipation holes and fixing seats, and its open side is fixed to the bottom edge of the function table 701 by connecting screws, forming the ski-high-platform functional base 700. The excavator-bucket design helps enlarge the internal space of the functional base 700; combining the base cover 720 with the function table 701 facilitates the hardware structure, improves the attractiveness and stability of the functional base 700, and reduces the occupied desktop space. The distance sensing design of the base cover 720 helps the robot sense and determine the state of the desktop scene, such as the distance from the wall, to judge whether the user is moving the robot and/or whether the robot is at the reference detection point, so that the robot can better proactively provide scene functions and services for the user.
The fixing seats 728 are arranged as upper and lower pairs, with one pair on each of the left and right sides of the back face, so that the desk-lamp robot can conveniently be fixed or placed on a desktop or fixed on a wall surface by a bracket. The dual network interface design makes it convenient for the robot to connect to the room's network interface: the robot can convert the wired network into Wifi wireless and/or IoT wireless coverage for the room, while the output network interface facilitates connecting a user's external computer or smart device. Meanwhile, the indoor wireless coverage can be intelligently switched, and functions such as actively caring for the user (the privacy switch and/or active volume control and/or active output instructions) can be combined with the built-in radar sensor's ability to sense user behavior in the room. For example, if a child is sensed sleeping in bed, the robot automatically closes the wireless Wifi coverage to prevent the child from using a mobile phone in bed; and if the user is sensed entering the room at night, the robot automatically turns on the small night light on its back.
Fig. 10 is a schematic diagram illustrating a first implementation of a mounting bracket for a crown-grabbing service robot in some embodiments of the invention. Fig. 11 is a schematic diagram illustrating a second implementation of a mounting bracket for a crown-grabbing service robot in some embodiments of the invention. Fig. 12 is a schematic view illustrating a structure of a bracket of a crown-grabbing service robot according to some embodiments of the present invention. Referring to figs. 10, 11 and 12, in some embodiments of the present invention, the crown-grabbing service robot further includes a mounting bracket 730. The mounting bracket 730 is provided on the base cover 720 and includes, but is not limited to, a triangular placement reinforcement bracket 732, a clamp-type fixed mounting bracket 731, and a wall-mounted fixed mounting bracket. The object-supporting bracket 733 is attached in front of the function table 701 by a snap fit, forming a space together with the function table 701 for holding objects, and is shaped like the head of the skateboard 620. This design makes it convenient for the user to place articles such as a mobile phone or books, integrates with the skier's skateboard, and improves the aesthetics and interactive experience of the product. The mounting bracket 730 is used in cooperation with the left and right double fixing seats 728 on the back of the desk-lamp robot to adapt to the various requirements of different users and scenes. The clamp-type mounting bracket 731 comprises a double-fixing-seat connecting part and a clamp part that grips the table edge, and is used to fix the robot on a table to protect the equipment.
The placement-type mounting bracket 732 is triangular or multi-triangular; while connected with the robot's double fixing seats 728, it rests on the desktop and prevents the robot from tipping backward or sideways, thereby protecting the equipment.
The wall-mounted bracket comprises wall fixing screw holes, a vertical rod and a rod-head fixing buckle. The robot's fixing seats 728 are sleeved on the vertical rod, and the rod-head fixing buckle closes the vertical rod into a loop to prevent the robot from falling off or the rod from breaking. The bracket forms are not limited to the standard delivered components; a user may also customize or purchase other special mounting brackets 730 to meet individual and aesthetic requirements. This design helps improve the placement or mounting stability of the robot to adapt to the user's multi-scene experience requirements.
The object-supporting bracket 733 is roughly dune-shaped and is provided with two fixing plates, two skateboard heads and a vertical connection piece. The skateboard heads match the shape of the skateboard 620; the two fixing plates are arranged horizontally on a plane, the vertical connection piece is set on the top of the two fixing plates, and the skateboard heads are set on the vertical connection piece. The three can be integrally formed or welded together. This design makes it convenient for the user to charge a mobile phone or read on the inclined surface, while more vividly restoring the skier's championship scene and inspiring the user. It avoids the functional base 700 occupying desktop space, the snap-fit connection is easy to detach, and the cavity formed by the object-supporting bracket 733 and the functional base 700 lets the user store articles such as a mobile phone, books or a dictionary, improving desktop space utilization, user experience and product stickiness.
In some embodiments of the application, the crown-grabbing service robot can be provided with various decorative patterns or colors. The arrangement positions include, but are not limited to: head characteristic patterns and/or colors such as the goggles of the head 100; hand characteristic patterns and/or colors such as the elbow and glove shapes of the rotary arm lamp assembly 500; humanoid shapes; the front and round side of the function table 701; and overall surface decoration patterns or colors such as the lower part of the leg assembly 600, the skateboard 620 and the base cover 720.
Based on the concept foundation of the above-mentioned service robot, fig. 13 is a flowchart of a method for implementing command-free active intelligence according to the first embodiment of the present invention. As shown in fig. 13, the method includes the steps of:
S110, determining a reference detection point for placing the crown-grabbing service robot, and guiding a user to set the crown-grabbing service robot at the reference detection point.
The reference detection point is the most commonly used position for placing the crown-grabbing service robot. A crown-grabbing service robot with the advantages of a desk lamp is usually arranged against a wall, such as on a desktop, a bedside table or next to a sofa. In addition, fixed radar sensors can be installed and networked to extend the spatial sensing range, and their corresponding reference detection points are usually a wall surface of the room.
In practical application, after a user or installer starts the crown-grabbing service robot, and/or after other room radar sensors at reference detection points are installed and networked with normal communication, the robot senses the presence of the user and actively outputs voice and/or screen display and/or projection and/or light to guide the user to place it in the most commonly used scene (the reference detection point). The robot uses its built-in sensors to judge whether it is at the reference detection point; for example, if it senses that its back is more than 50 cm from the wall, it can actively output voice, display or light to confirm with the user the reason or the authenticity of the position. It then closes the privacy switch (when the sensing function is on) and rotates the camera module upward to its limit (when the camera module is in the default state or at any intermediate position of the rotation axis; if the camera module is fixed, this is unnecessary). The next step is entered after the user confirms in one of several operation modes, or after the robot senses that it has been moved into position and its sensing direction adjusted; if the user does not operate within the preset time, the robot proceeds with the current placement.
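The placement check described above can be sketched as a simple decision on the rear distance measurement. This is an illustrative sketch only: the 50 cm threshold comes from the text, while the function name and the returned action labels are assumptions.

```python
# Hypothetical sketch of the reference-detection-point placement check:
# a rear distance greater than 50 cm from the wall triggers an active
# confirmation with the user, as described in the text.
WALL_THRESHOLD_M = 0.5  # back more than 50 cm from the wall

def check_reference_point(rear_distance_m: float) -> str:
    """Decide the robot's next action from the rear distance sensor reading."""
    if rear_distance_m > WALL_THRESHOLD_M:
        # Actively confirm the reason or position authenticity with the user.
        return "confirm_position_with_user"
    return "accept_reference_point"
```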
The crown-grabbing service robot can be installed in scenes such as a living room, dining room, bedroom or study, and also in offices, apartments, meeting rooms, wards, exhibition halls, stores, schools, factories and other scenes, actively solving users' multiple demands based on its command-free active intelligence. Meanwhile, considering the cost, demand and deployment of full-space sensing, some spaces (such as a kitchen, toilet, passage, elevator hall, bedroom or living room) do not need a dedicated crown-grabbing service robot; the robot can network with reference-detection-point radar sensors fixedly installed in such spaces to extend its sensing range (equivalent to one robot managing the reference detection points of several rooms), so that a household can achieve intelligent sensing of its full space based on at least one crown-grabbing service robot, realizing true indoor full-space command-free active intelligence. Of course, indoor full-space command-free active intelligence could also be realized using only fixedly installed sensors networked together; compared with fixed installation, however, the crown-grabbing service robot has the advantages of flexible position, more scenes, closeness to the user, convenient communication and power supply, no installation and no commissioning, and is easier to deploy. Therefore, how the service robot converts its perception of the indoor space into the user's behavior and demands becomes the key problem for realizing command-free active intelligence.
Therefore, the application innovates on the method: based on the extended sensing range of the crown-grabbing service robot, algorithmic judgment is performed on the spatial perception of users by the robot and the reference-detection-point sensors, so as to identify users' behaviors and demands in the household space, collect information and/or output the robot's own scene responses and/or output corresponding functions and services through other networked smart devices or systems. In this way, the crown-grabbing service robot not only realizes command-free active intelligence independently and quickly, but also endows other traditional networked smart devices with command-free active intelligent functions, thoroughly solving the problems of traditional household smart systems, which are passive, manual, self-managed by the user, inconvenient for voice control, complex to integrate, hard to retrofit, hard to standardize, and hard to popularize, and making the user's life, work, study, entertainment and home easier, safer and smarter.
Fig. 14 is a sub-flowchart of a command-free active intelligent implementation method according to a first embodiment of the present invention.
S120, performing indoor space sensing based on the reference detection point to configure a spatial structure coordinate graph according to the sensing results.
The sensing result is obtained by the crown-grabbing service robot and/or fixedly installed radar sensors in extended-space networked communication with it detecting indoor objects and structures; it determines the current indoor space environment, from which the spatial structure coordinate graph, i.e. a parameterized description of the indoor space environment, is determined.
Specifically, configuring the spatial structure graph in this embodiment mainly includes two modes, manual and automatic, where the automatic mode further includes three cases: based on image data, based on radar data, and based on combined multi-sensor data. That is, step S120 includes steps S121-S124:
S121, determining a spatial structure layout diagram according to the user's adjustment operations and confirmation instruction, based on a preset structure layout diagram and/or an actual structure layout diagram imported by the user.
In this embodiment, the crown-grabbing service robot is further provided with a configuration interface of a system modeling program, through which it presents on screen and/or broadcasts by voice a preset structure layout and/or an actual structure layout imported by the user. The import path and file format for the actual structure layout can be configured, and conventional main structures or object parameters (such as the positions and specifications of the door, window, sofa, television and wall width of a living-room scene) can be set for the user to confirm or adjust through the configuration interface. The user completes the input according to the content or the guidance and marks one to three reference detection point positions. If the user marks, or the system perceives, that some point of the room space may lie farther than a preset distance from the reference detection points, the robot actively uses voice and/or screen display and/or projection and/or light to remind the user that it is best to install at least three positioning beacons or base stations at positions with obvious indoor structural features, and to mark their positions on the spatial structure layout. Finally, the reference detection point position of the robot and its detection direction, determined by the geomagnetic sensor, are marked on the layout.
S122, if the user's adjustment operations and confirmation instruction are not detected, acquiring an indoor space image, identifying indoor objects and spatial structure features based on the indoor space image, and generating a spatial structure layout diagram from the indoor objects and spatial structure features combined with preset feature data.
When the crown-grabbing service robot senses the presence of a user in the space, the screen display and/or voice and/or projection and/or light guide the user to input the configuration content. If the user cannot or will not operate beyond a preset self-defined time, or directly confirms that they cannot or will not operate, the robot automatically starts the system modeling program after sensing that the user has left; for example, it pauses starting the program while the user is present, outputs "starting the system modeling program" on screen or by active voice or projection, and asks the user to leave the room for a self-defined time (for example, 5 minutes) so that the system can identify and model the space (sofa, window, bedside table, chair, door, floor tiles, etc.). The robot takes pictures at a fixed scale: for example, if the horizontal and vertical scale of the picture is 1 cm by 1 cm, an object 1 m away with an actual size of 0.2 m occupies a known image size, and conversely, if a recognized object is known to be 0.2 m, the distance of the camera from the object can be deduced to be 1 m. Combined with preset feature data for conventional objects (for example, door: width 0.9 m, height 1.9 m; window: height from the ground 0.9-1.05 m; sofa: seat height of a typical single sofa 0.42 m; bed: height generally 0.5 m, width 1.2-1.8 m, length 1.9-2 m; bedside cabinet: width 0.5 m, depth 0.4 m, height 0.7 m; table: width 0.8 m, length 1.4 m, height 0.8 m), the distance, shape, size and height of the indoor space structure and conventional objects can be judged and output; the principle is the same as judging the distance between an object or person and the observer with a fixed-scale telescope. A spatial structure layout of the visible indoor area is then generated according to the detection direction of the geomagnetic sensor and the detection direction of the robot. When the robot senses the presence of the user, it actively outputs voice or screen display or projection to ask the user to rotate the robot 60 degrees left or right in place (the monitoring angle of the configured camera module is generally not less than 60 degrees, images shot at this angle are not easily deformed, and rotating 60 degrees to each side splices the pictures into a 180-degree view of the space, which is convenient for comprehensive sensing when the robot is placed against a wall). The robot judges that it has been rotated 60 degrees through the geomagnetic sensor and/or three-axis gyroscope parameters, starts the modeling program after sensing that the user has left the room, repeats the previous steps, generates the spatial structure layouts of the areas to the left and right of the reference visible area, and automatically merges them with the reference-area layout into a complete spatial structure layout diagram.
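The fixed-scale distance estimation described above (apparent image size is inversely proportional to distance, so a recognized object of known real size yields its distance) can be sketched as follows. This is an illustrative sketch: the calibration constant and the feature-data table are assumptions modeled on the example values in the text, not the actual system parameters.

```python
# Hypothetical sketch of fixed-scale distance estimation from preset
# feature data, as described above. Values are illustrative assumptions.

# Preset feature data: typical real-world heights of conventional objects (m).
FEATURE_HEIGHTS_M = {
    "door": 1.9,
    "bed": 0.5,
    "table": 0.8,
}

# Calibration: apparent image height (in scale units) of a 1 m object at 1 m.
SCALE_UNITS_PER_M_AT_1M = 100.0

def estimate_distance_m(label: str, apparent_height_units: float) -> float:
    """Estimate camera-to-object distance from the object's apparent size."""
    real_h = FEATURE_HEIGHTS_M[label]
    # apparent = real * SCALE / distance  =>  distance = real * SCALE / apparent
    return real_h * SCALE_UNITS_PER_M_AT_1M / apparent_height_units
```

A door whose apparent height halves is twice as far away, which is the telescope analogy given in the text.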
S123, or, if the user's adjustment operations and confirmation instruction are not detected, acquiring indoor radar detection data, identifying indoor objects and spatial structure features based on the indoor radar detection data, and generating a spatial structure layout diagram from the indoor objects, the system's default specification parameters and the spatial structure features combined with preset feature data.
When a user or an installer starts the service robot, the service robot, and/or a fixed installed radar sensor in networking communication with the service robot for extended spatial coverage, senses that the user is present indoors and actively outputs voice, a screen display or a projection to guide the user to place the service robot at the most commonly used scene (the reference detection point) with its back parallel to the wall surface, wherein the screen display and/or the voice guides the user to input configuration content. If the user does not operate, or is unwilling to operate, for longer than a self-defined time, or if the user directly confirms that no operation will be made, the system starts the automatic configuration program of the modeling system. Sensing the presence of the user, the system actively requests the user to rotate the service robot leftwards and rightwards so that the system can complete configuration and generate a complete space structure layout diagram. The generated space structure layout data is compared with general feature data of common household objects and space structures to judge whether a detected object specification is the specification of a real object. If not, the system automatically re-detects the object and compares the specification parameters again; if, after re-detecting more than a self-defined number of times, the deviation is still large, one abnormal recognition is recorded and the space structure layout diagram is generated with the objects or spaces of large recognition deviation labeled. The object specification is then verified by video recognition when the user is perceived to have left home and the camera is started and in a monitoring state, or by active voice, screen display or projection with user confirmation; for example, when the user is perceived to be present, the user is asked by active voice how wide the room is.
If the video recheck or the user confirms that the deviation between the actual object specification and the detected specification is large, the system automatically feeds the case back to the service platform for algorithm optimization and verification; for example, the system detects a door width of 0.5 meters while the actual door width is 1.2 meters.
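The comparison and re-detection logic described above can be sketched as follows. This is a minimal illustrative sketch only: the function names, the feature-data ranges and the deviation thresholds are assumptions for illustration and are not taken from the invention.

```python
# Illustrative sketch: compare a detected object specification against general
# feature data for common household objects, automatically re-detect up to a
# self-defined number of times, and flag one abnormal recognition if the
# reading still falls outside the plausible range.

# Plausible specification ranges for common household objects, in metres
# (illustrative values, not from the invention).
FEATURE_DATA = {
    "door_width": (0.7, 1.5),
    "desk_height": (0.6, 0.9),
}

MAX_RETRIES = 3  # self-defined re-detection count (assumption)

def check_specification(obj, detect):
    """detect(obj) re-measures the object; returns (value, abnormal_flag)."""
    lo, hi = FEATURE_DATA[obj]
    value = detect(obj)
    retries = 0
    while not (lo <= value <= hi) and retries < MAX_RETRIES:
        value = detect(obj)  # automatic re-detection
        retries += 1
    abnormal = not (lo <= value <= hi)
    # abnormal -> record one abnormal recognition and label the object
    # on the layout chart for later video or user verification
    return value, abnormal
```

For instance, a door detected at 0.5 m falls outside the assumed [0.7, 1.5] m range, so re-detection is attempted, and if the reading persists the object is labeled for video recheck or user confirmation.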
S124, a space structure coordinate graph is built by taking the reference detection point as the coordinate origin, based on the space structure layout graph and the perception direction of the service robot.
According to the space structure layout diagram, the direction detected by the geomagnetic sensor, the detection angle and detection range of the service robot and/or of other room radar sensors in normal networking communication, and the position coordinate of the reference detection point are logically associated to generate a space structure coordinate graph; that is, from the human body coordinate obtained as the user moves within the detection range of the service robot, the corresponding spatial position or coordinate can be found in the indoor space structure layout diagram. For example, if the reference detection point is (0, 0), located in the layout near the wall at the middle of the desk, and the user is at (x, y) within the detection range of the service robot, that position corresponds to the coordinate (x1, y1) at the midpoint of the door in the indoor space structure layout. When the service robot perceives the presence of the user, it actively inquires whether the room has other frequently used application scenes, and if the user confirms, the user is asked to place the service robot at those other application scenes to carry out system configuration. The service robot can also generate a space structure coordinate graph from any point in space by autonomously judging its spatial position in combination with the space structure coordinate graph, the position of the reference detection point, the detection angle and direction, and the direction detected by the geomagnetic sensor.
The space structure layout diagram is only an ordinary plan view, and the reference detection point is only one point on that plan, so the two-dimensional rectangular coordinates of the plan must be correlated with the polar coordinates detected by the service robot. In this way, a person within the polar-coordinate detection range of the service robot and of other room radar sensors in normal networking communication can be mapped to the corresponding two-dimensional rectangular coordinates of the indoor space structure plan, and the space structure coordinate graph can then be divided into sensing areas, since each sensing area is composed of the two-dimensional rectangular coordinates of one region. Virtual sensing areas can also be set according to the requirements of the application scene.
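The correlation between the robot's polar readings and the layout's rectangular coordinates can be sketched as follows. The conventions here (radians, heading measured from the layout's x-axis, the function name) are assumptions for illustration, not definitions from the invention.

```python
import math

# Minimal sketch: the service robot at the reference detection point reports a
# target in polar form (distance r, bearing theta relative to the robot's own
# heading); the geomagnetic sensor supplies the robot's heading in the frame
# of the space structure layout. The reading is mapped to the layout's
# two-dimensional rectangular coordinates.

def polar_to_layout(r, theta, heading, origin=(0.0, 0.0)):
    """Convert a polar radar reading to 2D layout coordinates.

    r       -- detected distance (metres)
    theta   -- bearing relative to the robot heading (radians)
    heading -- robot heading from the geomagnetic sensor (radians, layout frame)
    origin  -- reference detection point in the layout
    """
    a = heading + theta  # absolute bearing in the layout frame
    x = origin[0] + r * math.cos(a)
    y = origin[1] + r * math.sin(a)
    return x, y
```

With the reference detection point as origin (0, 0), a target 2 m straight ahead of a robot heading along the x-axis maps to (2, 0) on the layout.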
S130, dividing a sensing area based on the space structure coordinate graph, and configuring triggering conditions of scene events based on the sensing area.
After the space structure coordinate graph is determined, the space structure graph, the detection direction of the geomagnetic sensor, the detection direction, angle and range of the service robot and of other room radar sensors in normal networking communication, and the logical relation among the reference detection point positions are recorded, and the indoor space is divided into sensing areas based on the space structure graph, obtaining a plurality of coordinate areas such as a bed area, a window area, a desk area, a door area, a television area, a sofa area and a projection screen (wall surface) area. Different scene events are configured on the basis of the different coordinate areas, and at least one trigger condition is configured for each scene event; when the behavior, state and the like of the user indoors meet a trigger condition, the user is considered to be in that scene event. First, position (coordinate) information may be the sole factor in a trigger condition: a bed against a side wall can only be triggered from the outer bedside, while a dining table in the middle of the room can be triggered from its whole perimeter. Second, time factors may be combined to form a logical trigger condition of a scene event: for example, when a scene is triggered, the user positioning coordinate a self-defined time (such as 1 second) before the trigger is traced back; if that coordinate lies outside the scene area, the user is judged to have entered the coordinate area, otherwise the user is judged to have left it, and if the user coordinate is unchanged, continuous presence is judged, or a false alarm is judged and the trigger is discarded.
It will be appreciated that the trigger conditions referred to in this embodiment may also include a logical requirement for a series of consecutive user actions, also referred to as a logical condition. This design also effectively solves the poor user experience of timing control based on the basic sensing capability of a traditional sensor or radar sensor. For example, if the user has gone to the toilet and remained still for more than a self-defined time (such as 1 minute), the light is automatically turned off.
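The traceback-based enter/leave judgment described above can be sketched as follows. The rectangular area representation and the return labels are illustrative assumptions; the invention does not prescribe a data format.

```python
# Illustrative sketch: classify a trigger by comparing the current position
# with the position traced back a self-defined time (e.g. 1 second) before.

def in_area(pos, area):
    """area is an axis-aligned rectangle (x1, y1, x2, y2); pos is (x, y)."""
    x1, y1, x2, y2 = area
    x, y = pos
    return x1 <= x <= x2 and y1 <= y <= y2

def classify_trigger(area, now_pos, prev_pos):
    """prev_pos: coordinate a self-defined time before the trigger fired."""
    now_in, prev_in = in_area(now_pos, area), in_area(prev_pos, area)
    if now_in and not prev_in:
        return "enter"       # user judged to have entered the coordinate area
    if prev_in and not now_in:
        return "leave"       # user judged to have left the coordinate area
    if now_pos == prev_pos:
        return "discard"     # continuous presence or false alarm: give up
    return "stay"
```

A timed logical condition such as "still in the toilet area for over 1 minute, then turn the light off" would combine `in_area` checks with a duration counter over successive readings.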
And S140, sensing user information based on the reference detection point so as to determine the current scene based on the user information and the trigger condition.
The user information comprises information such as the actions and positions, at different moments, of the people and objects indoors that need attention. The current scene is mainly judged according to the positioning coordinates of people or objects moving indoors (the normal radar perception mode, as distinct from the static object recognition mode used during system configuration) together with the specific trigger conditions. For example, if the user falls down, an emergency scene is output; if the user stands beside the projection screen or television within the corresponding area and swipes a hand upwards or downwards, the system interprets it as a command to page the displayed content back or forward; if the service robot is arranged between the projection screen or television and the user with its detection direction facing the user, then when the user moves forwards, backwards, leftwards or rightwards, or stands and squats, within the virtual projection screen or television area, the system synchronously moves the displayed content accordingly, realizing human-body perception interaction with the displayed content. That is, sensing user information based on the reference detection point so as to determine the current scene based on the user information and the trigger condition includes sensing the user positioning coordinates, determining a user action record based on the positioning coordinates in combination with the corresponding times, and determining the current scene by matching the action record against the trigger conditions.
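The matching of an action record against area-specific trigger conditions can be sketched as a simple lookup. The area names, action names and command strings below are all hypothetical placeholders, not terms defined by the invention.

```python
# Illustrative sketch: map (sensing area, recognized action) pairs to display
# commands; anything unmatched is discarded.

GESTURE_COMMANDS = {
    ("screen_area", "swipe_up"): "page_back",
    ("screen_area", "swipe_down"): "page_forward",
    ("virtual_screen_area", "squat"): "move_view_down",
    ("virtual_screen_area", "stand"): "move_view_up",
}

def match_action(area, action):
    """Return the command for this area/action pair, or discard it."""
    return GESTURE_COMMANDS.get((area, action), "discard")
```

A fuller implementation would match a short sequence of positions and times (the action record) rather than a single recognized action.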
S150, generating an execution instruction according to preset execution logic based on the current scene, the sensing area and the user information, acquiring information through an input module based on the execution instruction, or outputting functions and services through an output module and/or sending the execution instruction to a connected networking device through a communication module for input and/or output.
This step mainly judges which services need to be provided to the user according to the current scene and the user information. The current scene comprises a pre-scene event, a scene event trigger list and priorities; the user information comprises information such as the number and characteristics of user coordinates, time, networked devices, device states, and the logical relations among multi-room scenes; the execution instructions comprise discard processing (for example, for detected actions of non-users such as animals), control instructions, display content, executable programs, interactive voice information, reminding voice information and the like. Since the service robot is provided with its own input and output modules, various scene services including voice, screen display, projection, light and the like can be output directly through the output module; the self-contained output module avoids the service robot being able to output scenes only after a matched system is installed, thereby simplifying the system and better serving the user.
For example, on a weekend morning the system senses that a child gets up and outputs a foreign-language greeting; senses that the child is playing in the room in the morning, triggering a bedside event, and outputs a foreign-language voice saying that it is not currently sleeping time, creating a scene of initiating a conversation with the child in the foreign language; senses that the child is playing in the room in the afternoon with no event triggered beyond a self-defined time, and outputs foreign-language music, poems, stories or videos that the child usually likes, letting the child play in an immersive foreign-language environment and cultivating the child's sense of the language imperceptibly, with wake-up voice or music repeated within a self-defined time; or senses that an elderly person is active in bed and outputs a voice reminder asking the elderly person to stretch a hand toward the service robot to measure body temperature, and if a high fever is detected, pushes the user's fever information to the service platform, the community health center, the mobile phone of a relative, or a government service center.
The embodiment provides a method for implementing command-free active intelligence, which comprises the steps of first determining a reference detection point for placing the service robot, then guiding a user to set the service robot at the reference detection point, carrying out indoor space perception based on the reference detection point to configure a space structure coordinate graph according to the perception result, dividing sensing areas based on the space structure coordinate graph, configuring trigger conditions of scene events based on the sensing areas, sensing user information based on the reference detection point to determine a current scene based on the user information and the trigger conditions, and finally generating an execution instruction according to preset execution logic based on the current scene and the user information, with the service robot acquiring information and/or outputting scene services and/or sending the execution instruction to other execution devices in communication connection.
Optionally, in some embodiments, fig. 15 is a flowchart illustrating a method for implementing command-free active intelligence according to the first embodiment of the present application. As shown in fig. 15, the method includes:
S210, determining a reference detection point for placing the service robot, and guiding a user to set the service robot at the reference detection point;
And S220, performing indoor space sensing based on the reference detection points to configure a space structure coordinate graph according to sensing results.
S230, dividing a sensing area based on the space structure coordinate graph, and configuring triggering conditions of scene events based on the sensing area.
S240, sensing user information based on the reference detection point so as to determine a current scene based on the user information and the trigger condition.
S250, judging whether the current scene is matched with the reference detection point or not.
And S260, if not matched, guiding the user to adjust the pose of the service robot, and detecting the pose adjustment operation of the user on the service robot.
And S270, adjusting the space structure coordinate graph and the sensing area according to the pose adjustment operation.
S280, generating an execution instruction according to preset execution logic based on the current scene, the sensing area and the user information, and acquiring information through the input module based on the execution instruction and/or outputting scene services through the output module and/or sending the execution instruction to other communicatively connected devices.
The difference between this embodiment and the foregoing embodiment lies in steps S250-S280, which address the fact that the sensing range of the sensor is limited in actual use, so that the pose of the service robot may need to be adjusted; that is, if the user rotates the service robot to suit actual use requirements, or if the detection angle of the service robot is below 180 degrees during use, the system synchronously adjusts the detection direction and range of the service robot on the space structure coordinate graph according to the direction through which the user has rotated it.
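The synchronous adjustment above can be sketched as a rotation of the robot's detection sector on the coordinate graph. The angle conventions and the 180-degree default field of view are assumptions for illustration.

```python
import math

# Illustrative sketch: when the user rotates the service robot by delta
# (radians), rotate its heading and recompute the detection sector on the
# space structure coordinate graph.

def adjust_detection(heading, delta, fov=math.radians(180)):
    """Return the new heading and the (start, end) angles of the sector.

    heading -- current detection direction (radians, layout frame)
    delta   -- rotation applied by the user (radians)
    fov     -- detection angle of the robot (assumed 180 degrees by default)
    """
    new_heading = (heading + delta) % (2 * math.pi)
    half = fov / 2
    return new_heading, (new_heading - half, new_heading + half)
```

Sensing areas that fall outside the recomputed sector would then be handed over to other networked room radar sensors, where available.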
Optionally, in some embodiments, after step S280, steps S290-S200 (not shown) are further included to handle the situation in which the user gives no feedback after information is collected through the input module and/or the scene service is output through the output module based on the execution instruction and/or the execution instruction is sent to other communicatively connected devices:
S290, judging whether scene feedback of the user based on the execution instruction is perceived;
And S200, if not, generating an abnormal instruction based on the current scene, and sending the abnormal instruction to an abnormality handling device.
For example, if the user is a chronic patient and is perceived to be present at the 12 o'clock mealtime, an active voice output reminds the user to take medicine; if the user gives no interactive response beyond a self-defined time, and/or the service robot cannot confirm information such as the name and quantity of the medicine, the service robot records one medicine-taking abnormality for the user.
Optionally, in some embodiments, to further optimize the service experience, a self-learning mechanism is further provided to autonomously record the habits of the user; specifically, after step S200, step S201 (not shown) is further included:
S201, recording the number of occurrences of the current scene and the number of times scene feedback is received, determining the stage habit or knowledge mastery level of the user according to the number of occurrences and the number of feedbacks, and generating a benign guiding scheme according to the stage habit or knowledge mastery level.
Specifically, if within a self-defined time the same scene is executed more than a preset self-defined number of times while the number of times the user gives no feedback on the scene exceeds a preset negative count, the user is judged to have formed a stage habit or knowledge mastery level. If the stage habit is benign, a benign guiding scheme is formulated according to a preset time threshold or according to the recurrence interval of the same scene, actively caring for and reminding the user to execute the scene or to give it up; if the stage habit is not benign, the benign guiding scheme actively reminds the user that the bad living habit needs correction, and whenever the user executes a benign habit scene, the system also actively encourages or affirms the user's behavior. For example, if the user stays up late 3 times within a week, the system automatically records a bad living habit for that stage. If the service robot senses a trigger event of the user and actively initiates foreign-language dialogue interaction more than a self-defined number of times (such as 5 times) with no feedback from the user, it judges that the user has not mastered the interactive foreign-language sentence, and the system automatically adjusts the output, for example outputting an explanatory sentence or a native-language query sentence instead of the interactive foreign-language sentence.
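The counting logic above can be sketched as follows. The class name and the default thresholds are illustrative assumptions; the invention specifies only that both counts exceed preset self-defined values.

```python
from collections import defaultdict

# Illustrative sketch: count scene occurrences and missing feedback within a
# self-defined window to judge a stage habit or knowledge mastery level.

class HabitTracker:
    def __init__(self, min_occurrences=3, min_no_feedback=3):
        self.occurrences = defaultdict(int)   # times each scene was executed
        self.no_feedback = defaultdict(int)   # times the user gave no feedback
        self.min_occurrences = min_occurrences
        self.min_no_feedback = min_no_feedback

    def record(self, scene, feedback):
        """Record one execution of a scene and whether the user responded."""
        self.occurrences[scene] += 1
        if not feedback:
            self.no_feedback[scene] += 1

    def stage_habit(self, scene):
        """True once the scene recurs enough with enough missing feedback."""
        return (self.occurrences[scene] >= self.min_occurrences
                and self.no_feedback[scene] >= self.min_no_feedback)
```

Once `stage_habit` returns true, the system would select a benign guiding scheme, e.g. a correction reminder for a bad living habit, or a simplified native-language sentence for an unmastered foreign-language interaction.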
Embodiment Two
Fig. 16 is a schematic structural diagram of a device for implementing command-free active intelligence according to a second embodiment of the present invention. As shown in fig. 16, the command-free active intelligence implementation apparatus 800 of the present embodiment includes:
A placement guidance module 810, configured to determine a reference detection point for placing a service robot, and guide a user to set the service robot at the reference detection point;
A space sensing module 820 for performing indoor space sensing based on the reference detection points to configure a space structure graph according to sensing results;
The scene configuration module 830 is configured to divide a sensing area based on the spatial structure graph, and configure a trigger condition of a scene event based on the sensing area;
a user sensing module 840 for sensing user information based on the reference detection points to determine a current scene based on the user information and the trigger condition;
And the execution module 850 is configured to generate an execution instruction according to a preset execution logic based on the current scene, the sensing area and the user information, and acquire information based on the execution instruction through the input module and/or output scene services through the output module and/or send the execution instruction to other devices connected in communication.
Optionally, in some embodiments, guiding the user to place the service robot at the reference detection point includes guiding the user, by voice, screen display, projection or light, to place the service robot at the reference detection point so that the back of the service robot is parallel to the wall surface, and perceiving the distance from the wall surface.
Optionally, in some embodiments, performing indoor space sensing based on the reference detection point to configure the space structure layout according to the sensing result comprises: determining the space structure layout according to the adjustment operation and the confirmation instruction of the user based on a preset structure layout and/or an actual structure layout imported by the user; if the adjustment operation and the confirmation instruction of the user are not detected, collecting indoor space images and generating the space structure layout from the indoor objects and space structure features identified in the images in combination with preset feature data, or collecting indoor radar detection data, identifying the indoor objects and space structure features based on the indoor radar detection data, and generating the space structure layout according to the indoor objects and space structure features in combination with the preset feature data; and establishing the space structure coordinates based on the space structure layout and the sensing direction of the service robot, taking the reference detection point as the origin of coordinates.
Optionally, in some embodiments, the method further comprises recording the number of occurrences of the current scene and the number of times scene feedback is received, determining the user's stage habit or knowledge mastery level according to the number of occurrences and the number of feedbacks, and generating a benign guiding scheme according to the stage habit or knowledge mastery level.
Optionally, in some embodiments, the method further comprises judging whether the current scene matches the reference detection point, and if not, guiding the user to adjust the pose of the service robot, detecting the pose adjustment operation of the user on the service robot, and adjusting the space structure coordinate graph and the sensing area according to the pose adjustment operation.
Optionally, in some embodiments, sensing user information based on the reference detection point to determine the current scene based on the user information and the trigger condition includes sensing the user positioning coordinates, determining the user's action record based on the positioning coordinates in combination with the corresponding times, and determining the current scene by matching the action record against the trigger conditions.
Optionally, in some embodiments, after an execution instruction is generated according to preset execution logic based on the current scene, the sensing area and the user information, and information is acquired through the input module and/or scene services are output through the output module based on the execution instruction and/or the execution instruction is sent to other communicatively connected devices, the method further comprises judging whether scene feedback of the user based on the execution instruction is sensed, and if not, generating an abnormal instruction based on the current scene and sending the abnormal instruction to an abnormality handling device.
The device for realizing the command-free active intelligence provided by the embodiment of the invention can execute the method for realizing the command-free active intelligence provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.
It is to be understood that the above examples of the present invention are provided for clarity of illustration only and are not limiting of the embodiments of the present invention. Various obvious changes, rearrangements and substitutions can be made by those skilled in the art without departing from the scope of the invention. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.