CN217530864U - Crown-capturing service robot - Google Patents

Crown-capturing service robot

Info

Publication number
CN217530864U
CN217530864U (application CN202221373395.6U)
Authority
CN
China
Prior art keywords
user
robot
service robot
head
arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202221373395.6U
Other languages
Chinese (zh)
Inventor
李修球
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huiankang Technology Co ltd
Original Assignee
Shenzhen Huiankang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huiankang Technology Co ltd filed Critical Shenzhen Huiankang Technology Co ltd
Priority to CN202221373395.6U priority Critical patent/CN217530864U/en
Application granted granted Critical
Publication of CN217530864U publication Critical patent/CN217530864U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The utility model belongs to the technical field of intelligent services and specifically discloses a crown-capturing service robot comprising: a head; a body; a connecting assembly through which the head is detachably connected to the body; a camera rotation module rotatably connected to the body; a rotating arm lamp assembly, with built-in sensors that sense the environment or the user's posture or physical signs, and a radar sensor built into the body; a leg assembly; and a functional base on which a slide plate is arranged. The robot integrates multiple sensors and an intelligent system and can automatically identify the environment and the user's state, so that it actively provides the corresponding functions and services. A single product thus meets the user's many needs and provides command-free active intelligent services, solving a series of problems (traditional hardware struggles to meet multiple needs; the passive intelligence of traditional intelligent systems falls short; systems are hard to implement, hard to retrofit into old homes, hard to standardize and hard to deploy) and making the user's life, work, study, entertainment and home easier, safer and smarter.

Description

Crown-capturing service robot
Technical Field
The utility model relates to the technical field of intelligent services, and in particular to a crown-capturing service robot.
Background
Data from the seventh national census show that the population aged 60 and above is 264 million nationwide, including 190 million aged 65 and above; the population aged 60 and above is predicted to exceed 300 million by 2025 and to peak at 487 million in 2053. Chinese elderly care is said to follow a "9073" pattern: 90% home-based care, 7% community-supported home care and 3% institutional care. Home health and elderly care have clearly become a rigid demand for the elderly and for society as a whole. At present, the relatively independent medical and nursing service systems can hardly meet the multi-level, diversified health and care needs of the elderly, so the integration of medical and nursing care is urgently needed, together with a sound care service system combining home-based, community-based, institutional, medical and nursing care.
Smart home health/elderly care uses new-generation information technologies such as the Internet of Things, cloud computing, big data and intelligent hardware to effectively connect individuals, families, communities, institutions and health-care resources and configure them optimally, upgrading health and elderly-care services intelligently and improving their quality and efficiency. The needs of the home health/care user are all-round, including: home security, physical-sign awareness, health records and supervision, fall and emergency alarms, intelligent companionship, medical care, daily care, family communication, entertainment and leisure, home intelligence, communication coverage, privacy protection, child care, child learning, housekeeping services, community services, and so on. The market already offers many smart products related to home health/care, such as health robots, companion robots, wearable devices, physical-sign sensors, fall sensors, emergency buttons, educational robots, learning desk lamps, environmental sensors and whole-house intelligence. However, each of these products has a single function; a single product can hardly solve a user's many needs, so large-scale system integration is required. The elderly have declining memory and limited education, and the more devices a system integrates, the harder it is to deploy and operate. Moreover, current systems are user-managed via a mobile phone and are rarely connected with communities, institutions and elderly-care resources, so today's smart home health/elderly care based on traditional equipment is largely a formality.
Home health and elderly care cannot be separated from the community, yet the security, health, care, O2O, lifestyle, entertainment and other services of a smart community are generally offered free of charge, because an operator that invests in a smart community and then charges users directly finds this impractical. How to promote a virtuous ecological cycle in the smart-community industry is therefore a major problem for the industry and a difficult problem for home health and elderly care.
The building intercom system, the largest intelligent system in a smart community, is the only wired intelligent link between the property management and the household, yet it suffers from low user demand, system complexity, high failure rates, high operating costs, difficult installation, maintenance and retrofitting, low usage frequency, weak stickiness, poor experience and similar problems. Although smart communities have developed in China for decades, integrated "community + service + elderly care + medical care + insurance" offerings have been pursued for years and many smart-community SaaS platforms have been built, the new revenue generated beyond traditional sources (parking, property and advertising fees) is very small. The reason is simple: the digital operation of a smart community is the operation of a public area, its association with the user's household is very weak, users will not pay for any equipment, system or service in the public area, operation has no opportunity for new revenue, and the investment of developers, property managers and operators cannot be returned.
Therefore, to achieve a virtuous ecological cycle in the smart-community industry, a core household-end carrier of digital operation is needed: one that solves genuinely needed user requirements and earns user trust and dependence. By serving household users well while fully mining user value, guiding users toward further consumption within the ecosystem, and providing comprehensive service guarantees for that consumption, users will actively enjoy services and consume again, the whole community ecosystem can develop in a virtuous cycle, and the integrated "community + service + care + medical care + insurance" community offering can be realized.
Active intelligence can help a single product realize more functions and solve more user needs, with high frequency and strong stickiness. Although many manufacturers now realize that active intelligence gives users a better experience, there is no clear solution for meeting the user's many needs on the basis of active intelligence or for how far it can be realized. Active intelligence can be realized in two ways, system integration or a single product, and existing smart devices can only achieve it through system integration, most commonly by: (1) judging the user's needs from the indoor position of a carried smartphone; but current indoor phone positioning is inaccurate, and even if future positioning becomes accurate, a supporting system is still necessary because the phone's scene output is limited, and deploying such supporting systems is an industry-wide problem. Positioning must also be combined with the household's spatial layout, the phone faces continuous computing-power consumption and privacy issues, and a phone can hardly be left at home, so this approach is hard to deploy.
(2) Recognizing user behavior with a mobile robot (video recognition); but the home is a special environment, relying entirely on video recognition of user behavior seriously affects privacy, and the existing algorithms of mobile robots mainly solve the robot's own survival (navigation) problems while addressing few real user needs. Such robots face a series of problems: non-rigid demand, high cost, crowded rooms, battery endurance, height and floor-space occupation, and the need for an installed supporting system, so they are hard to deploy. (3) Recognizing the user through wearable devices; like phone-based positioning, this has obvious limitations: the device can hardly be kept on at home, and, most importantly, wearables mainly collect physical-sign data, cannot judge the user's behavior and needs, cannot establish a logical relation with the home's space, and make no intelligent decisions, so they must rely on a supporting intelligent system, and this approach also cannot be deployed. (4) Brain-wave intelligence that may appear in the future, where brain waves directly tell the intelligent system the user's needs; at present this high intelligence is only an ideal, and it too has limitations, such as the user being unconscious while sleeping, several people sharing one space, and the need for an intelligent system to execute the brain's intent. It cannot replace fixed spatial perception in the home, and its supporting intelligent system is itself hard to deploy, so this approach is hard to land.
(5) Realizing active intelligence through whole-house intelligence, with many sensors and smart devices installed in the home; but existing whole-house intelligence is mainly about intelligent control, solves too few of the user's pain-point needs, and is a superposed integration of many devices. Its deployment is the hardest problem in the industry and one of the most important constraints on the industry's development, facing: (1) new versus old houses; (2) wiring with slotting, drilling and conduits; (3) whole-house versus partial coverage; (4) integration and communication modes; (5) power supply; (6) device installation, position, aesthetics and safety; (7) functions and services; (8) data security and privacy protection; and so on. It is therefore difficult to actively provide the corresponding functions and services without the user's active operation.
Active intelligence realized through the integration approaches above lacks association with the indoor space and a unified perception algorithm with its computing and storage equipment, so customized deployment is difficult, costly and complex; the result is basically simple linkage-type active intelligence that solves few user needs and gives a poor experience. A single product that solves the user's many needs on the basis of active intelligence is therefore necessary. Such product innovation must weigh many factors: product type, form, structure, connotation, aesthetics, mounting height, communication, cost, attributes, functional requirements, independence, system implementation, privacy, safety, health, scenes, experience, elderly-friendliness, practicality, deployment, after-sales, algorithms and computing power, supply chain, operation, business model, industry ecology and more. It cannot be achieved by simple stacked integration; it must be a disruptive, all-round innovation, not a minor incremental one.
SUMMARY OF THE UTILITY MODEL
An object of the utility model is to provide a crown-capturing service robot whose structural innovation solves the user's many needs in one product, while an integrated command-free active intelligent system actively provides the relevant functions and services, actively addressing more of the user's pain-point needs and giving the robot high frequency, strong stickiness, good experience, installation-free and supporting-system-free deployment, standardization and reduced cost.
To achieve this purpose, the utility model adopts the following technical solution:
A crown-capturing service robot in the form of a humanoid athlete in a championship-winning sports scene, comprising: a head with a built-in lighting lamp and/or micro-projection module; a body with a built-in radar sensor that senses the space, the human body and behavior; a connecting assembly through which the head is detachably connected to the body; a camera rotation module rotatably connected to the body to perform video recognition or monitoring of the environment, people or objects; a rotating arm lamp assembly rotatably connected to the body, where sensors built into the leg assembly sense the environment or the user's posture or physical signs, the radar sensor built into the body senses user behavior, and the rotating arm lamp assembly provides lighting according to the environment or the user's posture, behavior or signs; a leg assembly arranged under the body; and a functional base arranged under the leg assembly, on which a slide plate is arranged corresponding to the leg assembly.
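The sensing-to-service chain in this claim (the radar senses behavior, the built-in sensors sense the environment and physical signs, and the robot then acts without any user command) can be sketched in Python. The reading fields, thresholds and action names below are illustrative assumptions for the sketch, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    presence: bool     # radar: is a person in the room?
    behavior: str      # radar-classified behavior, e.g. "reading" or "fallen"
    lux: float         # ambient light level from the environment sensor
    heart_rate: int    # physical-sign sensor reading (beats per minute)

def select_action(r: SensorReadings) -> str:
    """Map fused sensor readings to a proactive service, no command needed."""
    if not r.presence:
        return "standby"
    if r.behavior == "fallen":
        return "emergency_alarm"        # fall detected -> alert caregivers
    if r.heart_rate > 120:
        return "health_alert"           # abnormal sign -> health supervision
    if r.behavior == "reading" and r.lux < 150:
        return "healthy_reading_light"  # downward, blue-light-free lighting
    return "idle"
```

The priority order (fall before health alert before comfort lighting) is a design choice of the sketch: safety-critical states preempt convenience services.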
Optionally, the connecting assembly comprises: a magnetic attraction base arranged on the magnetically attached head; and a magnetic iron block arranged at the top of the body and attracted to the magnetic base. And/or the connecting assembly further comprises: a cover plate arranged at the top of the body; a rotating base arranged on the cover plate and connected with the head so that the head can rotate in a first direction; and a rotating shaft arranged on the rotating base and connected with the head so that the head can rotate in a second direction. The head is provided with a projection module for video interaction with the user.
Optionally, the camera rotation module comprises: a mounting seat arranged on the body; a rotating part rotatably connected to the mounting seat; and a camera arranged on the rotating part.
Optionally, the rotating arm lamp assembly comprises: an arm lever rotatably connected with the body; an upper arc-surface lamp arranged on the arm lever for emergency, night, color-changing scene and atmosphere lighting; a lower inclined lamp arranged below the arm lever for healthy lighting free of blue light and flicker; a shading edge arranged in front of the arm lever for blocking glare; and an adsorption part arranged on the inner vertical surface of the arm lever, which cooperates with a magnet built into the body to fix the arm lever.
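The two lighting directions of the rotating arm lamp assembly (upper arc-surface lamp for emergency, night, scene and atmosphere lighting; lower inclined lamp for healthy downward light) suggest a simple mode-to-lamp mapping. A minimal sketch, with hypothetical mode names not taken from the patent:

```python
def lamp_outputs(mode: str) -> dict:
    """Return which lamp of the rotating-arm assembly is driven for a mode."""
    upward_modes = {"emergency", "night", "scene", "atmosphere"}
    return {
        "upper_arc_lamp": mode in upward_modes,    # upward ambient output
        "lower_inclined_lamp": mode == "reading",  # downward healthy light
    }
```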
Optionally, the body is provided with an accommodating notch, an arm-lever mounting bin and an arm-lever mounting seat; the mounting seat is rotatably connected with the arm lever, and a connecting magnet in the mounting bin attracts and fixes the adsorption part after the arm lever is rotated upward; the accommodating notch receives the arm lever when it is folded downward.
Optionally, the connecting assembly, the camera rotation module, the rotating arm lamp assembly, the leg assembly and the functional base are each provided with a state-detection module for detecting the running state and module state of the corresponding structure.
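The per-assembly state-detection modules can be imagined as each reporting a health flag; a minimal sketch of aggregating those flags into a fault report (the assembly names and the boolean interface are assumptions of the sketch):

```python
def faulty_assemblies(states: dict) -> list:
    """Given each assembly's state-detection result (True = running
    normally), return the names of assemblies reporting a fault."""
    return [name for name, ok in states.items() if not ok]
```

Such a report could drive self-diagnosis or after-sales notifications, though the patent does not specify how the detected states are consumed.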
Optionally, the leg assembly is in a squatting posture and comprises: support legs arranged below the body and connected with the slide plate of the functional base, provided with heat-dissipation and ventilation holes and with built-in sensors to detect the air environment, body temperature, human body and/or posture; the side of the body close to the legs and the side of the functional base close to the legs are each provided with a heat-insulation protection module.
Optionally, the functional base comprises: an obliquely arranged functional table provided with a plurality of functional interfaces; a base cover arranged on the bottom surface of the functional table and provided with a plurality of functional communication ports; and a display screen arranged on the top surface of the functional table for displaying dynamic pictures.
Optionally, the crown-capturing service robot further comprises: a mounting bracket arranged on the base cover, including but not limited to a triangular reinforcing stand, a clamp-type fixed bracket and a wall-mounted fixed bracket; and an object-supporting bracket attached in front of the functional table by a buckle, forming together with the functional table a space for holding objects, the bracket being shaped like the nose of the slide plate's skateboard.
Optionally, the crown-capturing service robot further comprises: an AI core processor, a storage and expansion-storage unit, an input unit, an output unit, a communication unit and a power-supply unit, with the storage and expansion-storage unit, the input unit, the output unit and the communication unit all communicatively connected to the AI core processor; the input unit includes but is not limited to the radar sensor and the built-in sensors.
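The unit architecture in this claim (input, output, storage and communication units all communicatively connected to an AI core processor) can be sketched as a hub that polls its input units and forwards the fused readings to its output units. The attach/read/act interfaces below are illustrative assumptions, not from the patent.

```python
class AICoreProcessor:
    """Minimal hub for the units named in the claim."""
    def __init__(self):
        self.input_units = {}    # name -> zero-arg read function
        self.output_units = {}   # name -> function taking fused readings

    def attach_input(self, name, read_fn):
        self.input_units[name] = read_fn

    def attach_output(self, name, write_fn):
        self.output_units[name] = write_fn

    def cycle(self):
        """One perception-decision-output cycle: read every input unit,
        then forward the fused readings to every output unit."""
        readings = {n: fn() for n, fn in self.input_units.items()}
        for fn in self.output_units.values():
            fn(readings)
        return readings
```

A radar sensor would register as an input unit and a lamp driver as an output unit; the hub itself stays agnostic to the concrete hardware.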
The utility model has the following beneficial effects:
The robot takes the form of a humanoid athlete winning the championship in an aerial event, giving the product an inspiring connotation. Through structural innovation, the crown-capturing service robot fuses and integrates multiple intelligent modules and sensors; it can perceive the environment, physical signs, states, and the user's behavior, habits and needs, and as a single product with multi-scene output it solves the family's pain-point needs: inspiration, perception (signs + environment + behavior + habit + state), home security, fall and emergency alarm, visual intercom, video monitoring and OCR, health records, machine and remote consultation, health supervision, home control, intelligent projection and screen casting, smart speaker, intelligent companionship, entertainment and somatosensory interaction, intelligent lighting, eye protection, supervised learning, mother-tongue-style interactive learning, correction of bad study habits, communication coverage, privacy protection, data security and so on. It is installation-free, supporting-system-free and easy to standardize, and it provides command-free active intelligent services, solving the series of problems that traditional hardware can hardly solve: the passive intelligence of traditional intelligent systems, difficult system implementation, difficult retrofitting of old homes, difficult standardization and difficult deployment. It has none of the defects of the building intercom system, the largest system in a smart community; it offers a good active intelligent user experience with high frequency and strong stickiness, and can fully serve as the core household-end carrier of the community's digital operation. At the same time, business-model innovation based on the robot lets users keep consuming while experiencing the services of the smart community's digital operation platform, so operators have stable income and can invest more and improve their services. The community, lifestyle, O2O, housekeeping, security, health, entertainment, property and other services that circle back to the user thus mature and improve instead of being free services funded by operators, the smart-community industry ecosystem can develop in a virtuous cycle, and the life, work, study, entertainment and home of household users become easier, safer and smarter.
Drawings
Fig. 1 is a schematic structural diagram of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 2 is a schematic structural diagram of the connecting assembly of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 3 is an exploded structural diagram of the head of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 4 is an exploded structural diagram of the body of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 5 is a schematic structural diagram of the cover plate of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 6 is a schematic structural diagram of the connecting assembly of a crown-capturing service robot implemented as a magnetic attraction base and a magnetic iron block, according to some embodiments of the present utility model.
Fig. 7 is a schematic top view of the rotating arm lamp assembly of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 8 is a schematic bottom view of the rotating arm lamp assembly of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 9 is a schematic view of the base structure of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 10 is a schematic structural diagram of a first implementation of the mounting bracket of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 11 is a schematic structural diagram of a second implementation of the mounting bracket of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 12 is a schematic structural view of the object-supporting bracket of a crown-capturing service robot according to some embodiments of the present utility model.
Fig. 13 is a flowchart of a command-free active intelligence implementation method according to Embodiment 1 of the present utility model.
Fig. 14 is a sub-flowchart of the command-free active intelligence implementation method according to Embodiment 1 of the present utility model.
Fig. 15 is a flowchart of a command-free active intelligence implementation method according to an embodiment of the present utility model.
Fig. 16 is a schematic structural diagram of a command-free active intelligence implementation apparatus according to Embodiment 2 of the present utility model.
In the figures: 100. head; 110. goggles; 120. helmet light; 130. sealing plate; 200. body; 210. mounting plate; 300. connecting assembly; 310. cover plate; 311. rotating groove; 320. rotating base; 330. rotating shaft; 331. second rotating shaft; 340. magnetic iron block; 341. bracket; 400. camera rotation module; 410. mounting seat; 411. rotating chamber; 420. rotating part; 430. camera; 500. rotating arm lamp assembly; 510. arm lever; 520. arc-surface lamp; 530. arm-lever elbow guard; 540. adsorption part; 560. shading edge; 600. leg assembly; 610. support leg; 620. slide plate; 700. functional base; 701. functional table; 710. display screen; 720. base cover; 721. SPDIF optical-fiber audio interface; 722. HDMI output interface; 723. USB interface; 725. volume switch; 726. microphone-array sensor; 727. emergency button; 728. fixed seat; 730. mounting bracket; 731. clamp-type mounting bracket; 732. placement-type mounting bracket; 733. object-supporting bracket; 740. physical-sign sensor; 800. implementation apparatus; 810. placement guidance module; 820. spatial perception module; 830. scene configuration module; 840. user perception module; 850. execution module.
Detailed Description
The present utility model will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the utility model and do not limit it. It should further be noted that, for convenience of description, the drawings show only the structures related to the utility model rather than all structures.
In the description of the utility model, unless explicitly specified and limited otherwise, the terms "mounted", "connected" and "fixed" should be understood broadly: for example, a connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or internal to two elements. The specific meanings of these terms in the utility model can be understood by those skilled in the art according to the specific situation.
In the utility model, unless explicitly specified and limited otherwise, a first feature being "on" or "under" a second feature may include the two features being in direct contact, or being in contact through another feature between them rather than directly. Moreover, the first feature being "on", "above" or "over" the second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature. The first feature being "under", "below" or "beneath" the second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.
In the description of the embodiments, orientation or position terms such as "upper", "lower" and "right" are based on the orientations shown in the drawings, used only for convenience of description and simplicity of operation; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and thus should not be construed as limiting the utility model. Furthermore, the terms "first" and "second" are used only for distinguishing in description and have no special meaning.
The crown-capturing service robot takes the form of a humanoid athlete in an aerial-skiing championship scene, embodying the spirit of brave leaps and constantly inspiring the user's striving; the athlete's appearance can be spray-painted in the colors of athletes from different countries, bringing the product closer to the user's emotions, reducing the need for individualized product forms, and benefiting product standardization and cost reduction.
Fig. 1 is a schematic structural diagram of the crown-capturing service robot in some embodiments, and Fig. 2 is a schematic structural diagram of its connecting assembly. Referring to Figs. 1 and 2, the crown-capturing service robot comprises: a head 100, a body 200, a connecting assembly 300, a camera rotation module 400, a rotating arm lamp assembly 500, a leg assembly 600 and a functional base 700. The head 100 houses a lighting lamp and/or a micro-projection module; the body 200 houses a radar sensor that senses the space, the human body and behavior. The head 100 is detachably connected to the body 200 through the connecting assembly 300. The camera rotation module 400 is rotatably connected to the body 200 to perform video recognition or monitoring of the environment, people or objects. The rotating arm lamp assembly 500 is rotatably connected to the body 200; the sensors built into the leg assembly 600 sense the environment or the user's posture or physical signs, the radar sensor built into the body 200 senses user behavior, and the rotating arm lamp assembly 500 provides lighting according to the environment or the user's posture, behavior or signs, divided into downward-output healthy lighting and upward-output emergency, night, scene or atmosphere lighting. The leg assembly 600 is arranged below the body 200, and the functional base 700 below the leg assembly 600; a slide plate 620 arranged on the functional base 700 corresponds to the leg assembly 600.
Fig. 3 is an exploded structural diagram of the head of the crown-capturing service robot in some embodiments. Referring to Fig. 3, the head 100 as a whole may be cubic, oval or irregular in shape; goggles 110 may be provided on its outer surface to simulate a skier's face, and a helmet light 120 on its top to evoke skiing. A sealing plate 130 may be removably attached to the back of the head 100 for easy maintenance. Heat-dissipation holes may be provided on the left and right sides of the head 100 and on the sealing plate 130 to ensure cooling efficiency; several may be provided, strip-shaped, circular or otherwise, and decorative patterns may also be provided on the sides of the head 100. The specific arrangement of the holes and the pattern content can be designed according to actual user needs, and the utility model does not limit them. The detachable connection of the head 100 and the body 200 not only meets the needs of different users, grades and scenes but also improves the user's interactive experience. The intelligent LED lamp designed on the front of the head, besides normal scene lighting, provides auxiliary lighting for the camera's video recognition and monitoring, solving the spatial lighting problem in multi-scene interactions such as live streaming, video interaction and intrusion detection. The head 100 adopts a barrel-shaped base-and-cover design, reducing multi-edge splicing and improving the product's appearance, firmness and standardization.
Fig. 4 is an exploded view of the body of the crown-winning service robot according to some embodiments of the present utility model. Referring to Fig. 4, the body 200 as a whole may have the shape of the numeral "1". The connection assembly 300 may be provided at its top for mounting the head 100; the rotating arm lamp assembly 500 is mounted on its sides, the leg assembly 600 at its bottom, and a "7"-shaped mounting plate 210 at its rear. The built-in radar sensor perceives the space, the human body, and behavior; this design is the key sensor arrangement that distinguishes the robot from traditional hardware. It enables the robot to recognize the behavior and demands of users in the space so that the robot actively outputs functions and services to the user, helps a single piece of hardware meet more user demands, greatly increases the robot's frequency of use, and, combined with the robot's full set of functions and content, increases the user's stickiness to the robot, laying a foundation for community digital operation and platform operation services to mine user value. The camera rotation module 400 may be disposed on the front side of the body 200, near the end adjoining the head 100. The camera rotation module 400 can rotate up or down to identify or monitor different environments, scenes, people, or objects, and the camera module's signal can be physically cut off by a privacy switch to protect user privacy. The body 200 may be formed from a U-shaped shell and a back cover that together define its hollow interior.
A built-in sensor set can be arranged in the inner cavity of the body 200 and may include a radar sensor, a magnetic sensor, a magnet, a wireless communication antenna, and the like. The built-in wireless antenna helps improve wireless-communication coverage. The built-in magnetic sensor provides state sensing and output-control instructions for the rotating arm lamp assembly 500 when it is folded closed, as well as state sensing and output-control instructions for the camera rotation module 400. Corresponding slots can be formed in the body 200 to serve as heat-dissipation grooves and holes; the grooves can be arranged close to the radar sensor, and the holes can be formed on both sides and the back of the body 200 to facilitate heat dissipation from the radar sensor and to form temperature isolation between the body 200 and the leg assembly 600.
The rotating arm lamp assembly 500, the leg assembly 600, and the functional base 700 can be given corresponding functions on demand. For example, the rotating arm lamp assembly 500 is equipped with multiple groups of LED lamps such as a night light, a blue-light-free flicker-free healthy lamp, a color-changing lamp, and an emergency lamp. Through the sensing data of the radar, magnetic, state, environment, gesture, distance, physical-sign, video, voice, and other sensors built into the body 200, the leg assembly 600, the connection assembly 300, the functional base 700, and other components, combined with factors such as time, environment, and space, the robot senses the user's behavior, gestures, habits, falls, state, and presence in the space, and the lighting is turned on or off automatically. The functional base 700, the leg assembly 600, and other components are provided with physical-sign sensors; when the user's behavior and/or presence and/or state is sensed in the space, the robot, combining factors such as time, environment, and health, actively reminds the user of, or attends to, physical-sign detection.
Modeled on a humanoid athlete winning the championship in an aerials event, the robot's product form endows the product with the connotation of motivating users. At the same time, through structural innovation the crown-winning service robot fuses and integrates multiple intelligent modules and sensors; it can sense the environment, physical signs, states, and user behaviors, habits, and demands, and with multi-scene output a single robot can address a household's many user pain points: encouragement; perception of signs, environment, behavior, habits, and state; household safety; fall and emergency alarms; visual intercom; video monitoring and OCR (Optical Character Recognition); health files; on-device and remote consultation; health supervision and management; home control; intelligent projection and screen casting; smart speaker; intelligent companionship; entertainment and somatosensory interaction; intelligent lighting; eye-protecting healthy light; supervised learning; native-language-environment interactive learning; correction of poor study habits; communication coverage; privacy protection; data security; and so on. The robot is installation-free, requires no companion system, and is easy to standardize, providing command-free active intelligent service. It thereby solves the series of problems that traditional hardware struggles to meet multiple demands, that the passive intelligence of traditional smart systems falls short and such systems are difficult to implement, that old homes are difficult to retrofit, and that standardization and deployment are difficult, while delivering a good active-intelligence user experience with high frequency of use and strong stickiness. Meanwhile, business-model innovation based on the robot allows users to keep generating repeat consumption while experiencing the services of an intelligent-community digital operation platform: the operator gains stable income and can invest more to perfect the services that circularly serve the user, such as community, life, O2O, housekeeping, safety, health, entertainment, and property services. As the services mature, the intelligent-community industrial ecology can develop in a virtuous circle rather than relying on the operator's free investment and service, making the life, work, study, entertainment, and home of family users easier, safer, and smarter.
The crown-winning service robot further includes: an AI core processor, a storage and expansion-storage unit, an input unit, an output unit, a communication unit, and a power supply unit; the storage and expansion-storage unit, the input unit, the output unit, and the communication unit are all communicatively connected to the AI core processor. The input unit includes, but is not limited to, the radar sensor and the built-in sensors.
Specifically, the input unit includes, but is not limited to, sensors such as geomagnetic, three-axis gyroscope, health, body temperature, air environment, distance, gesture, radar, camera, microphone array, privacy switch, state, anti-tamper switch, magnetic, and touch input. This built-in multi-sensor design lets the robot comprehensively collect a household's physical-sign, environment, and state data as well as the behaviors, habits, and demands of people in the indoor space, so that the robot can actively provide corresponding functions and services to users, which helps increase the frequency of use and the user's stickiness to the robot. At the same time, the robot can conveniently judge its own state in time, creating the conditions for the robot to serve users accurately.
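As a minimal sketch of how such a multi-sensor input unit might be aggregated and used for proactive service in software (the field names, thresholds, and message strings here are illustrative assumptions, not part of the utility model):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorSnapshot:
    """One polling cycle of the input unit (illustrative fields)."""
    radar_presence: bool          # radar sensor: someone is in the room
    temperature_c: float          # air-environment sensor
    air_quality_aqi: int          # air-environment sensor
    body_temp_c: Optional[float]  # body-temperature sensor, if measured
    privacy_switch_on: bool       # physical camera cut-off switch

def active_report(snap: SensorSnapshot) -> list[str]:
    """Build proactive messages without waiting for a user query."""
    if not snap.radar_presence:
        return []                 # nobody present: say nothing
    msgs = []
    if snap.air_quality_aqi > 100:            # assumed AQI threshold
        msgs.append("Air quality is poor; consider ventilating.")
    if snap.body_temp_c is not None and snap.body_temp_c >= 37.3:
        msgs.append("Elevated body temperature detected.")
    return msgs
```

A snapshot with no one present yields no report, matching the "command-free active service" idea: the robot speaks only when presence plus an out-of-range reading warrants it.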
The communication unit includes, but is not limited to: a dual-band WiFi module, a Bluetooth module, a dual-LAN module, an RF infrared module, and extension modules such as a power-line carrier (PLC) module, a Zigbee or Matter module, a LoRa module, and a 4G/5G module. The multi-network communication design, combined with the robot's structure, makes it convenient for the robot to act as the home communication gateway. The dual LAN interfaces and dual-band WiFi exploit the fact that a desktop scene is close to the household's wired communication interface to provide wireless coverage for the room, helping the user save on wireless-coverage cost, avoiding the unstable or absent signal of WiFi-repeater setups in large dwellings, and solving the connectivity of multiple communication devices on the desktop. With active intelligence added, the robot can manage the wireless coverage intelligently, which helps correct users' usage habits and protect network security. Meanwhile, the expandable communication-module design lets users customize modules to their own needs, saving cost, meeting personalized coverage requirements, and reducing the robot's selling price and production cost.
The output unit includes, but is not limited to: DO signals, a display screen, the micro-projection head, multi-channel LED lamps, a speaker, output interfaces, and the like. This multi-scene output design helps the robot address the user's multiple demands without the output of other companion systems, achieves installation without companion devices, and helps standardize the deployment of system functions and the robot. At the same time, multi-scene output increases the user's frequency of use and stickiness and improves the robot's entertainment interactivity and user experience.
The power supply module includes, but is not limited to, a power adapter and charging, energy-storage, and overcharge-protection modules, and is electrically connected to the communication unit, the processing unit, and the input unit respectively.
Through the above structure, with the ability to perceive the space and capture user behavior, the input/output units found in all traditional smart hardware become more intelligent and more interactive, realizing true intelligence for a home service robot and making the user's life, work, study, entertainment, and home easier, safer, healthier, more pleasant, and smarter.
For example: when the robot senses that the user has entered the room, the environment sensor actively detects indoor environment data and actively reports the conditions to the user, without the user having to query the robot. Another example: when the robot senses that someone has entered the room while in the armed state, it actively asks the user to confirm their identity (identity must be confirmed through other means when the camera is off or in the downward-monitoring state); if the identity is not confirmed within a user-defined time and/or the robot is moved, the system raises an alarm.
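The armed-state rule above can be sketched as a small decision function; the 30-second default timeout and the parameter names are assumptions for illustration, since the patent only specifies a user-defined time:

```python
def should_alarm(identity_confirmed: bool,
                 elapsed_s: float,
                 robot_moved: bool,
                 timeout_s: float = 30.0) -> bool:
    """Armed-state check: alarm if the robot itself is moved, or if
    nobody confirms their identity within the user-defined timeout."""
    if robot_moved:
        return True                # tampering overrides everything
    return (not identity_confirmed) and elapsed_s > timeout_s
```

For instance, an unconfirmed entry that has lasted 45 seconds (`should_alarm(False, 45.0, False)`) triggers the alarm, while a confirmed user never does.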
Another example: sensing that a child has come home, the robot actively initiates a conversation with the child in a foreign language to greet them and/or to conduct immersive interactive learning in that language. When the child plays indoors on weekends, the robot automatically plays foreign-language videos, music, poems, and the like that the child enjoys, and actively initiates dialogue with the child in the foreign language based on the played content and the user's behavior in the space, creating a foreign-language learning environment like actually living abroad and realizing immersive, environment-based interactive foreign-language learning.
Another example: when the robot senses that the user has fallen or is actively calling for help, the system automatically issues a pre-alarm; when the pre-alarm exceeds a user-defined time or is confirmed by the user, the system automatically alarms to a family member's mobile terminal, the property service-center platform, the service operation platform, or the like. Sensing that the user has returned home from a hospital visit, the robot actively reminds the user to place the doctor's diagnosis in front of it to be scanned into the health file. Sensing that the user comes home at night, it automatically turns on its built-in lamps for different scene lighting and/or starts projection and/or plays music to create a warm, comfortable environment for the family, all without companion sensing, hardware, or systems: a single product realizes the scene functions on its own, addressing the problem that users in older homes are unwilling to, or cannot easily, retrofit for smart features.
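The two-stage fall alarm described above (pre-alarm first, escalation on confirmation or timeout) can be sketched as a state function; the state names and 60-second default are illustrative assumptions:

```python
def fall_alarm_state(fall_detected: bool,
                     user_confirmed: bool,
                     elapsed_s: float,
                     timeout_s: float = 60.0) -> str:
    """Two-stage fall alarm: pre-alarm locally first, then escalate
    to the family mobile terminal / property or service platform when
    the user confirms or the user-defined timeout expires."""
    if not fall_detected:
        return "idle"
    if user_confirmed or elapsed_s > timeout_s:
        return "alarm"        # notify family phone / service platform
    return "pre-alarm"        # local reminder, awaiting confirmation
```

The pre-alarm window gives the user a chance to cancel a false positive before anyone outside the home is contacted.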
The crown-winning service robot provided by the utility model is an innovative product based on the desk lamp. It inherits the desk lamp's good characteristics: it is a household necessity, close to the user, equipped with its own scene output, close to a communication interface, at a certain height above the surface, used in many scenes, relatively fixed in position, portable, and installation-free. Through innovations in structure, system, method, and algorithm, a single product addresses many user pain points: indoor active lighting, emergency lighting, active sound, active projection, active care, OCR recognition, video monitoring, interaction, wireless coverage, space safety, environmental safety, home control, equipment safety, emergency help, fall alarms, physical-sign perception, health files, active health supervision and management, supervised learning, native-language-environment foreign-language learning, community services, privacy protection, data security, and more. Multiple robots, or a robot plus sensors, or a robot plus traditional smart hardware, can also cooperate, using the robot's active perception, algorithms, computing, communication, and data-storage capabilities to solve still more user pain points, so that users trust, rely on, and use the functions the robot provides with high frequency and strong stickiness. A robot grounded in solving user demands, with high frequency, strong stickiness, wide applicability, and good experience, gives intelligent-community digital operation a basis and value; through business-model innovation, robot-based intelligent-community digital operation can promote the virtuous-circle development of the industrial ecology, which is significant for the development of smart homes and smart communities.
Fig. 5 is a schematic structural diagram of the cover plate of the crown-winning service robot according to some embodiments of the present disclosure. Referring to Figs. 2 and 5, in some embodiments of the utility model, the connection assembly 300 includes: a cover plate 310, a rotary base 320, a rotating shaft 330, and a rotating shaft 331. The rotating shaft 330 is disposed on the rotary base 320 and connected to the head 100; the rotary base 320 is fixedly connected to the head 100 through the shaft center, so that the head 100 can rotate in a second direction. The cover plate 310 is arranged on the top of the body 200; the rotary base 320 is disposed on the cover plate 310, and the rotating shaft 331 is rotatably connected to the cover plate 310 so that the head 100 can rotate in a first direction. A projection module for video interaction with the user is provided in the head 100. The projection module includes a projector and a projection lens: the projector is disposed within the head 100, and the projection lens is disposed on the head 100 and turns as the head rotates.
Specifically, the cover plate 310 may have a square cross-section, and corresponding grooves, such as a front groove, may be formed on its edge to engage with the body 200. The top wall of the cover plate 310 is provided with a rotary groove 311; the rotating shaft 331 at the bottom of the rotary base 320 is rotatably connected in the rotary groove 311, and the rotating shaft 330 is arranged on the rotary base 320. The middle of the rotating shaft 330 has an enlarged section with a through hole passing through it and the rotary base 320 for cables; this design makes the connection assembly resemble an athlete's neck and allows the threading hole to be enlarged, making it easier for cables to pass through. The first direction is the circumferential direction of the rotating shaft 331: this shaft can drive the projection lens to rotate in the horizontal plane through an angle of at least 180 degrees. The second direction is the circumferential direction of the rotating shaft 330: this shaft can drive the projection lens to rotate up and down, i.e., in pitch, with a downward rotation angle of up to 60 degrees and an upward rotation angle of up to 30 degrees. It should be understood that the specific rotation angles can be designed according to the actual situation, and the utility model is not limited thereto.
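The stated travel limits (at least 180 degrees horizontally; 60 degrees down and 30 degrees up in pitch) would typically be enforced in the head-motion controller by clamping commanded angles; a minimal sketch, using the sign convention (assumed here) that downward pitch is negative:

```python
def clamp_yaw(angle_deg: float) -> float:
    """Horizontal rotation of the projection lens: 0 to 180 degrees
    (the patent states at least 180; 180 is used as the example limit)."""
    return max(0.0, min(180.0, angle_deg))

def clamp_pitch(angle_deg: float) -> float:
    """Vertical rotation: up to 60 degrees down (negative) and
    30 degrees up (positive)."""
    return max(-60.0, min(30.0, angle_deg))
```

A commanded pose of (200, -90) would thus be limited to (180, -60) before being sent to the shafts.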
With the projection module provided in the head 100, and combining the first and second rotation directions of the connection assembly 300, the head 100 can project toward different angles; combined with the robot's input and output modules, audio-visual interaction, somatosensory interaction, or content interaction with the user can be realized. The connection of the cover plate 310 with a locking member gives the head 100 and body 200 high connection stability, which suits users who prioritize equipment safety and user experience.
Fig. 6 is a schematic structural view of the magnetic base and the magnetic iron block of the connection assembly of the crown-winning service robot in some embodiments of the utility model. Referring to Fig. 6, in some embodiments the connection assembly 300 includes a magnetic base and a magnetic iron block 340. The magnetic base is disposed on the head 100; the magnetic iron block 340 is arranged on the top of the body 200 so that the two attract each other. Specifically, the magnetic base is arranged at the bottom of the head 100, with a male contact-type communication interface in its middle; a female contact-type communication interface is arranged correspondingly in the middle of the magnetic iron block 340, and a bracket 341 is arranged below it. The bracket 341 snaps into a buckle groove provided at the top of the body 200 and is fixed to the body 200 with screw fasteners.
The magnetic connection of the head and the body solves the problem of quick, secure attachment and suits scenes that demand high flexibility, adapting to users' multi-scene application needs. The fixed, rotatable connection of the head and the body solves the head-rotation problem for the built-in projection module, meeting the user's experience requirements for multi-scene projection output. The two modes exist independently, and users can purchase a module-swap upgrade without replacing the main device, so ever-changing experience requirements can be met at low cost.
Referring to Fig. 4, in some embodiments of the utility model, the camera rotation module 400 includes: a mounting seat 410, a rotating portion 420, a camera 430, and a built-in magnet. The mounting seat 410 is provided on the body 200; the rotating portion 420 is rotatably coupled to the mounting seat 410; the camera 430 is disposed on the rotating portion 420; and the built-in magnet is provided on the rotating portion 420 so that it turns with the rotating portion. A detection module judges the rotation angle of the rotating portion 420 by detecting the angle of the rotating magnet and, when sensing user behavior and demands in the space, decides from the state and angle of the rotating portion 420 whether to remind or attend to the user for a specific scene. For example: when sensing that the lady of the house has come home, the robot actively reminds the user to close the camera privacy switch, physically cutting off the camera signal and protecting the user's privacy. Another example: when sensing that a child has approached the desk and is preparing to study, the robot actively reminds the user, in the foreign language, to rotate the camera down to the bottom position (no reminder is needed if the camera is already in the downward-monitoring state), so that the camera can recognize the contents of a book or supervise the child's study while the child practices the foreign language, increasing the user's interactive experience.
Specifically, the side of the body 200 close to the head 100 protrudes outward to form the mounting seat 410, which together with the side of the body 200 forms a rotating cavity 411 for mounting the rotating portion 420; the rotating portion 420 is cylindrical as a whole and is rotatably connected in the rotating cavity 411 through a rotating shaft. The camera 430 is disposed on the side of the rotating portion 420, and a through hole communicating with the interior of the body 200 is provided at the bottom of the inner side of the rotating cavity 411 for routing a communication cable.
Two rotary magnets are disposed on the rotating portion 420, one above the other. The detection module includes a magnetic sensor disposed at a position where it can detect the states of the two rotary magnets, thereby determining the position of the rotating portion 420. This supports the video identification or monitoring of the environment, people, or objects and meets the experience requirements of the user's different scenes. Meanwhile, the manual-rotation design of the module protects the user's privacy, reduces product cost, and increases the user's hands-on interaction with the device. The camera 430 is also provided with a state sensor and, combined with the robot's other built-in sensors, can perceive user demands, usage habits, personal preferences, and the like. It should be understood that the camera rotation module 400 may be disposed at other positions to meet user requirements, and the utility model is not limited thereto.
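Reading two stacked magnets with one fixed magnetic sensor gives a coarse pose decoder; the mapping below from magnet readings to camera poses is a hypothetical illustration of the scheme, not the mapping specified by the patent:

```python
def camera_position(upper_magnet: bool, lower_magnet: bool) -> str:
    """Infer the rotating portion's pose from which of the two rotary
    magnets the fixed magnetic sensor currently detects. Which magnet
    corresponds to which pose is an assumed example mapping."""
    if upper_magnet and not lower_magnet:
        return "forward"   # camera facing the room
    if lower_magnet and not upper_magnet:
        return "down"      # camera rotated down (desk monitoring)
    return "unknown"       # mid-travel or ambiguous reading
```

With this pose available, the reminder logic in the text (e.g., "no reminder if already monitoring downward") becomes a simple check against `"down"`.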
Fig. 7 is a schematic top view of the rotating arm lamp assembly of the crown-winning service robot in some embodiments of the utility model. Fig. 8 is a schematic bottom view of the rotating arm lamp assembly of the crown-winning service robot in some embodiments of the utility model. Referring to Figs. 7 and 8, in some embodiments the rotating arm lamp assembly 500 includes: an arm rod 510, an upper arc lamp 520, a lower bevel lamp 530, a connection magnet, and an adsorption part 540. The upper arc lamp 520 is arranged on the top wall of the arm rod 510 for emergency, night, color-changing scene, and atmosphere lighting. The lower bevel lamp 530 is disposed on the bottom wall of the arm rod 510 for blue-light-free, flicker-free healthy lighting. The connection magnet is disposed on the body 200. The adsorption part 540 is provided on the inner vertical surface of the arm rod 510; it is screwed onto the arm rod 510 and attracted to the magnet.
Specifically, the arm rods 510 and the body 200 form an upper torso; the left and right arm rods 510 can rotate to unfold and fold the rotating arm lamp assembly 500. The middle of each arm rod 510 can carry an elbow-guard shape and its front end a glove shape, making the arm rod 510 look more like a human arm. The upper-and-lower double-sided illumination design of the rotating arm lamp increases the robot's output scenes and meets the experience requirements of the user's different scenes; the lower-bevel-lamp design increases the illumination intensity in the user's working area; and the shading-edge design protects the user's eye health by preventing the eyes from looking directly at the light source. The inner-side-wall adsorption part keeps the arm rod stably fixed and durable when unfolded, while reducing the design and production difficulty and the requirements on the arm-rod mounting seat, helping realize the rotating-arm function at low cost. The simple, attractive design of the arm rod 510 increases the product's interactive experience and reduces the robot's footprint, letting it adapt to more scene applications; together with a replacement head and the leg assembly 600, it can form an elongated trophy shape, which, together with the product form of a skier winning the championship on the aerials platform, encourages users: only with a striving spirit can one win the trophy in one's own field.
The upper arc lamp 520 and the lower bevel lamp 530 can be combined with the robot's built-in radar sensor, gesture sensor, distance sensor, and so on to conveniently realize intelligent lighting in which the robot senses indoor user behavior and actively switches the lamps on and off. For example: when the robot senses a user sitting at the desk, the lower bevel lighting lamp turns on automatically; when it senses someone entering the room, the upper arc lamp 520 turns on automatically; when it senses the user sleeping, all lamps turn off automatically; when it senses the user getting up at night, night lighting turns on automatically, and when it senses the user back in bed, the night lamp turns off; on holidays, when it senses the user's presence, the red ambient lighting of the upper arc lamp 520 turns on automatically; and when the user needs lighting at night during a power outage, the emergency lamp turns on automatically, and so on. The emergency lamp is powered by the robot's built-in energy-storage battery and uses an energy-saving, low-power LED of suitable brightness to extend the illumination time.
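The scenario list above is naturally expressed as an event-to-action rule table; the event names, context labels, and lamp-group identifiers below are illustrative assumptions layered over the patent's examples:

```python
# (sensed event, context) -> (lamp group, action)
LIGHTING_RULES = {
    ("seated_at_desk", "normal"):  ("lower_bevel", "on"),
    ("entered_room",   "normal"):  ("upper_arc", "on"),
    ("asleep",         "normal"):  ("all", "off"),
    ("up_at_night",    "normal"):  ("night", "on"),
    ("back_to_bed",    "normal"):  ("night", "off"),
    ("present",        "holiday"): ("upper_arc_red", "on"),
    ("needs_light",    "outage"):  ("emergency", "on"),
}

def lighting_action(event: str, context: str = "normal"):
    """Return the (lamp group, action) pair for a sensed event in the
    given context, or None when no rule matches."""
    return LIGHTING_RULES.get((event, context))
```

A table-driven design keeps the sensing side (radar, gesture, distance) decoupled from the lamp side, so new scenes can be added without touching either.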
The arc-surface design helps expand the illumination range of the built-in lamp, and the lower-bevel design helps project the light source onto the desktop working area; meanwhile, the bilaterally symmetric light-source design avoids illumination shadows. A correspondingly shaped shade 560 may also be provided on the arm rod 510 to keep the light source from directly illuminating the user's eyes at eye height. Various decorative patterns may also be provided on the upper arc lamp 520 so that the arm rod 510 resembles a human hand and integrates with the athlete form.
The adsorption part 540 may be a metal sheet provided on the inner side surface of the arm rod 510. The vertical metal sheet on the inner side is designed to mate with the robot's built-in connection magnet: when the left and right arm rods 510 rotate upward to a certain position, they are attracted and held, improving the durability of the rotating arm lamp assembly 500's fixation.
In some embodiments of the utility model, the body 200 is provided with a receiving notch, an arm-rod mounting bin, and an arm-rod mounting seat. The arm-rod mounting seat is rotatably connected to the arm rod. A connection magnet is arranged in the arm-rod mounting bin for attracting and fixing the adsorption part 540 after the arm rod 510 is rotated upward. The receiving notch receives the arm rod 510 when it is folded downward. The arm rod 510 is designed together with the receiving notch of the body 200: when the arm rods 510 fold down to fill the notch, the arm rods 510, body 200, head 100, and leg assembly 600 together form a cylindrical whole, making the robot attractive, space-saving, stable on a desktop, and suited to multi-scene applications. Together with the functional base 700 they form a cylindrical trophy shape, continuing the athlete's skiing-championship scene and encouraging users that only through hard striving can one win the trophy in one's own field.
In some embodiments of the present invention, the connection assembly 300, the camera rotation module 400, the rotation arm lamp assembly 500, the leg assembly 600 and the functional base 700 are all provided with a state detection module, and the state detection module is used for detecting the operation state and the module state of the corresponding structure.
Specifically, the state detection module may include a processing unit and corresponding sensors. The crown-winning service robot can determine the state of a component and/or module and/or the complete machine from the data sensed by the sensors, so that the robot can actively control the component, module, or machine; produce multi-way output; actively remind, alarm, or attend to the user; interact with the scene; protect the equipment; protect user privacy; and so on. This design protects the safety of the robot's equipment, reveals the user's habits and preferences to improve the interactive experience, allows tracing of fault causes and of the robot's working conditions and states, and lays a foundation for the robot to address the user's multiple demands.
Referring to Fig. 4, in some embodiments of the utility model, the leg assembly 600 includes a leg 610. The leg 610 is squat-shaped, is arranged on the lower side of the body 200, and is connected to the slide plate 620 of the functional base 700. Heat-dissipation and ventilation holes are formed in the leg 610, and sensors are arranged inside it to detect the air environment, body temperature, human body, and/or gestures; the side of the body close to the leg 610 and the side of the functional base 700 close to the leg 610 are both provided with a heat-insulation protection module.
Specifically, the leg 610 includes a thigh and a shank connected at a bend so that the leg 610 assumes a squatting shape simulating a skiing posture, and the slide plate 620 is provided between the bottom of the leg assembly 600 and the functional base 700. The interior of the leg 610 is hollow and houses an environment sensor, a body-temperature sensor, and a distance or posture sensor for detecting temperature, humidity, air quality, smoke, and so on in the environment. Corresponding heat-dissipation holes can be provided on the sides of the leg 610, cooperating with the heat-dissipation and ventilation holes on its left, right, and rear, to prevent the temperature-sensitive sensors inside the leg 610 from being affected by the temperature of the body 200 and the functional base, ensuring sensing accuracy. Through the leg assembly 600, room-environment perception, human body-temperature perception, and user-gesture perception can be realized. Combined with the sensing capabilities of the built-in radar sensor and the close-range human-body sensor, functions such as actively reporting environmental conditions and reminding the user to measure body temperature, or sensing a hand approaching for a body-temperature reading, are conveniently realized.
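The hand-approach trigger for a body-temperature reading can be sketched as a simple gate on the radar presence and the leg-mounted distance sensor; the 10 cm trigger distance is an illustrative assumption, not a figure from the patent:

```python
def start_temperature_reading(presence: bool,
                              hand_distance_cm: float,
                              trigger_cm: float = 10.0) -> bool:
    """Begin a body-temperature measurement when someone is present in
    the room and a hand approaches the leg-mounted sensor to within
    trigger_cm. The threshold is an assumed example value."""
    return presence and hand_distance_cm <= trigger_cm
```

Gating on radar presence first avoids spurious readings from objects passing the distance sensor when no one is in the room.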
Meanwhile, no heat-dissipation or ventilation holes are provided on the front surface of any part of the robot. This mainly protects user privacy, preventing bad actors from concealing covert sensors in front-facing vents and thereby harming the user's interests, and also preserves the appearance of the product. The built-in sensor module comprises the air-environment, body-temperature, human-presence and/or gesture sensors arranged in the leg 610; this design enables the robot to collect more comprehensive environment and user-sign data and to build a health profile for the user, through which targeted health management of the user's health can be carried out effectively. In addition, the robot's built-in sign sensors reduce the user's cost of ownership, increase user stickiness and frequency of use, and lay a foundation for innovative business models in the digital operation of smart homes and communities. The collected comprehensive data can also be analyzed to profile the user's habits, laying a foundation for the robot to serve the user better.
Fig. 9 is a schematic view of the base structure of a crown-capturing service robot according to some embodiments of the present invention. Referring to fig. 1, 4 and 9, in some embodiments of the present invention, the functional base 700 includes a functional table 701, a display screen 710 and a base cover 720. The functional table 701 is disposed obliquely, and its top end is connected to the leg 610. The display screen 710 is provided on the top surface of the functional table 701 to display dynamic pictures. The base cover 720 is disposed on the bottom surface of the functional table 701, and a plurality of functional communication ports are provided on the base cover 720. In addition, the base cover 720 and the 7-shaped back-plate cover of the body can be joined into one piece and made thinner and detachable, which reduces cost and simplifies the structure.
Specifically, the functional table 701 is an irregular cylinder formed by cutting the two sides, the front side, and the bottom of an inclined cylinder into planes; its top surface forms an inclined angle with the horizontal plane, and the display screen 710 is arranged on the top surface so that a user can conveniently operate, watch, and experience the functions, services, and content provided by the robot. This design facilitates signal and interface input/output on the left, right, and front sides of the functional table 701 and reduces the space occupied by the inclined cylinder; the horizontal cutting plane also helps the robot stand or be mounted stably, enlarges the interior space of the ski-platform functional base, allows additional sensors to be installed above the display screen, and improves the overall aesthetics of the robot. The round side can be painted in earth tones and the cylindrical top surface in snow-field white, symbolizing the Olympic spirit and the fighting spirit of global participation in the Winter Olympic Games.
The front side of the functional table 701 is provided with a microphone array 726; the upper part of the top surface is provided with a sign sensor 740; and the left and right sides are provided with a plurality of functional communication ports, such as a 3.5 mm headphone jack, an SPDIF optical audio interface 721, an HDMI interface 722, a USB interface 723, a volume adjustment switch 725, a privacy switch, and a power switch. The multi-port and sign-sensor design helps improve and expand the robot's interactive experience, protects user privacy, meets the user's multi-scene needs, and increases user stickiness; it also lets the robot analyze the user's habits and build a user profile from how often and in which scenes each interface and sensor is used, so that in the future the robot can provide functions and services more proactively. Placing the sign sensor on the top surface of the cylindrical functional part not only encourages frequent daily use, but, more importantly, attracts the user to feed data from other medical-grade sign sensors in the home into the robot, so that the robot can collect more comprehensive data and build a health profile for the user, through which targeted health management can be carried out effectively. In addition, the robot's built-in sign sensors reduce the user's cost of ownership, increase user stickiness and frequency of use, and lay a foundation for innovative business models in the digital operation of smart homes and communities.
A strip-shaped sliding plate 620, shared with the leg assembly 600, is provided on the top surface of the functional table 701, extending downward to the edge of the cylindrical functional part and interrupted in the middle by the display screen 710. The display screen 710 may be a touch screen, so that the real strip-shaped sliding plate 620 and a virtual slide shown on the display screen 710 together form one complete slide 620. Different ski-field scenes and skiing actions can be combined on the display screen to realize multiple virtual downhill scenes, and the projection module can also project the athlete's championship-winning scene. Combining the real and the virtual in this way both inspires the user and increases the entertainment interactivity of the robot, improving user stickiness and frequency of use.
The base cover 720 is shaped approximately like an excavator claw; its back is provided with dual RJ45 network interfaces, a Type-C interface, an emergency button 727, a distance-sensing position, a sound outlet, heat-dissipation holes, and fixing seats. The open side of the base cover 720 is fixed with screws to the bottom edge of the functional table 701 to form the ski-platform functional base 700. The excavator-claw shape enlarges the interior space of the ski-platform functional base 700, and combining the claw-shaped base cover 720 with the functional table 701 eases the hardware implementation, improves the appearance and stability of the functional base 700, and reduces the desktop space occupied. The distance-sensing position on the base cover 720 lets the robot sense and judge the state of the desktop scene, such as the distance to the wall, from which it can judge whether the user has moved the robot and/or whether the robot is at the reference detection point, so that the robot can proactively provide scene functions and services for the user.
The fixing seats 728 are arranged in pairs, one above the other, with a pair on each of the left and right sides of the back, forming dual upper-and-lower fixing seats 728 by which the desk-lamp robot can be fixed to a desktop or mounted on a wall via a bracket. The dual network interfaces let the robot draw its connection from a room network port, converting the wired network into Wi-Fi and/or IoT wireless coverage indoors, while the output network port lets the user connect an external computer or smart device. Moreover, by combining the built-in radar sensor's ability to perceive user behavior in the indoor space, the indoor wireless coverage can be switched on and off intelligently, and the robot can proactively care for the user by toggling the privacy switch, adjusting volume, and/or issuing instructions. For example: if the robot senses that a child has gone to bed, it automatically shuts off the Wi-Fi coverage to keep the child from browsing on a phone in bed; if it senses the lady of the house entering the room, it proactively reminds her to switch off the camera sensor to protect her privacy; if it senses the user getting up at night, it automatically turns on the small night lamp on its back.
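The radar-driven care behaviors described above can be sketched as a small rule table mapping sensed room states to proactive actions. This is a minimal illustration only; the state fields, action names, and thresholds are assumptions of this sketch, not details from the utility model:

```python
from dataclasses import dataclass


@dataclass
class RoomState:
    occupant: str  # e.g. "child", "adult" (illustrative categories)
    posture: str   # e.g. "in_bed", "standing"
    hour: int      # local hour of day, 0-23


def decide_actions(state: RoomState) -> list:
    """Map a sensed room state to proactive device actions."""
    actions = []
    # Child has gone to bed: cut the Wi-Fi coverage to discourage phone use.
    if state.occupant == "child" and state.posture == "in_bed":
        actions.append("wifi_off")
    # Someone gets up during the night: switch on the small night lamp.
    if state.posture == "standing" and (state.hour >= 23 or state.hour < 6):
        actions.append("night_lamp_on")
    return actions
```

In practice such rules would be driven by the radar sensor's presence and posture classification rather than hand-set fields, but the mapping from perception to proactive service follows the same shape.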
Fig. 10 is a schematic structural diagram of a first implementation of a mounting bracket of a crown-capturing service robot according to some embodiments of the present invention. Fig. 11 is a schematic structural diagram of a second implementation of the mounting bracket. Fig. 12 is a schematic structural view of an object-supporting bracket of the crown-capturing service robot. Referring to fig. 10, 11 and 12, in some embodiments of the present invention, the crown-capturing service robot further includes a mounting bracket 730. The mounting bracket 730 is provided on the base cover 720 and includes, but is not limited to, a triangular placement reinforcing bracket 732, a clamp-type fixed mounting bracket 731, and a wall-mounted fixed mounting bracket. The object-supporting bracket 733 is attached to the front of the functional table 701 by a snap fit and, together with the functional table 701, forms a space for bearing objects; it is shaped like the head of the sliding plate 620, so the user can conveniently rest items such as a mobile phone or a book on it while it visually continues the skier's slide, improving both the appearance and the interactive experience of the product. The mounting bracket 730 is used together with the left and right fixing seats 728 on the back of the desk-lamp robot to meet the various needs of different users in different scenes. The clamp-type mounting bracket 731 comprises a dual-fixing-seat connecting part and a clamp-type table-edge clamping part, and fixes the robot to a table to protect the safety of the equipment.
The placement-type mounting bracket 732 is triangular or multi-triangular; one side connects to the robot's dual fixing seats 728, and the other side rests on the desktop, so that the robot is supported and cannot tip backward or sideways, protecting the safety of the equipment.
The wall-mounted bracket includes wall fixing screws, a vertical rod, and a rod-head fixing buckle. The robot's fixing seats 728 slide onto the vertical rod, and the rod-head fixing buckle closes over the rod to form a closed loop, preventing the robot from falling off and the rod from breaking. The bracket forms are not limited to the standard accessories shipped with the product; a user may also customize or purchase mounting brackets 730 of various styles to meet personal and aesthetic needs. This design improves the stability of placement or installation of the robot and adapts it to the user's multi-scene needs.
The object-supporting bracket 733 is shaped approximately like a snow mound and comprises a dual fixing plate, a dual sliding-plate head, and a vertical connecting piece. The sliding-plate head is perpendicular to the sliding plate 620; the dual fixing plate lies horizontally on the surface; the vertical connecting piece stands on the top wall of the dual fixing plate, and the sliding-plate head is arranged on the vertical connecting piece. The three parts may be integrally formed or welded together. This design lets the user charge a mobile phone or prop up a book on the inclined surface for study, while more vividly recreating the championship-winning athlete's skiing scene to inspire the user. It avoids the functional base 700 occupying extra desktop space, and the snap-fit connection is easy to remove; the cavity formed between the object-supporting bracket 733 and the functional base 700 lets the user store items such as a mobile phone, a book, or a dictionary, improving desktop space utilization as well as user experience and product stickiness.
In some embodiments of the present application, the crown-capturing service robot may further be provided with various decorative patterns or colors. Positions for such decoration include, but are not limited to, head features such as the goggles of the head 100; hand features such as the protected elbows, glove shapes, and humanoid contours of the rotating arm lamp assembly 500; the front and round sides of the functional table 701; the lower part of the leg assembly 600; the sliding plate 620; the base cover 720; and other overall surfaces. This design improves the overall appearance of the robot, fosters a sense of intimacy and emotional resonance between the user and the product, reduces excessive user demands on the external form, and facilitates standardization while lowering the cost of implementing and iteratively optimizing the product.
Based on the concept of the crown-capturing service robot, fig. 13 is a flowchart of a command-free active intelligence implementation method according to an embodiment of the present invention. As shown in fig. 13, the method includes the following steps:
S110, determining a reference detection point for placing the crown-capturing service robot, and guiding the user to arrange the crown-capturing service robot at the reference detection point.
The reference detection point is the position where the crown-capturing service robot is most frequently placed; a robot with the advantages and characteristics of a desk lamp is usually placed near a wall, such as on a desktop, at a bedside, or beside a sofa. In addition, fixed radar sensors may be installed and networked to extend the spatial perception range, and their corresponding reference detection points are usually the walls of a room.
In practical application, after a user or installer powers on the crown-capturing service robot and/or installs and networks radar-sensor reference detection points in other rooms that communicate normally with it, the robot senses the user's presence and proactively guides the user, by voice and/or on-screen display and/or projection and/or light, to place the robot in its most frequently used scene (the reference detection point). The robot then judges, using its built-in sensors, whether it is actually at the reference detection point. For example, if it senses that its back is more than 50 cm from the wall, it proactively asks the user, by voice, display, or light, to confirm the reason or the authenticity of the position. If the privacy switch is off (while the sensing function is in the on state), or the camera module is rotated upward to its limit (by default the sensing function works in the forward-monitoring state or at any intermediate shaft position, in which the robot senses normally in the horizontal direction; if the robot is configured with a fixed camera module, this step is unnecessary for the user), then after multi-mode confirmation, or after sensing that its position has been moved, the robot reminds the user by voice to restore or adjust the sensing direction. If the user does not act, the system reminds the user again, by voice and/or light, the next time it proactively senses the user's presence.
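The placement plausibility check above (back-facing distance to the wall greater than 50 cm triggers a confirmation prompt) can be expressed as a one-threshold decision. A minimal sketch, in which the function name and returned action labels are illustrative assumptions:

```python
# Threshold taken from the description above: a robot at its reference
# detection point is expected to sit against a wall within 50 cm.
WALL_THRESHOLD_CM = 50


def check_reference_point(back_distance_cm: float) -> str:
    """Return the action the robot should take for a given wall distance."""
    if back_distance_cm > WALL_THRESHOLD_CM:
        # Too far from the wall: proactively ask the user to confirm the
        # placement or explain why the robot is not at the reference point.
        return "prompt_user_to_confirm_position"
    return "accept_reference_point"
```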
The crown-capturing service robot can be installed in scenes such as a living room, dining room, bedroom, or study, and also in many settings such as offices, apartments, conference rooms, wards, exhibition rooms, stores, schools, and factories. Meanwhile, considering the cost, demand, and deployment problems of whole-space sensing, there are spaces where the robot itself need not be deployed (such as a kitchen, bathroom, corridor, elevator hall, bedroom, or living room); these can communicate over the network with reference-detection-point radar sensors fixedly installed in those spaces, thereby extending the robot's spatial perception range (one crown-capturing service robot manages the reference detection points of several rooms at once). A household can thus achieve whole-space intelligent perception based on at least one crown-capturing service robot, realizing true indoor whole-space command-free active intelligence. Of course, a fixed service robot whose perception range is extended by networked, fixedly installed radar-sensor reference detection points could also realize indoor whole-space command-free active intelligence; the advantages of the crown-capturing service robot over a fixed one are that its position is flexible, it fits more scenes, it stays close to the user, it is easy to connect and power, and it avoids installation and pairing, making deployment easier. The key problem in realizing command-free active intelligence is therefore how to convert the robot's perception of the user in the indoor space into judgments about the user's behavior and needs.
To this end, the method innovates by applying algorithmic judgment to the spatial perception of the user by the crown-capturing service robot and by the reference-detection-point sensors that extend its perception range, so as to identify the user's behavior and needs within the home space, and to collect information and/or output built-in scenes and/or output corresponding functions and services through other networked smart devices or systems. The crown-capturing service robot can thus realize command-free active intelligence independently and quickly, and can also endow other networked traditional smart devices with command-free active intelligence, thoroughly resolving the traditional home smart system's problems of passivity, manual operation, user self-management, inconvenient voice control, complex integration, difficult installation, difficult standardization, and difficult promotion and deployment, and making the user's life, work, study, entertainment, and home easier, safer, and smarter.
Fig. 14 is a sub-flowchart of the command-free active intelligence implementation method according to an embodiment of the present invention.
S120, performing indoor space sensing based on the reference detection point, and configuring a spatial structure coordinate graph according to the sensing result.
The sensing result is obtained by the crown-capturing service robot and/or the fixedly installed radar sensors that extend its networked perception range detecting indoor objects and structures. Once the indoor spatial environment is determined, a spatial structure coordinate graph is derived from the spatial structure; the coordinate graph is a parameterized description of the indoor spatial environment.
Specifically, the configuration of the spatial structure coordinate graph in this embodiment proceeds in two modes, manual and automatic, where the automatic mode further covers three cases: image data, radar data, and combined multi-sensor data. That is, step S120 includes steps S121 to S124:
S121, determining a spatial structure layout according to the user's adjustment operations and confirmation instructions, based on a preset structure layout and/or an actual structure layout imported by the user.
In this embodiment, the crown-capturing service robot further provides a system modeling program configuration interface, through which the modeling program displays on screen and/or broadcasts by voice a preset structure layout diagram and/or an actual structure layout diagram imported by the user; an import path for the actual layout diagram is also configured. Based on this interface, the file format and the parameters of conventional main structures and articles can be presented for the user to confirm or adjust (for example, the positions and specifications of doors, windows, the sofa, the television, and the wall lengths), so that the user completes the input according to the content or guidance and marks one to three reference detection point positions. If the user marks more than three reference detection points, or the system senses that any point of an indoor room exceeding a preset size could become a detection point, the crown-capturing service robot proactively reminds the user, by voice, projection, and/or light, to install at least three positioning beacons or base stations at positions with distinctive indoor structural features and to mark their positions on the spatial structure diagram. Combining the detection direction of the geomagnetic sensor with the robot's own detection direction, an indoor spatial structure layout diagram based on the reference detection points is then generated.
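The requirement of at least three positioning beacons above corresponds to classical 2-D trilateration: three beacon-to-target ranges pin down a unique planar position. A minimal sketch under that assumption; the beacon coordinates and function name are illustrative, not part of the utility model:

```python
def trilaterate(b1, b2, b3, d1, d2, d3):
    """Solve for (x, y) from three beacon positions and measured ranges.

    Subtracting the circle equations pairwise eliminates the quadratic
    terms and leaves a 2x2 linear system in x and y.
    """
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    A = 2 * (x2 - x1)
    B = 2 * (y2 - y1)
    C = d1 ** 2 - d2 ** 2 - x1 ** 2 + x2 ** 2 - y1 ** 2 + y2 ** 2
    D = 2 * (x3 - x2)
    E = 2 * (y3 - y2)
    F = d2 ** 2 - d3 ** 2 - x2 ** 2 + x3 ** 2 - y2 ** 2 + y3 ** 2
    det = A * E - B * D  # zero when the beacons are collinear
    x = (C * E - F * B) / det
    y = (A * F - D * C) / det
    return (round(x, 3), round(y, 3))
```

Because the determinant vanishes for collinear beacons, placing them at positions with distinct structural features (as the text advises) also keeps the geometry well conditioned.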
S122, if no user adjustment operation or confirmation instruction is detected, acquiring an indoor space image, identifying indoor articles and spatial structure features from the image, and generating a spatial structure layout from those articles and features combined with preset feature data.
When the crown-capturing service robot senses that the user is present in the space, the screen display and/or voice and/or projection and/or light guide the user to input the configuration content. If the user does not operate (or is unwilling to operate) beyond a preset custom time, or directly confirms non-operation, the robot automatically starts the system modeling program after sensing that the user has left: for example, while a user is sensed in the space, the start of the modeling program is suspended, or the robot outputs on screen, by projection, or by active voice "starting the system modeling program" and "please allow a custom time (for example, 5 minutes) for environment adaptation and leave the room", so that the system can identify and model the space. The single or dual camera module then photographs the indoor space, and recognition technology identifies the main conventional indoor articles and spatial structure features in the pictures (such as beds, sofas, windows, bedside tables, tables, chairs, doors, and floor tiles). The robot photographs at a fixed scale (for example, when 1 cm measured horizontally or vertically in the picture at a shooting distance of 1 m corresponds to an actual object size of 0.2 m; conversely, if an object of known size 0.2 m measures 1 cm in the picture, the distance from the camera to the object is 1 m) and uses default or user-confirmed feature data of common household articles and structures, such as a door (0.9 m wide, 1.9 m high), a window (0.9-1.05 m high), a sofa (seat surface of a typical single seat 0.42 m high), a bed (0.5 m high, 1.2-1.8 m wide, 1.9-2 m long), a bedside table (0.5 m deep, 0.4 m high), and a table (0.8 m wide, 0.8 m long), so that it can judge the sizes, shapes, and spatial positions of objects and, combined with the geomagnetically sensed direction, generate a spatial structure layout of the reference detection area. When the robot senses that the user is present, it actively outputs by voice, screen, or projection a request such as "please rotate the robot 60 degrees to the left or right in place" (the monitoring angle of the configured camera module is generally not less than 60 degrees, images shot within that angle deform little, and rotating 60 degrees to each side splices the images into exactly 180 degrees, so that a crown-capturing service robot placed against a wall can perceive the indoor space comprehensively). When the robot senses the rotation, it judges via the geomagnetic sensor and/or triaxial gyroscope that 60 degrees have been reached and proactively stops the user; after sensing that the user has left the room, it restarts the modeling program and repeats the previous steps, generating spatial structure layouts for the areas to the left and right of the reference area. Finally, the robot splices the layouts of the reference detection area and the left and right detection areas into one complete spatial structure layout map, and generates an indoor spatial structure layout diagram based on its reference detection position, taking the reference detection point as the coordinate origin and using the geomagnetic sensor for orientation.
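The fixed-scale rule above (at a 1 m shooting distance, 1 cm in the image corresponds to 0.2 m of real object) is a similar-triangles proportion: apparent size scales inversely with distance. A minimal sketch, in which the calibration constant and function names are illustrative assumptions rather than camera specifications from the utility model:

```python
# Calibration stated in the description: 1 cm in the image at a 1 m
# shooting distance corresponds to 0.2 m of real object.
SCALE_M_PER_CM_AT_1M = 0.2


def actual_size_m(image_size_cm: float, dist_m: float) -> float:
    """Estimate real object size from its measured image size and distance."""
    return image_size_cm * SCALE_M_PER_CM_AT_1M * dist_m


def distance_m(image_size_cm: float, known_size_m: float) -> float:
    """Conversely, estimate camera distance from an object of known size."""
    return known_size_m / (image_size_cm * SCALE_M_PER_CM_AT_1M)
```

This is how the default feature data becomes useful: a door of known width 0.9 m that measures 1.5 cm in the image would be estimated at 0.9 / (1.5 x 0.2) = 3 m from the camera.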
S123, or, if no user adjustment operation or confirmation instruction is detected, acquiring indoor radar detection data, identifying indoor articles and spatial structure features from the radar data, and generating a spatial structure layout from those articles, the system's default specification parameters, and the structure features combined with preset feature data.
When a user or installer powers on the crown-capturing service robot, the robot and/or the fixedly installed radar sensors that extend its networked perception range sense that the user is present indoors, and the robot proactively outputs voice, screen display, or projection to guide the user to place it in the most frequently used scene (the reference detection point) with its back parallel to the wall, and to input the configuration content. If the user does not operate (or is unwilling to operate) beyond the custom time, or directly confirms non-operation, the system starts the automatic modeling configuration program. The robot starts a radar static-object detection mode and describes the indoor spatial structure and the shapes and sizes of objects from the size of each electromagnetic reflection area, the direction and distance of the radar relative to the object, and the system's default specification parameters, generating a spatial structure layout diagram (for example, when the robot is configured with a micro projector at a throw of 2 m, the projection screen area is 60 inches). If the robot's built-in single radar sensor has a limited detection angle (for instance, if a single radar covers 90 degrees, dual radars can form a 180-degree detection angle), then when the system senses the user's presence it proactively asks the user to rotate the robot leftward and/or rightward so that the system can generate a complete spatial structure layout. The crown-capturing service robot then compares the generated indoor layout data with the general feature data of conventional household objects and structures to judge whether each detected specification is realistic. If a specification does not conform, the system automatically re-detects the object and compares the specification parameters again; if re-detection exceeds a custom number of attempts and the deviation remains large, one recognition anomaly is recorded, the spatial structure layout diagram is still generated, and the object or space with large deviation is marked. When the system senses that the user has left home, the camera is enabled, and the camera is in the forward-monitoring state, the object specification is re-checked by video recognition, or confirmed with the user by active voice, screen display, or projection; for example, on sensing the user's presence, the robot actively asks by voice: "Master, may I ask how wide the door of the room is?". If the video re-check or the user's confirmation shows a large deviation between the actual and detected specifications, for instance the system detected a door width of 0.5 m while the actual width is 1.2 m, the system automatically feeds the case back to the service platform for algorithm optimization and verification.
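The spec-plausibility loop above, comparing a detection against typical household ranges, re-detecting a bounded number of times, and flagging a persistent deviation as an anomaly, can be sketched as follows. The ranges partially follow the feature data given earlier; the retry limit, dictionary keys, and return labels are assumptions of this illustration:

```python
# Typical ranges (metres); door and window values follow the feature data
# quoted in the description, the rest are illustrative.
TYPICAL_RANGES_M = {
    "door_width": (0.7, 1.2),
    "window_height": (0.9, 1.05),
    "bed_length": (1.9, 2.0),
}


def validate_spec(kind: str, measure, max_retries: int = 3):
    """measure() returns a fresh detection; flag an anomaly when every
    reading falls outside the typical range for this object kind."""
    lo, hi = TYPICAL_RANGES_M[kind]
    value = None
    for _ in range(max_retries):
        value = measure()
        if lo <= value <= hi:
            return ("ok", value)
    # Persistent deviation: record one anomaly and keep the last reading
    # marked for later video recognition or user confirmation.
    return ("anomaly", value)
```

A detected door width of 0.5 m, as in the example above, would come back as an anomaly and be queued for the video or voice re-check.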
S124, establishing a spatial structure coordinate graph with the reference detection point as the coordinate origin, based on the spatial structure layout diagram and the perception direction of the crown-capturing service robot.
The crown-capturing service robot establishes a logical relationship among the spatial structure layout diagram, the heading detected by the geomagnetic sensor, the detection direction, angle, and range of the robot and/or of radar sensors in other rooms with normal networked communication, and the coordinates of the reference detection point, and generates a spatial structure coordinate diagram: a human-body coordinate obtained while the user moves within the robot's detection range can then be mapped to the corresponding spatial position or coordinate in the indoor spatial structure layout diagram. For example, the reference detection point (0, 0) may correspond to the wall-side position at the middle of the desk in the indoor layout; likewise, a user coordinate (x, y) within the robot's detection range may correspond to the midpoint (x1, y1) of the door in the layout diagram. When the crown-capturing service robot senses the user, it actively asks whether there are other frequently used locations in the room; if the user confirms there are, the user is asked to place the robot at each of those locations for system configuration. The robot then judges its spatial position autonomously and, by combining the spatial structure coordinate diagram, the position of the reference detection point, its detection angle and direction, and the heading detected by the geomagnetic sensor, generates a spatial structure coordinate diagram valid for the robot at any spatial position, detection direction, and range.
The spatial structure layout diagram is only an ordinary plan view, and the reference detection point is only one point on that plan; the plan's two-dimensional rectangular coordinates must therefore be associated with the polar coordinates detected by the crown-capturing service robot, so that a person located at a given polar coordinate within the detection range of the robot — or of radar sensors in other rooms with normal networked communication — can be mapped to the corresponding rectangular coordinate on the indoor plan. The subsequent division into sensing areas can only be performed on the spatial structure coordinate diagram, because each sensing area is defined by the rectangular coordinates of a region. Virtual sensing areas can also be set according to the requirements of the application scenario.
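The association between the radar's polar coordinates and the plan's rectangular coordinates can be sketched as a standard polar-to-Cartesian conversion offset by the geomagnetic heading. The angle convention below (heading measured counter-clockwise from the plan's +x axis) is an assumption for illustration, not something the patent specifies:

```python
import math

def polar_to_plan(distance, bearing_deg, origin=(0.0, 0.0), heading_deg=0.0):
    """Map a radar detection (distance, bearing relative to the robot's facing
    direction) to rectangular coordinates on the plan view.

    origin:      the robot's position on the plan (the reference detection
                 point is (0, 0)).
    heading_deg: the robot's facing direction from the geomagnetic sensor,
                 measured counter-clockwise from the plan's +x axis
                 (an assumed convention)."""
    theta = math.radians(heading_deg + bearing_deg)
    x = origin[0] + distance * math.cos(theta)
    y = origin[1] + distance * math.sin(theta)
    return (round(x, 6), round(y, 6))
```

With this mapping, the same function serves both the reference detection point and any later robot position, since the origin and heading are parameters.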
And S130, dividing sensing areas based on the spatial structure coordinate diagram, and configuring trigger conditions for scene events based on the sensing areas.
After the spatial structure coordinate diagram is determined — recording the spatial structure layout diagram, the heading detected by the geomagnetic sensor, the detection direction, angle, and range of the crown-capturing service robot and of networked radar sensors in other rooms, and the logical relation to the position of the reference detection point — the indoor space is divided into sensing areas on that diagram, yielding coordinate regions such as a bed area, window area, desk area, door area, television area, sofa area, and projection-screen (wall) area. Different scene events are then set on the different coordinate regions, each with at least one trigger condition; when the user's behavior or state in the room matches a trigger condition, the user is considered to be in that scene event. For example: first, position (coordinate) information may be the only factor in a trigger condition, e.g., only the outer bedside of a bed whose other side is against the wall can trigger a scene, or the scene can be triggered anywhere around a free-standing dining table; second, a time factor can be combined into the trigger logic of the scene event: when a scene is triggered, the user's positioning coordinate a user-defined interval earlier (e.g., 1 second before the trigger) is traced back — if that coordinate lay outside the scene area, the user is judged to have entered the coordinate region; if it did not lie outside, the user is judged to have left the region; and if the user's coordinate has not changed, the user is judged to be continuously present, or the event is treated as a false alarm and discarded.
It will be appreciated that the trigger conditions in this embodiment may also include logical requirements — also called logical conditions — on a series of consecutive user actions. This design also effectively solves the poor user experience of timer-based control built on the basic sensing capability of a traditional sensor or radar sensor: with a simple timer, if the user stays in the toilet without moving for longer than a user-defined time (e.g., 1 minute), the light is automatically turned off even though the user is still present.
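The enter/leave/presence judgment described above — tracing the positioning coordinate back one lookback interval before the trigger — can be sketched as follows. Rectangular regions and the function names are illustrative assumptions:

```python
def in_region(point, region):
    """region: ((xmin, ymin), (xmax, ymax)), a rectangle on the
    spatial structure coordinate diagram."""
    (xmin, ymin), (xmax, ymax) = region
    return xmin <= point[0] <= xmax and ymin <= point[1] <= ymax

def classify_trigger(prev_point, curr_point, region):
    """prev_point: the user's coordinate one user-defined lookback interval
    (e.g. 1 s) before the trigger; curr_point: the coordinate at the trigger.
    Outside-then-inside = enter; inside-then-outside = leave; an unchanged
    coordinate = continuous presence (or a false alarm to be discarded)."""
    if prev_point == curr_point:
        return "present"
    prev_in = in_region(prev_point, region)
    curr_in = in_region(curr_point, region)
    if not prev_in and curr_in:
        return "enter"
    if prev_in and not curr_in:
        return "leave"
    return "none"
```

A dwell-based rule (the toilet-light example) would then keep the light on as long as "present" keeps being reported, rather than switching off on a blind timer.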
And S140, sensing user information based on the reference detection point, so as to determine the current scene based on the user information and the trigger conditions.
The user information includes the motion and pose, at different moments, of the indoor people and objects that require attention. Scene judgment is performed mainly from the positioning coordinates of people or objects moving indoors (the radar's normal sensing mode, as distinguished from the static-object identification mode used during system configuration) together with the specific trigger conditions. For example: if the user is sensed falling, an emergency pre-scene is output; if the user, standing in the projection-screen or television area, swipes upward or downward beside the screen or television, the system interprets it as a back or page-down command on the displayed content; if the crown-capturing service robot is placed between the projection screen (or television) and the user, with its detection direction facing the user and its projection direction opposite to the detection direction, then when the user moves forward, backward, left, or right, stands, or squats within the virtual screen or television area in the detection direction, the system makes the virtual person or object in the displayed content output the same movement synchronously, realizing body-sensing interaction with the displayed content. That is, sensing user information based on the reference detection point to determine the current scene based on the user information and the trigger conditions includes: sensing the user's positioning coordinates, and determining an action record of the user from the positioning coordinates and the corresponding times; and determining the current scene by matching the action record against the trigger conditions.
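The final step — matching the time-stamped action record against per-scene trigger conditions — can be sketched as a lookup over predicates. The scene names and predicate shapes here are hypothetical stand-ins for the trigger conditions configured in S130:

```python
def determine_scene(action_record, scenes):
    """action_record: list of (timestamp, (x, y)) positioning samples,
    newest last, built from the sensed coordinates and times.
    scenes: {scene_name: predicate(action_record) -> bool}, where each
    predicate encodes that scene's trigger condition.
    Returns the first matching scene name, or None if no trigger fires."""
    for name, trigger in scenes.items():
        if trigger(action_record):
            return name
    return None
```

In a fuller system the predicates would combine the positional and temporal logic shown earlier (region containment, lookback, consecutive-action requirements); a priority ordering over `scenes` would resolve conflicts when several triggers fire at once.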
S150, generating an execution instruction according to preset execution logic based on the current scene, the sensing areas, and the user information; and, based on the execution instruction, acquiring information through the input module and/or outputting functions and services through the output module, and/or sending the execution instruction through the communication module to connected networked devices for input and/or output.
This step mainly judges which services the user needs according to the scene the user is in and the user information. The current scene includes the pre-scene event, the scene-event trigger list, and priorities; the user information includes such information as the number of coordinates, features, vital signs, time, networked devices, device states, and the logical relations among multi-room scenes; and the execution instruction includes discarding the event (for example, when the detected motion comes from a non-user such as an animal), control instructions, display content, execution programs, interactive voice information, reminder voice information, and so on. Because the crown-capturing service robot is equipped with both input and output modules, it can deliver a wide range of scene services — including voice, screen display, projection, and lighting — through its own input and/or output modules, avoiding the situation where a robot without output modules must be paired with a matching system to output the scene; this simplifies system deployment and allows the robot to serve users better.
For example: on a weekend morning, sensing that a child has gotten up, a foreign-language greeting is output; if the child is sensed playing in the room in the morning and triggers a bedside event, the foreign-language sentence "It is not sleeping time now" is output, creating a scene that initiates a foreign-language conversation with the child; if the child is sensed playing in the room in the afternoon without triggering any event, foreign-language music, poems, stories, or videos the child usually likes are output, so that the child plays immersed in a foreign-language environment and a feel for the language is cultivated imperceptibly. Or: if a solitary elderly person has not gotten up by 9 a.m., a wake-up call or wake-up music is output repeatedly at user-defined intervals; once the elderly person is sensed to be moving in bed, a voice prompt is output asking them to stretch a hand toward the crown-capturing service robot to measure body temperature; and if a high fever is detected, the system pushes the fever information to the service platform, the community health center, a relative's mobile phone, or a government service center.
This embodiment provides a command-free active intelligence implementation method: first, a reference detection point for placing the crown-capturing service robot is determined and the user is guided to arrange the robot at that point; indoor space sensing is then performed from the reference detection point and a spatial structure coordinate diagram is configured according to the sensing result; sensing areas are divided on the basis of the coordinate diagram, and trigger conditions for scene events are configured on those areas; user information is sensed from the reference detection point and the current scene is determined from the user information and the trigger conditions; finally, an execution instruction is generated according to preset execution logic based on the current scene and the user information.
Optionally, in some embodiments, fig. 15 is a flowchart of a command-free active intelligence implementation method according to an embodiment of the present invention. As shown in fig. 15, the method includes:
S210, determining a reference detection point for placing the crown-capturing service robot, and guiding the user to arrange the robot at the reference detection point;
S220, performing indoor space sensing based on the reference detection point, and configuring a spatial structure coordinate diagram according to the sensing result.
And S230, dividing sensing areas based on the spatial structure coordinate diagram, and configuring trigger conditions for scene events based on the sensing areas.
And S240, sensing user information based on the reference detection point, so as to determine the current scene based on the user information and the trigger conditions.
And S250, judging whether the current scene matches the reference detection point.
And S260, if they do not match, guiding the user to adjust the pose of the crown-capturing service robot, and detecting the user's pose-adjustment operation on the robot.
And S270, adjusting the spatial structure coordinate diagram and the sensing areas according to the pose-adjustment operation.
S280, generating an execution instruction according to preset execution logic based on the current scene, the sensing areas, and the user information; and, based on the execution instruction, acquiring information through the input module and/or outputting functions and services through the output module, and/or sending the execution instruction to connected networked devices.
This embodiment differs from the foregoing one in steps S250-S280. In actual use, because the sensor's sensing range is limited, situations arise in which the pose of the crown-capturing service robot must be adjusted: the user may turn the robot during use to suit an actual need, or, when the robot's detection angle is below 180 degrees, multiple detection orientations may be needed to cover the scene. The system then synchronously adjusts the robot's detection direction and range in the corresponding spatial structure coordinate diagram according to the direction through which the user has rotated the robot.
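The synchronous adjustment of the detection sector after the user rotates the robot can be sketched as a simple angular shift in the coordinate diagram. The sector representation (start/end bearings in degrees, normalized to [0, 360)) is an illustrative assumption:

```python
def rotate_sector(sector, rotation_deg):
    """sector: (start_deg, end_deg) of the robot's detection angle in the
    spatial structure coordinate diagram. When the user rotates the robot
    by rotation_deg, the system shifts the sector by the same amount,
    normalized to [0, 360)."""
    start, end = sector
    return ((start + rotation_deg) % 360.0, (end + rotation_deg) % 360.0)

def covers(sector, bearing_deg):
    """True if bearing_deg lies inside the (possibly wrap-around) sector,
    i.e. the sensing areas in that direction remain valid after the shift."""
    start, end = sector
    b = bearing_deg % 360.0
    if start <= end:
        return start <= b <= end
    return b >= start or b <= end     # sector wraps past 360 degrees
```

A robot with a 90-degree detection angle, for instance, would need the sector re-shifted like this each time the user turns it to face a different part of the room.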
Optionally, in some embodiments, a countermeasure is further provided for the case in which, after information has been acquired through the input module based on the execution instruction and/or the scene service has been output through the output module and/or the execution instruction has been sent to another communicatively connected device, the user gives no feedback. Steps S290-S200 (not shown) are added after step S280:
S290, judging whether scene feedback from the user based on the execution instruction is sensed;
And S200, if not, generating an abnormality instruction based on the current scene, and sending the abnormality instruction to the abnormality-handling device.
For example: the user is a chronic-disease patient and mealtime is 12 noon; when the user is sensed, a voice reminder to take medicine is actively output, together with such information as the medicine name and dosage determined by the crown-capturing service robot. If the user gives no interactive response within a user-defined time, the robot records one medication anomaly for the user.
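The no-feedback branch of steps S290-S200 can be sketched as a timeout check on the response log. The record structure and the 5-minute default are illustrative assumptions, not values from the patent:

```python
def check_feedback(reminder_time, responses, timeout=300):
    """reminder_time: timestamp (seconds) at which the reminder was output.
    responses: timestamps of the user's interactive responses.
    If no response arrives within `timeout` seconds of the reminder,
    an anomaly record is produced for forwarding to the
    abnormality-handling device; otherwise None (no anomaly)."""
    for t in responses:
        if reminder_time <= t <= reminder_time + timeout:
            return None                      # scene feedback sensed
    return {"event": "medication_reminder",
            "anomaly": "no_feedback",
            "time": reminder_time}
```

In the chronic-patient example, each returned record would count as one medication anomaly and could be pushed to the service platform or a relative's phone.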
Optionally, in some embodiments, to further optimize the service experience, a self-learning mechanism is also provided that autonomously records the user's habits and provides targeted services. Specifically, after step S200, step S201 (not shown) is further included:
S201, recording the number of occurrences of the current scene and the number of occurrences with scene feedback, determining the user's stage habit or level of knowledge mastery from the occurrence count and the feedback count, and generating a benign guidance scheme according to that stage habit or mastery level.
Specifically, when, within a user-defined period, the same scene has been executed more than a preset user-defined number of times while the number of times the user gave no feedback on the scene exceeds a preset negative count, the user is judged to have formed a stage habit or reached a level of knowledge mastery. If the stage habit is benign, the benign guidance scheme actively reminds the user to execute the scene — or to skip it — according to a preset time threshold or the interval at which the same scene recurs; if the stage habit is not benign, the scheme actively reminds the user that the bad living habit needs correcting; and when the user does perform a benign habit scene, the system actively encourages or affirms the behavior by voice. For example: if within one week the user goes to sleep at 1 a.m. on three nights, the system automatically records a bad habit for that period. Or: when, on sensing a user trigger event, the crown-capturing service robot has actively initiated foreign-language dialogue with the user more than a user-defined number of times (e.g., 5) without ever receiving feedback, the system judges that the user has not mastered that interactive foreign-language sentence and automatically adjusts the output — repeating the sentence, outputting an explanatory sentence, or asking in the user's native language.
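The two-counter habit judgment above can be sketched as a simple threshold rule. The threshold values and function name are hypothetical defaults for illustration:

```python
def assess_habit(occurrences, no_feedback,
                 occur_threshold=5, negative_threshold=5):
    """Within a user-defined window: if the same scene occurred more than
    occur_threshold times AND the user gave no feedback more than
    negative_threshold times, a stage habit (or knowledge-mastery level)
    is judged to have formed, which then drives the benign-guidance scheme
    (reminders for benign habits, correction prompts for non-benign ones,
    or adjusted foreign-language output)."""
    if occurrences > occur_threshold and no_feedback > negative_threshold:
        return "habit_formed"
    return "keep_observing"
```

The foreign-language example maps onto this directly: more than 5 unanswered dialogue initiations for the same sentence would flip the state to "habit_formed" and trigger the adjusted output.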
Embodiment Two
Fig. 16 is a schematic structural diagram of a command-free active intelligence implementation device according to a second embodiment of the present invention. As shown in fig. 16, the command-free active intelligence implementation device 800 of this embodiment includes:
the placement guidance module 810, configured to determine a reference detection point for placing the crown-capturing service robot and guide the user to arrange the robot at the reference detection point;
the spatial sensing module 820, configured to perform indoor space sensing based on the reference detection point and configure a spatial structure coordinate diagram according to the sensing result;
the scene configuration module 830, configured to divide sensing areas based on the spatial structure coordinate diagram and configure trigger conditions for scene events based on the sensing areas;
the user sensing module 840, configured to sense user information based on the reference detection point so as to determine the current scene based on the user information and the trigger conditions;
and the execution module 850, configured to generate an execution instruction according to preset execution logic based on the current scene, the sensing areas, and the user information, and, based on the execution instruction, acquire information through the input module and/or output scene services through the output module and/or send the execution instruction to other communicatively connected devices.
Optionally, in some embodiments, guiding the user to arrange the crown-capturing service robot at the reference detection point includes: guiding the user, by voice, screen display, projection, or light, to place the robot at the reference detection point so that the back of the robot is parallel to the wall, and sensing the distance between the robot's back and the wall.
Optionally, in some embodiments, performing indoor space sensing based on the reference detection point to configure the spatial structure layout diagram according to the sensing result includes: determining the spatial structure layout diagram from a preset structure layout diagram and/or an actual structure layout diagram imported by the user, according to the user's adjustment operations and confirmation instruction; if no user adjustment operation or confirmation instruction is detected, acquiring indoor space images, identifying indoor objects and spatial structure features from the images, and generating the spatial structure layout diagram from those features and preset feature data; or, if no user adjustment operation or confirmation instruction is detected, acquiring indoor radar detection data, identifying indoor objects and spatial structure features from that data, and generating the spatial structure layout diagram from those features and the preset feature data; and establishing a spatial structure coordinate diagram with the reference detection point as the coordinate origin, based on the spatial structure layout diagram and the sensing direction of the crown-capturing service robot.
Optionally, in some embodiments, the method further includes: recording the number of occurrences of the current scene and the number of occurrences with scene feedback, determining the user's stage habit or level of knowledge mastery from the two counts, and generating a benign guidance scheme according to that stage habit or mastery level.
Optionally, in some embodiments, the method further includes: judging whether the current scene matches the reference detection point; if not, guiding the user to adjust the pose of the crown-capturing service robot and detecting the user's pose-adjustment operation on the robot; and adjusting the spatial structure coordinate diagram and the sensing areas according to the pose-adjustment operation.
Optionally, in some embodiments, sensing user information based on the reference detection point to determine the current scene based on the user information and the trigger conditions includes: sensing the user's positioning coordinates, and determining an action record of the user from the positioning coordinates and the corresponding times; and determining the current scene by matching the action record against the trigger conditions.
Optionally, in some embodiments, generating an execution instruction according to preset execution logic based on the current scene, the sensing areas, and the user information, and, based on the execution instruction, acquiring information through the input module and/or outputting scene services through the output module and/or sending the execution instruction to other communicatively connected devices, further includes: judging whether scene feedback from the user based on the execution instruction is sensed; and, if not, generating an abnormality instruction based on the current scene and sending it to the abnormality-handling device.
The command-free active intelligence implementation device provided by this embodiment of the invention can execute the command-free active intelligence implementation method provided by any embodiment of the invention, and possesses the functional modules and beneficial effects corresponding to the executed method.
It should be noted that the foregoing is merely a description of preferred embodiments of the present invention and of the technical principles employed. It will be understood by those skilled in the art that the invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions may be made without departing from the scope of the invention. Therefore, although the invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its concept; the scope of the invention is determined by the scope of the appended claims.
It is to be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit the embodiments of the present invention. Numerous obvious variations, rearrangements and substitutions will now occur to those skilled in the art without departing from the scope of the invention. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A crown-capturing service robot, whose overall form depicts a human athlete in a crown-capturing (championship-winning) motion scene, comprising:
a head (100), said head (100) having a light and/or a micro-projection module built therein;
a body (200), the body (200) having built-in radar sensors to sense the space, human bodies, and behavior;
a connecting assembly (300) by which the head (100) is detachably connected to the body (200);
the camera rotation module (400), rotatably connected to the body (200) for video recognition or monitoring of the environment, people, or objects;
a leg assembly (600) disposed under the body (200);
the rotating arm lamp assembly (500), rotatably connected below the body (200); a sensor is built into the leg assembly (600) to sense the environment or user gestures or vital signs, and a radar sensor is built into the body (200) to sense user behavior; the rotating arm lamp assembly (500) provides intelligent illumination according to the environment or the user's behavior, gestures, or vital signs;
the functional base (700) is arranged below the leg assembly (600), a sliding plate (620) is arranged on the functional base (700), and the sliding plate (620) corresponds to the leg assembly (600).
2. The crown-capturing service robot according to claim 1, wherein the connection assembly (300) includes:
a magnetic base arranged on the head (100);
a magnet block (340) arranged on the top of the body (200) and attracted to the magnetic base; and/or
The connection assembly (300) further comprises:
a cover plate (310) arranged on the top of the body (200);
a rotary base (320) provided on the cap plate (310) and connected with the head (100) such that the head (100) can rotate in a first direction; and
a rotation shaft (330) provided on the rotation base (320) and connected with the head (100) such that the head (100) can rotate in a second direction;
wherein, a projection module used for video interaction with a user is arranged on the head part (100).
3. The crown-capturing service robot of claim 1, wherein the camera rotation module (400) comprises:
a mounting base (410) provided on the body (200);
a rotating part (420) rotatably connected to the mounting base (410);
and a camera (430) provided on the rotating portion (420).
4. The crown-capturing service robot of claim 1, wherein the rotating arm lamp assembly (500) comprises:
an arm rod (510) rotatably connected to the body (200);
an upper arc lamp (520), arranged on the arm rod (510), for emergency, night, color-changing scene, and ambience illumination;
a lower bevel lamp (530), arranged below the arm rod (510), for blue-light-free, flicker-free healthy illumination;
a light-shielding edge (560), arranged in front of the arm rod (510), for shielding light;
and an adsorption part (540), arranged on the inner vertical face of the arm rod (510), for fixing the arm rod (510) against a magnet built into the body (200).
5. The crown-capturing service robot according to claim 4, wherein the body (200) is provided with a receiving notch, an arm-rod mounting bin, and an arm-rod mounting seat; the arm-rod mounting seat is rotatably connected with the arm rod (510); a connecting magnet is arranged in the arm-rod mounting bin to attract and fix the adsorption part (540) after the arm rod (510) is rotated upward; and the receiving notch is used to hold the arm rod (510) when it is folded downward.
6. The crown-capturing service robot according to any one of claims 1 to 5, wherein the connection assembly (300), the camera rotation module (400), the rotating arm lamp assembly (500), the leg assembly (600), and the functional base (700) are each provided with a state detection module connected to the processing unit, the state detection module being used to detect the operating state and module state of the corresponding structure.
7. The crown-capturing service robot of any one of claims 1-5, wherein the leg assembly (600) is in a squatting form, the leg assembly (600) comprising:
supporting legs (610), arranged on the lower side of the body (200) and connected with the slide plate of the functional base (700); heat-dissipation and ventilation holes are formed in the supporting legs (610), and sensors are built into the supporting legs (610) to detect the air environment, body temperature, human bodies, and/or gestures;
a heat-insulation protection module being provided both on the side of the body (200) near the legs and on the side of the functional base (700) near the legs.
8. The crown-capturing service robot according to any one of claims 1 to 5, wherein the functional base (700) includes:
the functional table (701), the functional table (701) is arranged obliquely, and a plurality of functional interfaces are arranged on the functional table (701);
the base cover (720) is arranged on the bottom surface of the functional table (701), and a plurality of functional communication ports are formed in the base cover (720);
and the display screen (710) is arranged on the top surface of the functional table (701) and is used for displaying dynamic pictures.
9. The crown-capturing service robot of claim 8, further comprising:
a mounting bracket (730) arranged on the base cover (720), the mounting bracket (730) including, but not limited to, a triangular placement-reinforcement bracket (732), a clamp-type fixed mounting bracket (731), and a wall-mounted fixed mounting bracket; and
an object-supporting bracket (733), attached by a buckle in front of the functional table (701) and forming with the functional table (701) a space for supporting objects, the object-supporting bracket (733) being shaped as the head of the slide plate (620).
10. The crown-capturing service robot of any one of claims 1-5, further comprising:
the device comprises an AI core processor, a storage and expansion storage unit, an input unit, an output unit and a communication unit which are all in communication connection with the AI core processor, and a power supply unit which is respectively and electrically connected with the AI core processor, the storage and expansion storage unit, the input unit, the output unit and the communication unit; wherein the input unit includes but is not limited to: the radar sensor and the built-in sensor.
CN202221373395.6U 2022-06-02 2022-06-02 Capture service robot Active CN217530864U (en)

Publications (1)

Publication Number Publication Date
CN217530864U true CN217530864U (en) 2022-10-04

Family

ID=83443943



Legal Events

Date Code Title Description
GR01 Patent grant