CN114986532A - Crown capturing service robot and non-command type active intelligence implementation method - Google Patents
Info
- Publication number
- CN114986532A (application CN202210625935.3A)
- Authority
- CN
- China
- Prior art keywords
- user
- robot
- service robot
- crown
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/008—Manipulators for service tasks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B20/00—Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
- Y02B20/40—Control techniques providing energy savings, e.g. smart controller or presence detection
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention belongs to the technical field of intelligent services and discloses a crown-capturing service robot and a command-free active intelligence implementation method. The robot specifically comprises: a head; a body, the head being detachably connected to the body through a connecting assembly; a camera rotation module rotatably connected to the body; a rotating arm lamp assembly with built-in sensors that sense the environment and the user's gestures or physical signs, a radar sensor being arranged in the body; a leg assembly; and a functional base. The robot is equipped with multiple sensors and an intelligent system and can automatically identify the environment and the user's state, thereby actively providing corresponding functions and services to the user. A single product thus satisfies multiple user demands and provides command-free active intelligent service, solving a series of problems of traditional hardware such as passive intelligence, difficult system implementation, difficult retrofitting, difficult standardization and difficult deployment, and making the user's life, work, study, entertainment and home easier, safer and more intelligent.
Description
Technical Field
The invention relates to the technical field of intelligent services, in particular to a crown-capturing service robot and a command-free active intelligence implementation method.
Background
Data from the seventh national census show that 264 million people nationwide are aged 60 or above, of whom 190 million are aged 65 or above; the population aged 60 or above is projected to exceed 300 million by 2025 and to peak at 487 million in 2053. Elderly care in China is commonly described by the "9073" pattern: 90% home-based care, 7% community-supported home care and 3% institutional care. Home-based health and elderly care has therefore become a rigid demand for the elderly and even for society as a whole. At present, the relatively independent medical and nursing service systems struggle to meet the multi-level, diversified health and care needs of the elderly, so an integrated medical-nursing model is urgently needed, perfecting a care service system that combines home-based, community-based and institutional care with medical services.
Smart home health/elderly care uses new-generation information technologies such as the Internet of Things, cloud computing, big data and intelligent hardware to connect and optimally allocate personnel, families, communities, institutions and elderly-care resources, upgrading elderly-care services and improving the quality and efficiency of home health/care services. The needs of home health/care users are comprehensive, including: home security, physical-sign sensing, health records and supervision, fall and emergency alarms, intelligent companionship, medical care, daily care, family communication, entertainment and leisure, home intelligence, communication coverage, privacy protection, childcare, children's learning, housekeeping services, community services, and so on. Many smart products related to home health/care are already on the market, for example: health robots, companion robots, wearable devices, physical-sign sensors, fall sensors, emergency buttons, educational robots, learning desk lamps, environment sensors, whole-house intelligence, and the like. However, each of these products has a single function, and it is difficult for any single product to satisfy a user's multiple demands; large-scale system integration is required, yet elderly users have declining memory and limited education, so the more devices a system integrates, the harder it is to deploy and operate. Moreover, current systems are user-managed via mobile phones and are rarely connected to communities, institutions and elderly-care resources, so present-day smart home health/elderly care based on traditional equipment largely exists in form only.
The intelligent building intercom system of the smart community is the only wired intelligent system connecting property management and households. Because it addresses few user needs and suffers from system complexity, high failure rates, high operating costs, difficult installation, maintenance and retrofitting, low usage frequency, weak stickiness and poor experience, what should be the core household-side carrier of community digital operation has instead become its burden. Smart communities have been developed in China for decades, the integrated "community + service + elderly care + medical care + insurance" community service is pursued year after year, and many smart-community SaaS platforms have been built, yet the new revenue generated (parking, property and advertising fees being traditional income) remains very small. The reason is simple: smart-community digital operation is the operation of public areas, which is only weakly associated with users' households. Users will not pay for any equipment, system or service in public areas, so operation has no opportunity to generate revenue, and the investment of developers, property managers and operators cannot be returned.
Therefore, to achieve the virtuous-cycle development of the smart-community industry ecology, a core household-side carrier of digital operation is required: one that solves rigid user demands and earns users' trust and reliance. While operating and serving household users well, it should fully tap user value, guide users toward repeat consumption within the ecology and provide comprehensive service guarantees for it, so that users actively enjoy the services and consume again. Only then can the whole community industry ecology develop in a virtuous cycle and the "community + service + elderly care + medical care + insurance" integrated community service be implemented.
Active intelligence helps a single product realize more functions and satisfy more user demands, with high usage frequency and strong stickiness. Although many manufacturers now recognize that active intelligence can provide a better user experience, there is no clear solution for satisfying multiple user demands with active intelligence or for how fully it can be realized. Active intelligence can be achieved either by system integration or by single-product functionality. Existing intelligent devices realize active intelligence only through system integration, which yields basically simple-linkage active intelligence: it satisfies few user demands and gives a poor user experience. A single product that solves a user's multiple demands on the basis of active intelligence therefore becomes necessary. For a single product aiming to solve multiple user demands, product innovation must weigh many factors: product type, form, structure, connotation, appearance, mounting height, communication, cost, attributes, functional requirements, independence, system implementation, privacy, safety, health, scenarios, experience, suitability for the elderly, practicality, deployment, after-sales service, algorithms and computing power, supply chain, operation, business model, industry ecology and more. This cannot be achieved by simple stacked integration; it must be a comprehensive, disruptive innovation that solves new and difficult problems.
Disclosure of Invention
The invention aims to provide a crown-capturing service robot and a command-free active intelligence implementation method, in which structural product innovation solves the problem of multiple user demands while an integrated command-free active intelligence system actively provides related functions and services, proactively addressing more of the user's pain points and achieving high frequency, strong stickiness, good experience, installation-free deployment, no kit matching, standardization, cost reduction and other advantages.
In order to achieve the purpose, the invention adopts the following technical scheme:
a crown-capturing service robot in the form of a human athlete moving crown-capturing scene, comprising: a head, in which a lighting lamp and/or a micro-projection module is built; a body having a radar sensor built therein to sense space and human body and behavior; the head is detachably connected with the body through the connecting component; the camera shooting rotating module is rotatably connected to the body to carry out video identification or monitoring on the environment, people or objects; the rotating arm lamp assembly is rotatably connected to the body, a sensor arranged in the leg assembly senses an environment or user gesture or physical sign sensor, a radar sensor arranged in the body senses a user behavior, and the rotating arm lamp assembly is used for illuminating according to the environment or the user gesture, behavior or physical sign; a leg assembly disposed under the torso; the functional base is arranged below the leg component, a sliding plate is arranged on the functional base, and the sliding plate corresponds to the leg component.
Optionally, the connecting assembly comprises: a magnetic base arranged on the magnetically attached head; and a magnetic block arranged at the top of the body and attracted to the magnetic base; and/or the connecting assembly further comprises: a cover plate arranged at the top of the body; a rotating seat arranged on the cover plate and connected with the head so that the head can rotate in a first direction; and a rotating shaft arranged on the rotating seat and connected with the head so that the head can rotate in a second direction; wherein the head is provided with a projection module for video interaction with the user.
Optionally, the camera rotation module comprises: a mounting seat arranged on the body; a rotating portion rotatably connected to the mounting seat; and a camera arranged on the rotating portion.
Optionally, the rotating arm lamp assembly comprises: an arm rod rotatably connected with the body; an upper arc-surface lamp arranged on the arm rod for emergency, night, color-changing scene and atmosphere lighting; a lower inclined lamp arranged under the arm rod for healthy lighting free of blue-light hazard and flicker; a shading edge arranged in front of the arm rod for blocking glare; and an adsorption portion arranged on the inner vertical face of the arm rod for fixing the arm rod in cooperation with a magnet built into the body.
Optionally, the body is provided with a storage notch, an arm-rod mounting bin and an arm-rod mounting seat; the arm-rod mounting seat is rotatably connected with the arm rod, and a connecting magnet is arranged in the arm-rod mounting bin to attract and fix the adsorption portion after the arm rod is rotated upward; the storage notch accommodates the arm rod when it is folded downward.
Optionally, the connecting assembly, the camera rotation module, the rotating arm lamp assembly, the leg assembly and the functional base are each provided with a state detection module for detecting the running state and module state of the corresponding structure.
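The per-structure state detection described above can be sketched as a small registry in which each structural module reports its own status. The following Python sketch is illustrative only; the module names, states and probe functions are assumptions for the example, not details taken from the patent:

```python
from dataclasses import dataclass
from enum import Enum

class ModuleState(Enum):
    OK = "ok"
    FOLDED = "folded"              # e.g. the arm rod stowed in its storage notch
    DISCONNECTED = "disconnected"  # e.g. the head detached from the body
    FAULT = "fault"

@dataclass
class StateReport:
    module: str
    state: ModuleState

class StateDetector:
    """Aggregates per-module state probes into one scan result."""

    def __init__(self):
        # module name -> zero-argument callable returning a ModuleState
        self._probes = {}

    def register(self, name, probe):
        self._probes[name] = probe

    def scan(self):
        return [StateReport(name, probe()) for name, probe in self._probes.items()]

# Example: two modules report their current mechanical state.
detector = StateDetector()
detector.register("rotating_arm_lamp", lambda: ModuleState.FOLDED)
detector.register("camera_rotation", lambda: ModuleState.OK)
reports = detector.scan()
```

In a real device the probe callables would read magnetic or position sensors; here they are stubbed with lambdas to show the aggregation pattern.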
Optionally, the leg assembly is squat-shaped and comprises: support legs arranged under the body and connected with the slide plate of the functional base, the support legs being provided with heat-dissipation holes and vent holes and housing sensors for detecting the air environment, body temperature, human presence and/or gestures; wherein heat-insulation modules are provided both on the side of the body adjacent to the legs and on the side of the functional base adjacent to the legs.
Optionally, the functional base comprises: a function table arranged at an incline and provided with a plurality of functional interfaces; a base cover arranged on the bottom surface of the function table and provided with a plurality of functional communication ports; and a display screen arranged on the top surface of the function table for displaying dynamic pictures.
Optionally, the crown-capturing service robot further comprises: a mounting bracket arranged on the base cover, including but not limited to a triangular placement-reinforcing bracket, a clamp-type fixed mounting bracket and a wall-mounted fixed mounting bracket; and an object-supporting bracket attached in front of the function table by a buckle, the object-supporting bracket and the function table together forming a space for supporting objects, the object-supporting bracket being shaped like the tip of the slide plate.
The application also provides a command-free active intelligence implementation method applied to any one of the crown-capturing service robots above, the method comprising the following steps: determining a reference detection point for placing the crown-capturing service robot and guiding the user to arrange the robot at the reference detection point; performing indoor space sensing based on the reference detection point to configure a spatial-structure coordinate map from the sensing result; dividing sensing regions based on the spatial-structure coordinate map and configuring trigger conditions of scene events based on the sensing regions; sensing user information based on the reference detection point to determine the current scene from the user information and the trigger conditions; and generating an execution instruction according to preset execution logic based on the current scene, the sensing regions and the user information, then acquiring information and/or outputting functions and services through an input module and/or an output module based on the execution instruction, and/or sending the execution instruction to a connected networked device for input and/or output through a communication module.
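The steps of the method above can be sketched as a minimal pipeline. The Python sketch below is illustrative only: the region names, event strings, coordinate shapes and the action table are invented for the example, since the patent does not specify them:

```python
from dataclasses import dataclass, field

@dataclass
class Trigger:
    region: str     # a sensing region carved out of the spatial-structure map
    condition: str  # e.g. "presence", "fall", "gesture:wave"
    scene: str      # the scene event this condition activates

@dataclass
class ActiveIntelligencePipeline:
    space_map: dict = field(default_factory=dict)  # region -> coordinate bounds
    triggers: list = field(default_factory=list)

    def sense_space(self, radar_scan):
        """Step 2: build a spatial-structure map from the radar sensing result."""
        self.space_map.update(radar_scan)

    def configure_trigger(self, region, condition, scene):
        """Step 3: divide sensing regions and attach scene-event trigger conditions."""
        self.triggers.append(Trigger(region, condition, scene))

    def detect_scene(self, user_info):
        """Step 4: match sensed user information against the configured triggers."""
        for t in self.triggers:
            if user_info.get("region") == t.region and user_info.get("event") == t.condition:
                return t.scene
        return None

    def execute(self, scene):
        """Step 5: map the current scene to an execution instruction."""
        actions = {"night_walk": "turn_on_night_light",
                   "fall_alarm": "raise_emergency_alarm"}
        return actions.get(scene, "no_op")

# Example: the robot is placed at its reference point (step 1), senses the room,
# and configures a night-light trigger for presence near the bed.
pipeline = ActiveIntelligencePipeline()
pipeline.sense_space({"bedside": ((0, 0), (2, 3))})
pipeline.configure_trigger("bedside", "presence", "night_walk")
pipeline.configure_trigger("bedside", "fall", "fall_alarm")
scene = pipeline.detect_scene({"region": "bedside", "event": "presence"})
instruction = pipeline.execute(scene)
```

The key design point is that everything downstream of step 1 is keyed to the reference detection point: the space map, the regions and the triggers are all expressed in its coordinate frame, which is why the method begins by guiding the user to place the robot there.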
The invention has the beneficial effects that:
The crown-capturing service robot takes the crown-capturing scene of a humanoid athlete's big-air movement as its product form, giving the product a motivational connotation. Through structural innovation, the robot integrates multiple intelligent modules and sensors, enabling it to sense the environment, physical signs, states, and the user's behavior, habits and demands; with its built-in multi-scene output, a single robot serves a whole household, satisfying user pain points such as motivation, (sign + environment + behavior + habit + state) sensing, home security, fall and emergency alarms, visual intercom, video monitoring and OCR (optical character recognition), health records, on-device and remote consultation, health supervision and management, home control, intelligent projection and screen casting, smart speaker, intelligent companionship, entertainment interaction, somatosensory interaction, intelligent lighting, eye-protection health, supervised learning, native-language immersive interactive learning, correction of bad habits, communication coverage, privacy protection and data security, all without installation or kit matching and easy to standardize. The robot also provides command-free active intelligent service, solving the series of problems of traditional hardware and traditional intelligent systems: difficulty in meeting multiple demands, passive intelligence, difficult system implementation, difficult installation and retrofitting, difficult standardization and difficult deployment. With good active-intelligence user experience, high frequency and strong stickiness, it can fully serve as the core household-side carrier of community digital operation. Meanwhile, business-model innovation based on the robot lets users continuously generate repeat consumption while experiencing the services of the smart-community digital operation platform; operators gain stable income and can invest more to perfect their services, so that the community, life, O2O, housekeeping, security, health, entertainment, property and other services circulating around the user mature and improve, instead of being funded and served for free. The smart-community industry ecology can thus develop in a virtuous cycle, making family users' life, work, study, entertainment and home easier, safer and more intelligent.
Drawings
Fig. 1 is a schematic structural diagram of a crown-capturing service robot according to some embodiments of the present invention.
Fig. 2 is a schematic structural diagram of a connecting assembly of the crown-capturing service robot according to some embodiments of the present invention.
Fig. 3 is an exploded schematic view of the head of the crown-capturing service robot in some embodiments of the invention.
Fig. 4 is an exploded schematic view of the body of the crown-capturing service robot according to some embodiments of the present invention.
Fig. 5 is a schematic structural diagram of the cover plate of the crown-capturing service robot in some embodiments of the present application.
Fig. 6 is a schematic structural view of a connecting assembly of the crown-capturing service robot implemented as a magnetic base and a magnetic block according to some embodiments of the present invention.
Fig. 7 is a top view of the rotating arm lamp assembly of the crown-capturing service robot in accordance with some embodiments of the present invention.
Fig. 8 is a schematic bottom view of the rotating arm lamp assembly of the crown-capturing service robot in some embodiments of the invention.
Fig. 9 is a schematic diagram of the base structure of the crown-capturing service robot according to some embodiments of the invention.
Fig. 10 is a schematic structural diagram of a first implementation of the mounting bracket of the crown-capturing service robot in some embodiments of the invention.
Fig. 11 is a schematic structural diagram of a second implementation of the mounting bracket of the crown-capturing service robot in some embodiments of the invention.
Fig. 12 is a schematic structural view of the object-supporting bracket of the crown-capturing service robot in some embodiments of the invention.
Fig. 13 is a flowchart of a command-free active intelligence implementation method according to an embodiment of the present invention.
Fig. 14 is a sub-flowchart of the command-free active intelligence implementation method according to an embodiment of the present invention.
Fig. 15 is a flowchart of a command-free active intelligence implementation method according to an embodiment of the present invention.
Fig. 16 is a schematic structural diagram of a command-free active intelligence implementation apparatus according to a second embodiment of the present invention.
In the figures: 100. head; 110. goggles; 120. helmet light; 130. sealing plate; 200. body; 210. mounting plate; 300. connecting assembly; 310. cover plate; 311. rotating groove; 320. rotating seat; 330. rotating shaft; 331. second rotating shaft; 340. magnetic block; 341. bracket; 400. camera rotation module; 410. mounting seat; 411. rotating chamber; 420. rotating portion; 430. camera; 500. rotating arm lamp assembly; 510. arm rod; 520. arc surface; 530. arm-rod elbow guard; 540. adsorption portion; 560. shading edge; 600. leg assembly; 610. support leg; 620. slide plate; 700. functional base; 701. function table; 710. display screen; 720. base cover; 721. SPDIF optical audio interface; 722. HDMI output interface; 723. USB interface; 725. volume switch; 726. microphone array sensor; 727. emergency button; 728. fixing seat; 730. mounting bracket; 731. clamp-type mounting bracket; 732. placement-type mounting bracket; 733. object-supporting bracket; 740. physical-sign sensor; 800. implementation apparatus; 810. placement guidance module; 820. spatial perception module; 830. scene configuration module; 840. user perception module; 850. execution module.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
In the description of the present invention, unless expressly stated or limited otherwise, the terms "installed," "connected," and "fixed" are to be construed broadly, e.g., as fixedly connected, detachably connected, or integrally formed; mechanically or electrically connected; directly connected, or indirectly connected through an intermediate medium, or communicating between the interiors of two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific circumstances.
In the present invention, unless otherwise expressly stated or limited, "above" or "below" a first feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact with each other via another feature therebetween. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature includes the first feature being directly under and obliquely below the second feature, or simply meaning that the first feature is at a lesser elevation than the second feature.
In the description of the present embodiment, the terms "upper", "lower", "right", etc. are used in an orientation or positional relationship based on that shown in the drawings only for convenience of description and simplicity of operation, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used only for descriptive purposes and are not intended to have a special meaning.
The crown-capturing service robot is shaped after the big-air crown-capturing scene of a humanoid ski athlete, embodying the motivational spirit of a brave leap that encourages users to keep striving. The athlete's exterior can be spray-painted to resemble the outfits of athletes from different countries, bringing the product closer to users' emotions while reducing the need for individualized product shapes, which favors product standardization and cost reduction.
Fig. 1 is a schematic structural diagram of a crown-capturing service robot according to some embodiments of the present invention. Fig. 2 is a schematic structural diagram of a connecting assembly of the crown-capturing service robot according to some embodiments of the present invention. Referring to figs. 1 and 2, the crown-capturing service robot includes: head 100, body 200, connecting assembly 300, camera rotation module 400, rotating arm lamp assembly 500, leg assembly 600 and functional base 700. Head 100 houses a lighting lamp and/or a micro-projection module; body 200 houses a radar sensor to sense space, human bodies and behavior. Head 100 is detachably coupled to body 200 by connecting assembly 300. Camera rotation module 400 is rotatably connected to body 200 to perform video identification or monitoring of the environment, people or objects. Rotating arm lamp assembly 500 is rotatably connected to body 200; sensors in leg assembly 600 sense the environment, user gestures or physical signs, the radar sensor in body 200 senses user behavior, and rotating arm lamp assembly 500 provides lighting according to the environment or the user's gestures, behavior or physical signs, the lighting being divided into downward-output healthy lighting and upward-output emergency, night, scene or atmosphere lighting. Leg assembly 600 is arranged under body 200, and functional base 700 is arranged under leg assembly 600; a slide plate 620 is provided on functional base 700 and corresponds to leg assembly 600.
Fig. 3 is an exploded schematic view of the head of the crown-capturing service robot in some embodiments of the invention. Referring to fig. 3, the head 100 may be cubic, oval or irregular in overall shape, may simulate a skier's face on its outer surface, and may be provided with goggles 110 and, on top, a helmet light 120 to evoke skiing. A sealing plate 130 may be removably attached to the back of the head 100 to facilitate maintenance access. The left and right sides of the head 100 and the sealing plate 130 may be provided with a plurality of heat-dissipation holes to ensure heat-dissipation efficiency; the holes may be strip-shaped, circular or of other shapes, and the sides of the head 100 may carry corresponding decorative patterns, the specific hole configuration and pattern content being designed according to actual user requirements, which the invention does not limit. The detachable connection of the head 100 and the body 200 meets the needs of different users for different grades and scenes and improves interactive experience. The intelligent LED light on the front of the head not only provides normal scene lighting but also serves as auxiliary lighting for the camera's video identification and monitoring, solving spatial lighting problems during multi-scene interaction such as live streaming, video interaction and intrusion events. The head 100 adopts a barrel-shaped base and cover-plate design, reducing multi-edge splicing and improving the product's appearance, sturdiness and standardization.
Fig. 4 is an exploded schematic view of the body of the crown-capturing service robot according to some embodiments of the present invention. Referring to fig. 4, the body 200 may be shaped like a "1" overall, with the connecting assembly 300 at its top for mounting the head 100, mounts on its sides for the rotating arm lamp assembly 500, the leg assembly 600 at its bottom, and a "7"-shaped mounting plate 210 at its rear. The built-in radar sensor perceives space, human bodies and behavior; this is the key sensor arrangement distinguishing the robot from traditional hardware, enabling it to recognize the behavior and demands of users in the space so that it can actively output functions and services for them. This helps a single piece of hardware satisfy more user demands, greatly increases the robot's usage frequency and, combined with the robot's complete functions and content, strengthens user stickiness, laying the foundation for community digital operation and platform operation services to tap user value. The camera rotation module 400 may be disposed on the front side of the body 200 near the end adjoining the head 100. It can rotate up or down to identify or monitor different environments, scenes, people or objects, and the camera module's signal can be physically cut off by a privacy switch to protect user privacy. The body 200 may be formed from a "U-shaped" shell and a back cover that together define its hollow inner cavity.
Built-in sensors may be arranged in the inner cavity of the body 200, including a radar sensor, magnetic sensors, magnets, a wireless communication antenna and the like. The built-in wireless antenna helps extend wireless communication coverage; the magnetic sensors provide state sensing and control-instruction output for the rotating arm lamp assembly 500 when it is folded, as well as state and control-instruction output for the camera rotation module 400. Corresponding slots may be formed in the body 200 as heat-dissipation grooves and holes: the grooves may be placed close to the radar sensor, and the holes may be formed on the two sides and the back of the body 200 to facilitate heat dissipation of the radar sensor and to provide thermal isolation between the body and the leg assembly 600.
The rotating arm lamp assembly 500, the leg assembly 600 and the functional base 700 can each carry functions as required. For example, the rotating arm lamp assembly 500 may house several groups of LED lamps such as a night light, a flicker-free healthy lamp without blue light, a color-changing lamp and an emergency lamp. Sensing data from the radar, magnetic, state, environment, posture, distance, physical sign, video, voice and other sensors built into the body 200, the leg assembly 600, the connecting assembly 300 and the functional base 700, combined with factors such as time, environment and space, let the robot sense a user's behavior, posture, habits, falls, state and presence in the space and turn the lighting on or off automatically. The functional base 700, the leg assembly 600 and other components also carry physical sign sensors; when the robot senses the behavior and/or presence and/or state of a user in the space, it combines factors such as time, environment and health to proactively remind or care for the user to perform a sign check.
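One way the time, environment and health factors might combine into the sign-check reminder decision just described can be sketched as a simple policy function. The 24-hour staleness threshold and the quiet-hour window are illustrative assumptions, not values from the patent:

```python
def should_remind_sign_check(user_present, hours_since_last_check,
                             hour_of_day, quiet_start=22, quiet_end=7):
    """Decide whether to proactively remind the user to take a
    vital-sign measurement. Illustrative policy: remind only when a
    user is sensed in the space, the last check is stale (>= 24 h,
    an assumed threshold), and it is not a quiet hour."""
    if not user_present:
        return False
    if hours_since_last_check < 24:
        return False
    # suppress reminders late at night and early in the morning
    if hour_of_day >= quiet_start or hour_of_day < quiet_end:
        return False
    return True
```

A real implementation would presumably also weigh the user's health profile, but the structure of the decision is the same.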
The crown-capturing service robot takes as its product form the crown-capturing scene of a humanoid athlete on a high platform, giving the product the connotation of motivating its user. Through structural innovation, the robot integrates what would otherwise be several smart devices and sensors into one, so that it can perceive the environment, physical signs and states as well as users' behavior, habits and needs. With its own multi-scene output, this single product addresses many household pain points: motivation; perception of signs, environment, behavior, habits and state; home security; fall and emergency alarms; visual intercom; video monitoring and OCR (Optical Character Recognition); health files; on-device and remote inquiry; health supervision and management; home control; intelligent projection and screen casting; smart speaker; intelligent companionship; entertainment and somatosensory interaction; intelligent lighting; eye-protection health; supervised learning; native-language-style immersive interactive learning; correction of bad learning habits; communication coverage; privacy protection; data security; and so on. It is installation-free, requires no supporting system, and is easy to standardize; it provides command-free, active intelligent service, overcoming both the limited, passive intelligence of traditional hardware and the difficulties of traditional smart systems (hard to implement, install, retrofit, standardize and deploy), with a good active-intelligence user experience, high usage frequency and strong stickiness. Business-model innovation based on the robot lets users keep consuming while experiencing the services of an intelligent community digital operation platform; operators gain stable income and can invest further to perfect their services. The community, daily-life, O2O, housekeeping, security, health, entertainment, property and other services that cyclically serve the user mature and improve, the ecosystem of the smart community industry develops in a virtuous circle, and the life, work, study, entertainment and home of family users become easier, safer and smarter.
The crown-capturing service robot further includes: an AI core processor, a storage and expansion storage unit, an input unit, an output unit, a communication unit and a power supply unit. The storage and expansion storage unit, the input unit, the output unit and the communication unit are all communicatively connected with the AI core processor. The input unit includes, but is not limited to, the radar sensor and the built-in sensors.
Specifically, the input unit includes, but is not limited to, sensors for geomagnetism, a triaxial gyroscope, physical signs, body temperature, air environment, distance, gesture, radar, camera, microphone array, privacy switch, state, anti-tamper switch, magnetic force, touch input and the like. This built-in multi-sensor design lets the robot comprehensively collect the family's physical signs, environment and state data as well as the behavior, habits and needs of people in the indoor space, so that the robot can proactively provide corresponding functions and services to the user, which raises the user's usage frequency of and stickiness to the robot. It also lets the robot judge its own state in time, creating the conditions for the robot to serve users accurately.
The communication unit includes, but is not limited to: a dual-band WiFi module, a Bluetooth module, a dual-LAN module, an RF infrared module, an expandable power-line carrier (PLC) module, an expandable Zigbee or Matter module, an expandable LoRa module, an expandable 4G/5G module and the like. The multi-network communication module design, combined with the robot's installation-free structure, makes it easy for the robot to act as the home communication gateway. The dual LAN interfaces and dual-band WiFi exploit the fact that a desktop scene is close to the home's wired communication interface to provide wireless coverage for the room: this saves the user wireless-coverage cost, avoids the unstable or absent signal of WiFi relays in large residences, and solves the connection problem of multiple communication devices on the desktop. With active intelligence added, the robot can manage the wireless coverage intelligently, helping correct user habits and protect network security. The communication modules are also expandable, so users can customize them to their own needs, saving cost, meeting personalized coverage requirements, and reducing the robot's selling price and production cost.
The output unit includes, but is not limited to: DO signals, a display screen, a miniature projector, multiple LED lamp channels, a loudspeaker, output interfaces and the like. The multi-scene output design lets the robot meet multiple user needs without the output of other supporting systems, realizing pairing-free installation and helping standardize the deployment of system functions and the robot; multi-scene output also raises the user's usage frequency and stickiness and improves the robot's entertainment interactivity and user experience.
The power supply module includes, but is not limited to, a power adapter and charging, energy-storage and overcharge-protection modules, electrically connected respectively to the communication unit, the processing unit and the input unit.
Through the above structure, the robot gains the ability to perceive space and capture user behavior; the input/output units found in all traditional smart hardware become more intelligent and more interactive, realizing true intelligence in a home service robot and making the user's life, work, study, entertainment and home easier, safer, healthier and smarter.
For example: when the robot senses a user entering a room, the environment sensor actively measures indoor environment data and proactively reports the conditions to the user, without the user having to query the robot. As another example: when a user is sensed entering the room while the system is armed, the robot proactively reminds the user to confirm their identity (identity must be confirmed through other means when the camera is off or rotated into the downward monitoring state); if the user fails to confirm within a user-defined time and/or the robot is moved, the system raises an alarm.
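The armed-entry behavior above can be sketched as a small decision function. The 60-second default stands in for the user-defined time, and the exact precedence of "moved" versus "unconfirmed" is an assumption, since the patent only states the two conditions jointly trigger an alarm:

```python
def armed_entry_alarm(identity_confirmed, seconds_elapsed, robot_moved,
                      timeout_s=60):
    """Alarm decision after an entry is sensed while the system is armed.
    timeout_s is a placeholder for the user-defined confirmation window."""
    if identity_confirmed:
        return "ok"                      # known user, stand down
    if robot_moved or seconds_elapsed > timeout_s:
        return "alarm"                   # tampering or no confirmation in time
    return "awaiting_confirmation"       # still within the window
```

In a full system the "alarm" outcome would feed the same remote notification targets used for fall alarms.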
And the following steps: perceiving that the child goes home, the robot actively initiates a dialogue to the child in a foreign language to greet the child and or the immersive interactive learning in the foreign language; when the child plays indoors on weekends, the robot automatically plays foreign language videos, music, poetry and the like which the child likes, actively initiates dialogue interaction to the child in a foreign language according to the played content and the spatial behavior of a user, builds a foreign language learning environment which really lives abroad, and realizes immersion environment type foreign language interactive learning.
The following steps are repeated: the robot senses that the user falls down or actively seeks help, the system automatically gives a pre-alarm, and when the alarm is over the user-defined time or confirmed by the user, the system automatically gives an alarm to the parent mobile terminal or the property service center platform or the service operation platform and the like. The perception user goes to the hospital to see a doctor and goes home, and actively reminds the user to put a doctor diagnostic book in front of the robot for scanning and entering into the file. Sensing that a user goes home at night, automatically starting a built-in lamp of the robot to illuminate different scenes and/or starting projection and/or playing music so as to create a warm and comfortable environment for a family, without the need of matching sensing, hardware and a system, and realizing the scene function independently by a single product to solve the problems that old family users are unwilling to intelligently modify or are difficult to modify and the like.
The crown-capturing service robot provided by the invention is an innovative product based on the desk lamp. It inherits the desk lamp's excellent characteristics: it is a product people already need, sits close to the user, has its own scene output, is near a communication interface, sits at a height above the floor, supports many application scenes, has a relatively fixed yet movable position, and requires no installation. Through innovations in structure, system, method and algorithm, this single product addresses many user pain points: indoor active lighting, emergency lighting, active sound, active projection, active care, OCR, video monitoring, interaction, wireless coverage, space safety, environmental safety, home control, equipment safety, emergency help, fall alarms, physical sign perception, health files, active health supervision and management, supervised learning, native-language-style foreign-language learning, community services, privacy protection, data safety and so on. It can also cooperate with multiple robots, with external sensors, or with traditional smart hardware, using its active sensing, algorithmic, computing, communication and data-storage capabilities to satisfy still more user needs, so that users trust, rely on and frequently use the functions and services the robot provides, producing strong stickiness.
A robot that solves real user needs with high frequency, strong stickiness, wide applicability and good experience gives intelligent-community digital operation a foundation and value; through business-model innovation, robot-based community digital operation can promote a virtuous circle in the industry ecosystem, which is significant for the development of smart homes and smart communities.
Fig. 5 is a schematic structural diagram of a cover plate of a crown-capturing service robot according to some embodiments of the present disclosure. Referring to figs. 2 and 5, in some embodiments of the present invention, the connecting assembly 300 includes: a cover plate 310, a rotary base 320, a first rotary shaft 330 and a second rotary shaft 331. The first shaft 330 is disposed on the rotary base 320 and fixedly connected to the head 100 through its shaft center, so that the head 100 can rotate in a second direction. The cover plate 310 is arranged on the top of the body 200; the rotary base 320 is disposed on the cover plate 310, and the second shaft 331 is rotatably connected to the cover plate 310 so that the head 100 can rotate in a first direction. A projection module for video interaction with the user is provided in the head 100. The projection module includes a projector and a projection lens: the projector is disposed within the head 100, and the projection lens is disposed on the head 100 and rotates with it.
Specifically, the cover plate 310 may have a square cross-section, with corresponding grooves formed on its edge, such as a front groove, to engage with the body 200. The top wall of the cover plate 310 is provided with a rotating groove 311, into which the second shaft 331 at the bottom of the rotary base 320 is rotatably connected; the first shaft 330 on the rotary base 320 has an enlarged shaft section with a through hole passing through the rotary base 320 for routing various cables. This design makes the connecting assembly resemble an athlete's neck, enlarges the threading hole and eases cable routing. The first direction is the circumferential direction of the second shaft 331, which can drive the projection lens to rotate in the horizontal plane through at least 180 degrees. The second direction is the circumferential direction of the first shaft 330, which can drive the projection lens to rotate vertically: downward by up to 60 degrees and upward by up to 30 degrees. It should be understood that the specific rotation angles can be designed according to the actual situation, and the invention is not limited in this respect.
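The stated mechanical limits can be captured in a small clamping helper. Modeling the "at least 180 degrees" horizontal travel as ±90° about center is an assumption for illustration; the vertical limits follow the figures given above:

```python
def clamp_head_angles(pan_deg, tilt_deg):
    """Clamp requested head angles to the described mechanical limits:
    180 deg of horizontal rotation (modeled here as +/-90 deg about
    center, an assumption), 60 deg down and 30 deg up vertically."""
    pan = max(-90.0, min(90.0, pan_deg))
    tilt = max(-60.0, min(30.0, tilt_deg))
    return pan, tilt
```

A motion controller would apply this before commanding the two shafts, so out-of-range projection requests degrade gracefully instead of stalling the mechanism.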
With the projection module in the head 100 and the first and second rotation directions of the connecting assembly 300, the head 100 can project toward different angles; combined with the robot's input and output modules, video-and-audio, somatosensory or content interaction with the user can be realized. The cover plate 310 and its locking member give the connection between the head 100 and the body 200 high stability, suiting users who value equipment safety and user experience.
Fig. 6 is a schematic structural view of a connecting assembly of a crown-capturing service robot using a magnetic base and a magnetic iron block according to some embodiments of the present invention. Referring to fig. 6, in some embodiments of the invention, the connecting assembly 300 includes a magnetic base and a magnetic iron block 340. The magnetic base is arranged on the head 100; the magnetic iron block 340 is arranged on the top of the body 200 so as to attract the magnetic base. Specifically, the magnetic base is arranged at the bottom of the head 100, with a male contact-type communication interface in its middle; a corresponding female contact-type communication interface is arranged in the middle of the magnetic iron block 340, below which a bracket 341 is provided. The bracket 341 snaps into a buckle groove provided at the top of the body 200 and is fixed to the body 200 by screw fasteners.
The magnetic connection between head and body solves the problem of quick, firm attachment and suits scenes requiring high flexibility, adapting to users' multi-scene application needs; the fixed, rotatable connection between head and body solves the rotation of a head with a built-in projection module, meeting users' experience needs for multi-scene projection output. The two modes exist independently, and a user can purchase a module-swap upgrade without replacing the main device, satisfying ever-changing experience requirements at low cost.
Referring to fig. 4, in some embodiments of the invention, the camera rotation module 400 includes: a mounting seat 410, a rotating portion 420, a camera 430 and a built-in magnet. The mounting seat 410 is provided on the body 200; the rotating portion 420 is rotatably coupled to the mounting seat 410; the camera 430 is disposed on the rotating portion 420; and the built-in magnet is provided on the rotating portion 420 so that it rotates with it. A detection module judges the rotation angle of the rotating portion 420 by detecting the angle of the rotating magnet and, when sensing user behavior and needs in the space, decides from the state and angle of the rotating portion 420 whether to remind or care for the user in the specific scene. For example: when the robot senses that the hostess has come home, it proactively reminds the user to close the camera privacy switch so as to physically cut off the camera signal and protect privacy. As another example: when the robot senses that a child is approaching the desk to start studying, it proactively reminds the user, in the foreign language, to rotate the camera all the way down (no reminder is needed if the camera is already in the downward monitoring state), so that the camera can recognize book content or supervise the child's learning, practicing the foreign language and increasing the user's interactive experience.
Specifically, the side of the body 200 near the head 100 protrudes outward to form the mounting seat 410, which cooperates with the side of the body 200 to form a rotating cavity 411 for mounting the rotating portion 420; the rotating portion 420 is generally cylindrical and is rotatably connected in the rotating cavity 411 through a rotating shaft. The camera 430 is disposed on the side of the rotating portion 420, and a through hole communicating with the inside of the body 200 is provided at the bottom of the inner side of the rotating cavity 411 for routing a communication cable.
Two rotary magnets are disposed on the rotating portion 420, one above the other. The detection module includes a magnetic sensor positioned so that it can detect the states of the two rotary magnets and thereby determine the position of the rotating portion 420. This supports video identification or monitoring of the environment, people or objects, meeting users' needs in different scenes; at the same time, the manual rotation design of the module protects user privacy, reduces product cost, and adds interactive operation between user and device. The camera 430 is also provided with a state sensor and, combined with the robot's other built-in sensors, can sense user needs, usage habits, personal preferences and the like. It should be understood that the camera rotation module 400 can be disposed at other positions to meet the user's requirements, and the present invention is not limited in this respect.
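Reading the two stacked magnets through one magnetic sensor yields four discrete states, which a firmware routine could map to camera orientations. The patent only says the magnet states determine the position; the specific mapping below is an illustrative assumption:

```python
def camera_orientation(upper_magnet_sensed, lower_magnet_sensed):
    """Infer the rotating portion's position from which of the two
    stacked rotary magnets the magnetic sensor currently detects.
    The state-to-orientation mapping is an illustrative assumption."""
    states = {
        (True, False): "forward",          # camera facing the room
        (False, True): "down_monitoring",  # camera rotated to the bottom
        (True, True): "transition",        # mid-rotation
        (False, False): "unknown",         # sensor fault or out of range
    }
    return states[(bool(upper_magnet_sensed), bool(lower_magnet_sensed))]
```

The "down_monitoring" state is what suppresses the rotate-the-camera reminder in the study scene described above.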
Fig. 7 is a top view of a rotating arm lamp assembly of a crown-capturing service robot according to some embodiments of the present invention. Fig. 8 is a bottom view of the same rotating arm lamp assembly. Referring to figs. 7 and 8, in some embodiments of the present invention, the rotating arm lamp assembly 500 comprises: an arm rod 510, an upper arc lamp 520, a lower bevel lamp 530, a connecting magnet and a suction part 540. The upper arc lamp 520 is disposed on the top wall of the arm rod 510 for emergency, night, color-changing scene and ambient lighting. The lower bevel lamp 530 is disposed on the bottom wall of the arm rod 510 for healthy lighting without blue light and without flicker. The connecting magnet is disposed in the body 200. The suction part 540 is provided on the inner vertical surface of the arm rod 510, fastened to the arm rod 510 by screws so as to be attracted to the magnet.
Specifically, the arm rods 510 and the body 200 form an upper torso; the left and right arm rods 510 can rotate to unfold and fold the rotating arm lamp assembly 500; the middle of each arm rod 510 can carry an elbow-guard shape and the front end a glove shape, so that the arm rod 510 looks more like a human arm. The double-sided (upper and lower) lighting design of the rotating arm lamps increases the robot's output scenes and meets users' needs in different scenes; the lower bevel lamp increases the illumination in the user's working area; the shading edge protects the user's eye health by preventing direct sight of the light source. The suction part on the inner side wall keeps the rotating arm rod stably fixed and durable when unfolded, while reducing the design and production difficulty of the arm-rod mounting seat, realizing the rotating-arm function at low cost. The arm rod 510 is simple and attractive, increasing interactive experience while reducing the robot's footprint so that it suits more application scenes; together with the head and the leg assembly 600 it can form an elongated trophy shape which, with the product form of the skier's high-platform crown-capturing scene, inspires users: only with a tenacious, striving spirit can one win the trophy in one's own field.
The upper arc lamp 520 and the lower bevel lamp 530 can be combined with the robot's built-in radar sensor, gesture sensor, distance sensor and the like to realize intelligent lighting in which the robot senses indoor user behavior and actively switches the lamps on and off. For example: when a user is sensed sitting at the desk, the lower bevel lamp turns on automatically; when someone is sensed entering the room, the upper arc lamp 520 turns on; when the user is sensed to be asleep, all lamps turn off; when the user is sensed getting up at night, the night light turns on, and turns off again when the user returns to bed; during festivals, when the user's presence is sensed, the red ambient lighting of the upper arc lamp 520 turns on; when the user needs light at night during a power failure, the emergency lamp turns on; and so on. The emergency lamp is powered by the robot's built-in energy-storage battery and uses an energy-saving, low-power LED of suitable brightness to extend the lighting time.
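The lighting scenarios just listed are naturally expressed as an event-to-lamp-state rule table. The event names are hypothetical; the lamp channels follow the description above:

```python
# Hypothetical sensed-event names mapped to lamp-channel updates.
LIGHTING_RULES = {
    "user_sits_at_desk": {"lower_bevel": True},
    "user_enters_room":  {"upper_arc": True},
    "user_asleep":       {"upper_arc": False, "lower_bevel": False,
                          "night_light": False, "emergency": False},
    "user_up_at_night":  {"night_light": True},
    "user_back_to_bed":  {"night_light": False},
    "festival_presence": {"upper_arc": True, "upper_arc_color": "red"},
    "power_failure_need_light": {"emergency": True},
}

def apply_lighting_rule(event, lamp_state):
    """Return a new lamp-state dict with the rule for `event` applied;
    unrecognized events leave the state unchanged."""
    lamp_state = dict(lamp_state)
    lamp_state.update(LIGHTING_RULES.get(event, {}))
    return lamp_state
```

Keeping the rules as data rather than branching code makes it easy to let users customize scenes later.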
The arc-surface design expands the illumination range of the built-in lamp, and the lower bevel design projects the light source onto the desktop working area, while the left-right symmetric light sources avoid lighting shadows. A correspondingly shaped shade 560 may also be provided on the arm rod 510 to conceal the light source and keep it from shining directly into the user's eyes at eye height. Various decorative patterns may also be provided on the upper arc lamp 520 so that the arm rod 510 resembles a human hand more closely and integrates with the athlete form.
The suction part 540 may be a metal sheet provided on the inner side surface of the arm rod 510. This vertical inner metal sheet cooperates with the robot's built-in connecting magnet: when the left and right arm rods 510 are rotated upward to a certain position, they are attracted and held, improving the durability of the rotating arm lamp assembly 500's fixation.
In some embodiments of the present invention, the body 200 is provided with a receiving notch, an arm-rod mounting bin and an arm-rod mounting seat. The arm-rod mounting seat is rotatably connected with the arm rod. A connecting magnet is arranged in the arm-rod mounting bin to attract and fix the suction part 540 after the arm rod 510 is rotated upward. The receiving notch accommodates the arm rod 510 when folded downward. When the arm rods 510 fold into the receiving notches, the arm rods 510, body 200, head 100 and leg assembly 600 together form a cylindrical whole, making the robot attractive, space-saving and stable on a desktop and meeting its multi-scene application needs; together with the functional base 700 it forms a cylindrical trophy shape, continuing the athlete's skiing crown-capturing scene and inspiring users that only through tenacious striving can they win the trophy in their own fields.
In some embodiments of the present invention, the connecting assembly 300, the camera rotation module 400, the rotating arm lamp assembly 500, the leg assembly 600 and the functional base 700 are each provided with a state detection module for detecting the operating state and module state of the corresponding structure.
Specifically, the state detection module may include a processing unit and corresponding sensors, and the crown-capturing service robot can determine the state of a component and/or module and/or the complete machine from the sensing data, so that the robot can actively control, output through multiple channels, proactively remind, alarm or care, interact with scenes, protect the device, protect user privacy and the like. This design protects the safety of the robot's equipment, reveals the user's habits and preferences, improves interactive experience, allows tracing of fault causes and of the robot's working conditions and states, and lays a foundation for the robot to solve the user's many needs.
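Deriving a complete-machine state from the per-module state detection modules can be sketched as a simple aggregation. The status labels ("fault", "busy", "idle") are illustrative assumptions; the patent only requires that component, module and whole-machine states be derivable from the sensing data:

```python
def machine_state(module_states):
    """Aggregate per-module states into a complete-machine state,
    letting faults anywhere dominate the overall status."""
    if any(s == "fault" for s in module_states.values()):
        return "fault"   # e.g. trace the fault cause, alert the user
    if any(s == "busy" for s in module_states.values()):
        return "busy"    # a module is mid-operation
    return "idle"
```

Such a rollup is what lets the robot decide whether it can safely offer a service or must first report a fault.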
Referring to fig. 4, in some embodiments of the present invention, the leg assembly 600 includes a leg 610. The leg 610 is squat-shaped, arranged on the lower side of the body 200 and connected with the sliding plate 620 of the functional base 700; heat-dissipation and ventilation holes are formed in the leg 610, and sensors are arranged inside it to detect the air environment, body temperature, human body and/or gestures and the like. The side of the body near the leg 610 and the side of the functional base 700 near the leg 610 are both provided with heat-insulation protection modules.
Specifically, the leg 610 includes a thigh and a shank connected at a bend so that the leg 610 assumes a squatting shape imitating a skiing action, and a sliding plate 620 is provided between the bottom of the leg assembly 600 and the functional base 700. The interior of the leg 610 is hollow and houses environment sensors for temperature, humidity, air quality, smoke and the like, a body temperature sensor, and distance or posture sensors; corresponding heat-dissipation holes can be provided on the sides of the leg 610, cooperating with the holes on its left, right and rear, so that the temperature-sensitive sensors inside are not affected by the heat of the body 200 and the functional base, guaranteeing sensing precision. Through the leg assembly 600, room environment perception, human body temperature perception and user gesture perception are realized. Combined with the sensing of the built-in radar sensor and the close-range human-body sensor, functions such as active environment reporting and reminding the user to measure body temperature when a hand approaches are conveniently realized.
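The hand-approach trigger for body-temperature measurement reduces to a proximity check against the leg's distance sensor. The 5 cm threshold below is an illustrative assumption:

```python
def temperature_measurement_trigger(hand_distance_cm, threshold_cm=5.0):
    """Start a body-temperature measurement when the distance/posture
    sensor in the leg senses a hand close to the sensor; a None reading
    means nothing is in range. The 5 cm threshold is an assumption."""
    return hand_distance_cm is not None and hand_distance_cm <= threshold_cm
```

Debouncing over several consecutive readings would likely be added in practice to avoid false triggers from passing motion.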
Meanwhile, no heat-dissipation or ventilation holes are provided on the front surfaces of any of the robot's parts. This mainly protects user privacy by preventing bad actors from hiding covert sensors in front-facing holes and harming the user's interests, and it also preserves the appearance of the product. The built-in sensor module includes the air-environment, body-temperature, human-body and/or gesture sensors inside the leg 610; this design lets the robot collect more comprehensive environment and physical sign data to build health files for users, enabling targeted management of their health. The built-in sign sensors also reduce users' spending, raise the robot's stickiness and usage frequency, and lay a foundation for innovative business models in smart-home and community digital operation. The collected comprehensive data can furthermore be analyzed into user habits and profiles, laying a foundation for the robot to serve users better.
Fig. 9 is a schematic diagram of the base structure of a crown-capturing service robot according to some embodiments of the invention. Referring to figs. 1, 4 and 9, in some embodiments of the invention, the functional base 700 includes a functional table 701, a display screen 710 and a base cover 720. The functional table 701 is disposed obliquely with its top end connected to the leg 610. The display screen 710 is provided on the top surface of the functional table 701 for displaying dynamic pictures. The base cover 720 is disposed on the bottom surface of the functional table 701 and carries a plurality of functional communication ports. In addition, the outer base cover 720 and the body's "7"-shaped back plate can be connected into one piece and made thinner and detachable, reducing cost while keeping the structure easy to realize.
Specifically, the functional table 701 is an irregular cylinder formed by cutting an inclined cylinder on its two lateral sides, front side, and bottom surface: the left, right, front, and bottom faces of the cut body are planes, while the top surface forms an inclined included angle with the horizontal plane. The display screen 710 is arranged on this top surface so that the user can conveniently operate, watch, and experience the functions, services, and content the robot provides. This design makes signal and interface input/output convenient on the left, right, and front sides of the functional table 701, reduces the space occupied by the inclined cylinder, and the horizontal cutting plane further helps the robot stand or be mounted stably, enlarges the interior space of the ski-platform functional base, and leaves room above the display screen for other sensors, all while preserving the overall attractiveness of the robot. The round side can be sprayed in an earth-toned circular finish and the cylindrical top surface in snow-white, symbolizing the global participation in the Winter Olympics and the Olympic fighting spirit.
The front side of the functional table 701 is provided with a microphone array 726; the upper part of the top surface carries a vital-sign sensor 740; and the left and right sides carry a plurality of functional communication ports such as a 3.5 mm headphone jack, an S/PDIF optical audio interface 721, an HDMI interface 722, a USB interface 723, a volume adjustment switch 725, a privacy switch, and a power switch. The multiple communication ports and the vital-sign sensor improve and expand the robot's interactive experience, protect user privacy, and satisfy the user's multi-scene experience requirements, improving user stickiness. Meanwhile, the robot can analyze the user's habits and profile from how often and in which scenes each interface and sensor is used, so that it can provide functions and services more proactively in the future. Placing the vital-sign sensor on the top surface of the cylindrical functional part not only encourages frequent daily use but, more importantly, attracts the user to feed data from other medical-grade vital-sign sensors in the home into the robot, so that the robot acquires more comprehensive data to establish a health record for the user, with which targeted health management can be carried out effectively. In addition, the built-in vital-sign sensors reduce the user's cost of ownership, improve user stickiness and usage frequency, and lay a foundation for innovative business models in the digital operation of smart homes and communities.
The base cover 720 is close to an excavator-claw shape; its back carries dual RJ45 network interfaces, a Type-C interface, an emergency button 727, distance-sensing openings, a sound outlet, heat-dissipation holes, and fixing seats. The open side of the base cover 720 is fixed with connecting screws to the bottom edge of the functional table 701 to form the ski-platform functional base 700. The excavator-claw design increases the interior space of the ski-platform functional base 700, and combining the claw-shaped base cover 720 with the functional table 701 to form the base eases the hardware implementation, improves the base's attractiveness and stability, and reduces the desktop space occupied. The distance-sensing design of the base cover 720 lets the robot sense and judge the state of the desktop scene, e.g. the distance to the wall, to determine whether the user has moved the robot and/or whether it sits at the reference detection point, so that the robot can provide scene-specific functions and services for the user better and more proactively.
The fixing seats 728 are arranged in pairs, one above the other, with one pair on each of the left and right sides of the back, forming double upper-and-lower fixing seats 728 that let the desk lamp be conveniently fixed on a desktop or mounted on a wall through a bracket. The dual network interfaces let the desk lamp connect to the room's wired network port: the robot converts the wired network into Wi-Fi and/or IoT wireless for indoor coverage, while the output network port lets the user connect an external computer or smart device. Meanwhile, by combining indoor wireless coverage with the robot's built-in radar sensing of user behavior in the room, the robot can intelligently switch coverage on and off, and proactively care for the user by toggling the privacy switch and/or controlling volume and/or outputting instructions. For example: sensing that a child has gone to bed, the robot automatically turns off Wi-Fi coverage to prevent the child from using a mobile phone in bed; sensing that the hostess has entered the room, it proactively reminds her to close the camera sensor to protect her privacy; sensing that the user has got up at night, it automatically turns on the small night light on its back.
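The proactive-care behaviors above are, in essence, mappings from a sensed user state to device actions. A minimal sketch of such a rule table in Python follows; all state names, zones, and action strings are hypothetical illustrations, not identifiers from the patent:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    person: str   # hypothetical classes, e.g. "child", "adult"
    zone: str     # sensing area, e.g. "bed", "doorway"
    posture: str  # e.g. "lying", "standing"

def decide_actions(obs: Observation) -> list:
    """Map one sensed user state to a list of proactive device actions."""
    actions = []
    if obs.person == "child" and obs.zone == "bed" and obs.posture == "lying":
        actions.append("wifi_off")           # keep the child off the phone in bed
    if obs.person == "adult" and obs.zone == "doorway":
        actions.append("remind_camera_off")  # privacy reminder on entering the room
    if obs.zone == "bed" and obs.posture == "standing":
        actions.append("night_light_on")     # user got up at night
    return actions
```

A real system would feed `Observation` from the radar/vision pipeline and dispatch the action strings to actuator drivers; the point here is only the declarative state-to-action structure.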
Fig. 10 is a schematic structural diagram of a first implementation of a mounting bracket of a crown-capturing service robot in some embodiments of the invention. Fig. 11 is a schematic structural diagram of a second implementation of a mounting bracket of a crown-capturing service robot in some embodiments of the invention. Fig. 12 is a schematic structural view of an object-supporting bracket of a crown-capturing service robot in some embodiments of the invention. Referring to figs. 10, 11 and 12, in some embodiments the crown-capturing service robot further comprises a mounting bracket 730. The mounting bracket 730 is provided on the base cover 720 and includes, but is not limited to, a triangular placement-reinforcing bracket 732, a clamp-type fixed mounting bracket 731, and a wall-mounted fixed mounting bracket. An object-supporting bracket 733 is attached by a snap-fit in front of the functional table 701; together with the functional table 701 it forms a space for bearing objects, and it is shaped as the tip of the ski 620. This design makes it convenient for the user to place articles such as mobile phones and books, echoes the skier's ski, and improves the product's attractiveness and interactive experience. The mounting bracket 730 is used together with the left and right fixing seats 728 on the back of the desk lamp to meet the varied requirements of different users and scenes. The clamp-type mounting bracket 731 comprises a double-fixing-seat connecting part and a clamp-type table-edge clamping part, fixing the robot to a table and so protecting the safety of the equipment.
The placement-type mounting bracket 732 is triangular or poly-triangular; one side connects to the robot's double fixing seats 728 and the other side rests on the desktop, supporting the robot so that it cannot tip over backward or to either side, protecting the safety of the equipment.
The wall bracket comprises wall fixing screws, a vertical rod, and a rod-head fixing buckle. The robot's fixing seats 728 slip over the vertical rod, and the rod-head fixing buckle closes a loop around the rod, preventing the robot from falling off and the vertical rod from breaking. The bracket forms are not limited to the components delivered as standard accessories; a user may also customize or purchase mounting brackets 730 with various characteristics to meet personal and aesthetic requirements. This design improves the stability of the robot's placement or installation so as to suit the user's multi-scene experience requirements.
The object-supporting bracket 733 is nearly dune-shaped and comprises a double fixing plate, double ski tips, and a vertical connecting piece: the ski tips match the ski 620, the double fixing plate lies horizontally on the plane, the vertical connecting piece stands on the top wall of the double fixing plate, and the ski tips are arranged on the vertical connecting piece; the three can be integrally formed or welded together. This design makes it convenient for the user to charge a mobile phone or prop up a book for study on the inclined surface, and it more vividly restores the scene of the championship-winning athlete's skiing, encouraging the user. It avoids the problem of the functional base 700 occupying desktop space, and the snap-fit connection is very convenient to remove. The object-supporting bracket 733 and the functional base 700 form a cavity in which the user can place articles such as a mobile phone, a book, or a dictionary, improving desktop space utilization while improving the user's experience and product stickiness.
In some embodiments of the present application, the crown-capturing service robot may further be provided with various decorative patterns or colors. Positions for decorative patterns or colors include, but are not limited to: head feature patterns and/or colors such as the goggles of the head 100; hand feature patterns and/or colors such as the elbow-guard and glove shapes of the rotary arm lamp assembly 500; the humanoid form; the front and round sides of the functional table 701; the lower parts of the leg assemblies 600; the skis 620; the base cover 720; and overall surface decoration. This design benefits the attractiveness of the robot's overall appearance and the user's affinity and mental resonance with the product, reduces excessive user demands on the appearance form, and helps standardize the product and reduce implementation and iterative optimization costs.
Based on the concept of the crown-capturing service robot, fig. 13 is a flowchart of a non-command active intelligence implementation method according to an embodiment of the present invention. As shown in fig. 13, the method comprises the following steps:
S110, determining a reference detection point for placing the crown-capturing service robot, and guiding the user to arrange the crown-capturing service robot at the reference detection point.
The reference detection point is the most frequently used position for placing the crown-capturing service robot. With its desk-lamp advantages and characteristics, the robot is usually arranged close to a wall, e.g. on a desktop, at the bedside, or beside a sofa. In addition, where fixed radar sensors are installed and networked to expand the spatial sensing range, the corresponding reference detection point is usually a wall of the room.
In practical application, after a user or installer starts the crown-capturing service robot and/or installs and networks the radar-sensor reference detection points of other rooms that communicate normally with it, the robot senses the presence of the user and actively outputs voice and/or screen display and/or projection and/or light to guide the user to place it in the most frequently used scene (the reference detection point). The robot then judges with its built-in sensors whether it actually sits at the reference detection point. If it senses that its back is more than 50 cm from the wall surface, it can actively output voice or display and light to confirm the reason for, or the authenticity of, the position with the user; close the privacy switch (when the sensing function is in the on state); and rotate the camera module upward to its limit (when the sensing function defaults to the normal monitoring state, or when the rotation shaft is at any intermediate position, so as to ensure that the robot can sense normally in the horizontal direction; if the system is configured with a fixed camera module, this step is unnecessary). The process proceeds to the next step after the user confirms in any of several ways, or after the robot senses that the user has moved it and/or adjusted its sensing direction. If the user neither operates nor confirms, the system actively outputs voice and/or screen display and/or projection and/or light to remind the user the next time user presence is sensed.
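The 50 cm wall-distance check above reduces to a small decision function. A sketch follows; the return labels are illustrative names, not from the patent, and the threshold defaults to the 50 cm figure cited in the text:

```python
def reference_point_check(back_distance_cm: float, threshold_cm: float = 50.0) -> str:
    """Decide whether the robot appears to sit at its reference detection
    point, based on the sensed distance from its back to the wall.
    Beyond the threshold, the robot should confirm the position with the
    user (and close the privacy switch / tilt the camera as described)."""
    if back_distance_cm > threshold_cm:
        return "confirm_with_user"
    return "accept_position"
```

In the full flow, a `"confirm_with_user"` result would trigger the voice/display/light prompt described above rather than blocking the pipeline.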
The crown-capturing service robot can be installed in scenes such as a living room, dining room, bedroom, or study, and also in many other scenes such as offices, apartments, conference rooms, wards, exhibition rooms, stores, schools, and factories. Considering the cost, demand, and deployment of whole-space sensing, there are spaces (e.g. kitchens, toilets, passageways, elevator halls, bedrooms, living rooms) where the crown-capturing service robot need not be deployed; there, networked communication with reference-detection-point radar sensors fixedly installed in those spaces expands the robot's spatial sensing range (the reference detection points of several rooms are managed by one crown-capturing service robot at the same time), so that a family can intelligently sense its entire home space with at least one crown-capturing service robot, realizing true indoor whole-space non-command active intelligence. Of course, indoor whole-space non-command active intelligence could also be realized with only a fixed service robot plus fixedly installed radar-sensor reference points networked to it; but compared with a fixed service robot, the crown-capturing service robot's advantages are flexible positioning, more scenes, closeness to the user, convenient communication and power access, no installation, no pairing, and easier adoption. Therefore, the key problem in realizing non-command active intelligence is how to convert the perception of the user in the indoor space into judgments of the user's behavior and demands, based on the crown-capturing service robot.
To this end, the method innovates by performing algorithmic judgment on the users perceived in the space by the crown-capturing service robot and by the reference-detection-point sensors networked to it for extended sensing, so as to identify the behavior and demands of users in the home space, acquire information, and/or output its own scenes and/or output corresponding functions and services through other networked intelligent devices or systems. The crown-capturing service robot can thus independently and quickly realize non-command active intelligence, and can endow other traditional networked intelligent devices with non-command active intelligent functions, thoroughly changing the series of problems of traditional home intelligent systems (passivity, manual operation, user self-management, inconvenient voice control, complex integration, difficult installation, difficult standardization, and difficult popularization and adoption) and making the user's life, work, study, entertainment, and home easier, safer, and smarter.
Fig. 14 is a sub-flowchart of a non-command active intelligence implementation method according to an embodiment of the present invention.
S120, performing indoor space sensing based on the reference detection point, so as to configure a spatial structure coordinate graph according to the sensing result.
The sensing result is obtained by the crown-capturing service robot, and/or by the fixedly mounted radar sensors networked to it for extended sensing, detecting indoor objects and structures. At this point the indoor spatial environment is determined, so that a spatial structure coordinate graph can be determined from the spatial structure; the spatial structure coordinate graph is a parameterized description of the indoor spatial environment.
Specifically, the configuration of the spatial structure coordinate graph in this embodiment mainly includes two modes, manual and automatic, wherein the automatic mode further covers three cases: from image data, from radar data, and from combined multi-sensor data. That is, step S120 includes steps S121 to S124:
S121, determining a spatial structure layout according to the user's adjustment operations and confirmation instruction, based on a preset structure layout and/or an actual structure layout imported by the user.
In this embodiment, the crown-capturing service robot further provides a system modeling program configuration interface, through which the screen display content and/or voice broadcast of the preset structure layout and/or the user-imported actual structure layout is configured; the robot is also configured with an import path for actual structure layouts. Based on the configuration interface, file formats and conventional main-structure or article parameters can be presented for the user to confirm or adjust (for example, the positions and specifications of the door, window, sofa, television, and wall width and length of a living-room scene). The user completes the input according to the content or guidance and marks one to three reference detection point positions. If the user marks more than three reference detection points, or the system senses that the indoor room exceeds a preset size so that any point of the space may become a detection point, the crown-capturing service robot actively reminds the user by voice and/or screen display and/or projection and/or light to install at least three positioning beacons or base stations at positions with obvious indoor structural features, marks those positions on the spatial structure layout, and, combining the detection direction of the geomagnetic sensor and the robot's detection direction at the reference detection point, generates an indoor spatial structure layout based on the reference detection position.
S122, if no adjustment operation and confirmation instruction from the user is detected, acquiring an indoor space image, identifying indoor articles and spatial structure features based on the indoor space image, and generating a spatial structure layout according to the indoor articles, the spatial structure features, and preset feature data.
When the crown-capturing service robot senses that the user is present in the space, screen display and/or voice and/or projection and/or light guide the user to input the configuration content. If the user does not operate, or is unwilling to operate, for longer than a preset custom time, or the user directly confirms that no operation will be performed, the robot automatically starts the system modeling program after sensing that the user has left. For example, while the user is still sensed in the space the start of the modeling program is suspended, or the screen, projection, or active voice outputs "starting the system modeling program" and "please allow a custom time (e.g. 5 minutes) for environment adaptation and leave the room", so that the system can identify and model the space. The single or double camera module then photographs the indoor space, and recognition technology identifies the main conventional indoor articles and spatial structure features in the pictures (e.g. beds, sofas, windows, bedside cabinets, tables, chairs, doors, floor tiles). The crown-capturing service robot works from a fixed picture scale (for example, if the horizontal and vertical scale of the picture is 1 cm x 1 cm and the shooting distance is 1 meter, the actual object size is 0.2 x 0.2 meter; conversely, if the object is known to be 0.2 x 0.2 meter, the camera is 1 meter from the object) and from feature data (e.g. door 0.9 m wide and 1.9 m high; window 0.9 to 1.05 m high; sofa 0.42 m high; bed 0.5 m high, 1.2 to 1.8 m wide, 1.9 to 2 m long; bedside table 0.5 m wide, 0.4 m deep, 0.7 m high; table 0.8 m wide, 1.4 m long), on the same principle by which a person or a telescope judges objects at fixed distances, to judge and output the shape, size, and direction of the indoor spatial structure and conventional objects and their distance from the robot; it then generates the spatial structure layout of the visible indoor area according to the detection direction of the geomagnetic sensor and the robot's own detection direction. When the robot senses that the user is present, it actively outputs voice, screen display, or projection such as "please rotate the robot 60 degrees to the left (or right) in place" (the monitoring angle of the configured camera module is generally not below 60 degrees, images shot within that angle deform little, and rotating 60 degrees to each side splices images or spaces covering exactly 180 degrees, so that a robot placed close to a wall can sense the indoor space comprehensively). When the robot senses the rotation, it judges through the detection parameters of the geomagnetic sensor and/or the triaxial gyroscope that 60 degrees has been reached and actively tells the user to stop; after sensing that the user has left the room, it starts the modeling program again, repeats the previous steps, and generates the spatial structure layouts of the areas to the left and right of its reference visible area. Finally, the robot combines and splices the spatial structure layouts of the reference detection area and the left and right detection areas into a complete spatial structure layout map and, taking the reference detection point as the coordinate origin together with the geomagnetic sensor, generates an indoor spatial structure layout map based on the robot's reference detection position.
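The fixed-scale ranging rule above (apparent size falls off as 1/distance at a fixed focal length) can be sketched as a one-line pinhole-style estimator. The calibration triple below (an object 0.2 m across spans 20 image units at 1 m) is an assumed illustration consistent with the patent's 1 cm scale example, not a stated parameter:

```python
def estimate_distance(real_size_m: float, apparent_size_units: float,
                      cal_distance_m: float = 1.0,
                      cal_real_size_m: float = 0.2,
                      cal_apparent_units: float = 20.0) -> float:
    """Range estimate from a known object size: apparent image size is
    inversely proportional to distance, so distance = k * real / apparent,
    where k is fixed by one calibration observation."""
    k = cal_distance_m * cal_apparent_units / cal_real_size_m  # focal constant
    return k * real_size_m / apparent_size_units

# e.g. a door known to be 0.9 m wide that spans 18 image units is 5 m away
```

Combined with the typical dimensions in the feature data (door 0.9 m wide, sofa 0.42 m high, etc.), this is how a single photograph yields both object distances and the room's scale.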
S123, or, if no adjustment operation and confirmation instruction from the user is detected, acquiring indoor radar detection data, identifying indoor articles and spatial structure features based on the indoor radar detection data, and generating a spatial structure layout according to the indoor articles, the system's default specification parameters, the spatial structure features, and preset feature data.
A user or installer starts the crown-capturing service robot. When the robot and/or the fixedly installed radar sensors networked to it for extended sensing detect that the user is indoors, voice, screen display, or projection is actively output to guide the user to place the robot in the most frequently used scene (the reference detection point) with its back parallel to the wall, and to input the configuration content. If the user does not operate, or is unwilling to operate, for longer than the custom time, or directly confirms that no operation will be performed, the system starts the automatic configuration program of the modeling system. The crown-capturing service robot starts the radar static-object detection mode and describes the indoor spatial structure and the shapes and sizes of objects according to the sizes of the electromagnetic-wave reflection areas, the direction and distance of the radar relative to the objects, and the system's default specification parameters, generating a spatial structure layout diagram. For example, when the robot is provided with a micro projector whose projection distance is 2 meters, the projection screen area is 60 inches. If the robot's single built-in radar sensor has a limited detection angle (e.g. a single radar detects 90 degrees; dual radars can form a 180-degree detection angle), then when the system senses that the user is present it actively asks the user to rotate the robot leftward and/or rightward so that the system can generate a complete spatial structure layout. The crown-capturing service robot then compares the generated indoor spatial structure layout data with the general feature data of conventional household objects and structures to judge whether each detected object specification matches a real object. If not, the system automatically re-detects the object and compares the specification parameters again; if re-detection exceeds a self-determined number of times and the deviation of the compared specification parameters is still large, one recognition anomaly is recorded, the spatial structure layout is still generated, and the objects or spaces with large recognition deviation are marked. When the user is sensed to have left home and the camera is started in the normal monitoring state, the object specifications are re-checked through video recognition, or confirmed with the user through active voice, screen display, or projection, e.g., on sensing the user's presence, actively asking by voice: "Host, how wide is the door of the room?" If the video re-check or the user's confirmation shows a large deviation between the actual and detected specifications, the system automatically feeds this back to the service platform for algorithm optimization and verification. For example, the system detects a door width of 0.5 m while the actual door width is 1.2 m.
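The plausibility check and re-detection loop above can be sketched as follows. The numeric ranges are illustrative assumptions loosely based on the feature data cited earlier (door about 0.9 m wide, window 0.9 to 1.05 m high, sofa 0.42 m high), not values fixed by the patent:

```python
# Typical household dimensions in metres (assumed illustrative ranges).
TYPICAL_SPECS = {
    "door_width": (0.8, 1.0),
    "window_height": (0.90, 1.05),
    "sofa_height": (0.40, 0.45),
}

def check_spec(item: str, measured_m: float) -> bool:
    """Is the measured dimension within the typical range for this item?"""
    lo, hi = TYPICAL_SPECS[item]
    return lo <= measured_m <= hi

def verify_with_retries(item, measure_fn, max_retries=3):
    """Re-detect an implausible measurement up to max_retries extra times;
    if it still deviates, flag one recognition anomaly so the object is
    marked for later video re-check or user confirmation."""
    for _ in range(max_retries + 1):
        measured = measure_fn()
        if check_spec(item, measured):
            return measured, False  # plausible, no anomaly
    return measured, True           # anomaly recorded
```

The patent's worked example (detected door width 0.5 m vs. actual 1.2 m) is exactly the case where this loop would flag an anomaly and defer to the user question.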
S124, establishing a spatial structure coordinate graph with the reference detection point as the coordinate origin, based on the spatial structure layout and the sensing direction of the crown-capturing service robot.
The crown-capturing service robot establishes a logical relationship among the spatial structure layout, the detection direction of the geomagnetic sensor, the detection direction, angle, and range of the robot and/or of the other rooms' normally networked radar sensors, and the coordinate origin at the reference detection point, and generates the spatial structure coordinate graph. That is, a human-body coordinate obtained from the user's movement within the robot's detection range can be mapped to the corresponding spatial position or coordinate in the indoor spatial structure layout. For example, if the coordinate of the reference detection point is (0, 0), it lies at the position close to the wall in the middle of the desk in the indoor space layout; likewise, the user's coordinate (x, y) within the robot's detection range corresponds to the midpoint of the door in the indoor spatial structure layout, denoted (x1, y1). When the robot senses that the user is present, it actively asks whether there are other frequently used scenes in the room; if the user confirms there are, the user is asked to place the robot in those other application scenes for system configuration. The robot then autonomously judges its spatial position and, combining the spatial structure coordinate graph, the position of the reference detection point, the detection angle and direction, and the direction detected by the geomagnetic sensor, generates a spatial structure coordinate graph for the robot at any spatial position, detection direction, and range.
The spatial structure layout is only an ordinary plan, and the reference detection point is only one point on that plan. The two-dimensional rectangular coordinates of the plan must be associated with the polar coordinates detected by the crown-capturing service robot, so that a person at a polar coordinate within the detection range of the robot, or of the other rooms' normally networked radar sensors, can be mapped to the corresponding rectangular two-dimensional coordinate on the indoor structure plan. Only with this spatial structure coordinate graph can the next step, sensing-area division, be realized, because each sensing area is formed from the rectangular two-dimensional coordinates of a region. Virtual sensing areas can also be set according to application-scene requirements.
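The polar-to-rectangular association described above is a standard coordinate transform. A minimal sketch follows; the axis convention (heading 0 degrees along the plan's +y axis, bearings clockwise-positive) is an assumption for illustration:

```python
import math

def polar_to_plan(range_m: float, bearing_deg: float,
                  origin_xy=(0.0, 0.0), heading_deg: float = 0.0):
    """Map a radar detection (range, bearing relative to the robot's
    boresight) to rectangular plan coordinates, given the reference
    detection point's plan position and the robot's geomagnetic heading."""
    theta = math.radians(heading_deg + bearing_deg)
    x = origin_xy[0] + range_m * math.sin(theta)
    y = origin_xy[1] + range_m * math.cos(theta)
    return x, y
```

With this transform, a user detected 2 m straight ahead of a robot at the origin lands at plan coordinate (0, 2), which is then tested against the sensing-area rectangles in the next step.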
S130, dividing sensing areas based on the spatial structure coordinate graph, and configuring trigger conditions of scene events based on the sensing areas.
After the spatial structure coordinate graph is determined, it records the logical relationships among the spatial structure layout, the detection direction of the geomagnetic sensor, the detection direction, angle, and range of the crown-capturing service robot and of the other rooms' normally networked radar sensors, and the position of the reference detection point. The indoor space is then divided into sensing areas based on the spatial structure coordinate graph, yielding coordinate areas such as a bed area, window area, desk area, door area, television area, sofa area, and projection-screen (wall) area. Different scene events are set for the different coordinate areas, and at least one trigger condition is set for each scene event; when the user's indoor behavior and state satisfy a trigger condition, the user is in that scene event. For example, first, position (coordinate) information may be the only factor in a trigger condition: a bed with one side against the wall can only be triggered from its outer side, while a dining table in the middle of the room can be triggered from all around. Second, a logical trigger condition of a scene event can be generated by combining a time factor: when a scene is triggered, the user's positioning coordinate from a custom time earlier (e.g. 1 second before) is traced; if that coordinate lies outside the scene area, the user is judged to have entered the coordinate area, otherwise to have left it; and if the user's coordinate has not changed, the user is judged to be continuously present, or the event is treated as a false alarm and discarded.
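The enter/leave logic above (compare the coordinate from one custom interval earlier with the current one against a rectangular sensing area) can be sketched directly. The event labels are illustrative names, not from the patent:

```python
def classify_event(prev_xy, curr_xy, region):
    """region = ((xmin, ymin), (xmax, ymax)) in plan coordinates.
    prev_xy is the user's coordinate one custom interval earlier
    (e.g. 1 s before); curr_xy is the current coordinate."""
    (xmin, ymin), (xmax, ymax) = region

    def inside(p):
        return xmin <= p[0] <= xmax and ymin <= p[1] <= ymax

    was, now = inside(prev_xy), inside(curr_xy)
    if not was and now:
        return "enter"
    if was and not now:
        return "leave"
    if was and now:
        # unchanged or moved within the area: continued presence
        # (an unchanged coordinate may also be discarded as a false alarm)
        return "present"
    return "outside"
```

Each sensing area (bed, desk, door, sofa, ...) would carry its own region rectangle, and the resulting event stream feeds the scene-event triggers.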
It will be appreciated that the trigger condition in this embodiment may also include a logical requirement on a series of consecutive user actions, also referred to as a logical condition. This design effectively addresses the poor timing-control user experience of the basic sensing capability of conventional sensors or radar sensors. For example, if the user has remained still in the toilet area for a user-defined time (e.g., 1 minute), the light is automatically turned off.
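A minimal sketch of such a timed logical condition (sustained stillness in an area before acting); the sample format, the epsilon motion threshold, and the helper names are illustrative assumptions:

```python
def still_long_enough(samples, in_area, dwell_s=60.0, eps=0.05):
    """samples: (timestamp, x, y) tuples, oldest first.
    True when the newest unbroken run of samples lies inside the area
    and within eps of the latest position for at least dwell_s seconds."""
    if not samples:
        return False
    t_last, x_last, y_last = samples[-1]
    run_start = t_last
    for t, x, y in reversed(samples):
        # stop at the first sample that is outside the area or has moved
        if not in_area(x, y) or abs(x - x_last) > eps or abs(y - y_last) > eps:
            break
        run_start = t
    return t_last - run_start >= dwell_s

def maybe_turn_off_light(samples, in_toilet_area, light_off):
    # act only when the logical condition (1 minute of stillness) holds
    if still_long_enough(samples, in_toilet_area, dwell_s=60.0):
        light_off()
```

Unlike a plain motion-sensor timeout, the condition here is evaluated over the whole recent action record, which is the point of the logical condition in this embodiment.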
S140, sensing user information based on the reference detection point to determine the current scene based on the user information and the trigger condition.
The user information includes the motion and pose, at different moments, of the indoor people and objects of interest. Scene determination is performed mainly from the positioning coordinates of people or objects moving indoors (the radar's normal sensing mode, as distinguished from the system-configured static-object recognition mode) together with the specific trigger conditions. For example: if the user is sensed to have fallen, an emergency pre-scene is output; if a user standing in the area beside the projection screen or television swipes upward or downward, the system judges that the displayed content should go back or page down; if the crown-capturing service robot is placed between the projection screen or television and the user, with its detection direction facing the user and its projection direction opposite to the detection direction, then when the user moves forward, backward, left or right, stands or squats within the virtual projection-screen or television area in the detection direction, the system drives the virtual person or object in the displayed content to output the corresponding movements synchronously, realizing body-perception interaction with the displayed content. That is, sensing user information based on the reference detection point to determine the current scene based on the user information and the trigger condition includes: sensing the user's positioning coordinates, and determining an action record of the user based on the positioning coordinates and the corresponding times; and determining the current scene by matching the action record against the trigger conditions.
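Matching the action record against the configured trigger conditions can be pictured as a priority-ordered lookup; the predicate representation and scene names below are illustrative assumptions, not the patent's data model:

```python
def determine_scene(action_record, triggers):
    """action_record: chronological (timestamp, x, y) positioning samples.
    triggers: dict mapping scene name -> predicate over the record,
    iterated in priority (insertion) order; the first match wins."""
    for scene, condition in triggers.items():
        if condition(action_record):
            return scene
    return None  # no scene event matched: abandon processing
```

A fall detector, a swipe detector, or the body-perception trigger described in this paragraph would each be one predicate in `triggers`.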
S150, generating an execution instruction according to preset execution logic based on the current scene, the sensing area and the user information; and, based on the execution instruction, acquiring information through an input module and/or outputting functions and services through an output module, and/or sending the execution instruction through a communication module to a connected networked device for input and/or output.
This step mainly determines, from the scene the user is in and the user information, which services the user needs. The current scene includes the pre-scene event, the scene-event trigger list and the priority; the user information includes the number of coordinates, features, physical signs, time, networked devices, device states and the logical relations among multi-room scenes; and the execution instruction includes abandoning processing (for example, when the detected action belongs to a non-user such as an animal), a control instruction, display content, an execution program, interactive voice information, reminder voice information, and so on. Because the crown-capturing service robot is provided with both an input module and an output module, it can output various scene services, including voice, screen display, projection and light, through the input module and/or the output module; this avoids requiring the robot to be paired with a separate output system in order to present a scene, simplifies system deployment, and lets the crown-capturing service robot better serve users.
For example: when a child is sensed getting up on a weekend morning, a foreign-language greeting is output; if the child is sensed playing in the room in the morning and triggers a bedside event, the foreign-language sentence "it is not sleeping time now" is output, creating a scene of initiating foreign-language conversation with the child; if the child is sensed playing in the room in the afternoon without triggering an event, foreign-language music, poems, stories or videos the child usually likes are output, so that the child plays immersed in a foreign-language environment and a feel for the language is cultivated unconsciously. As another example, if a solitary elderly person has not got up by 9 a.m., a wake-up call or music is output repeatedly for a user-defined period; when the elderly person is sensed moving in bed, a voice prompt is output asking the person to stretch a hand toward the crown-capturing service robot so that body temperature can be measured; and if a high fever is detected, the system pushes the fever information to a service platform, a community health center, a relative's mobile phone or a government service center.
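The preset execution logic behind these examples can be pictured as a rule table mapping (current scene, user information) to an execution instruction. Every rule, field name and instruction string below is an illustrative assumption distilled from the examples in this paragraph, not the patent's actual rule set:

```python
def plan_execution(scene: str, user: dict) -> dict:
    """Preset execution logic: map the current scene and user info to
    an execution instruction (abandon, voice output, push, ...)."""
    if user.get("kind") != "person":
        # detected action belongs to a non-user such as an animal
        return {"action": "abandon"}
    if scene == "bedside" and user.get("role") == "child":
        return {"action": "voice", "content": "foreign-language greeting"}
    if scene == "in_bed_late" and user.get("role") == "elder":
        return {"action": "voice", "content": "wake-up reminder"}
    if scene == "high_fever":
        return {"action": "push",
                "targets": ["service_platform", "community_health_center",
                            "relative_phone"]}
    return {"action": "none"}
```

In a real system the rules would be data-driven rather than hard-coded; the point is only that instruction generation is a deterministic function of the sensed scene and user information.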
This embodiment provides a command-free active intelligence implementation method. First, a reference detection point for placing the crown-capturing service robot is determined and the user is guided to arrange the robot at the reference detection point; indoor space sensing is then performed based on the reference detection point, and a spatial structure coordinate graph is configured according to the sensing result; a sensing area is divided based on the spatial structure coordinate graph, and a trigger condition of a scene event is configured based on the sensing area; user information is sensed based on the reference detection point so as to determine the current scene based on the user information and the trigger condition; finally, an execution instruction is generated according to preset execution logic based on the current scene and the user information, and the crown-capturing service robot executes the instruction to acquire information and/or output scene services and/or send the execution instruction to other communicatively connected execution devices. The method can analyze the user's needs in real time from the user's behavior in the specific indoor space and actively provide the corresponding functions or services, so the user no longer has to operate devices passively and issue instructions; the method is therefore more intelligent and more convenient to use.
Optionally, in some embodiments, fig. 15 is a flowchart of a command-free active intelligence implementation method according to an embodiment of the present invention. As shown in fig. 15, the method includes:
S210, determining a reference detection point for placing the crown-capturing service robot, and guiding a user to arrange the crown-capturing service robot at the reference detection point;
S220, performing indoor space sensing based on the reference detection point, and configuring a spatial structure coordinate graph according to the sensing result.
S230, dividing a sensing area based on the spatial structure coordinate graph, and configuring a trigger condition of a scene event based on the sensing area.
S240, sensing user information based on the reference detection point to determine the current scene based on the user information and the trigger condition.
S250, judging whether the current scene matches the reference detection point.
S260, if they do not match, guiding the user to adjust the pose of the crown-capturing service robot, and detecting the user's pose-adjustment operation on the robot.
S270, adjusting the spatial structure coordinate graph and the sensing area according to the pose-adjustment operation.
S280, generating an execution instruction according to preset execution logic based on the current scene, the sensing area and the user information, and, based on the execution instruction, acquiring information through the input module and/or outputting scene services through the output module and/or sending the execution instruction to other communicatively connected devices.
The present embodiment differs from the foregoing embodiment in steps S250-S280, which account for the fact that, in actual use, the limited sensing range of the sensor may make it necessary to adjust the pose of the crown-capturing service robot: for example, the user may turn the robot during use to fit an actual need, or the robot's detection angle may be below 180 degrees while multiple scenes need to be detected. The system then synchronously adjusts the robot's detection direction and range in the spatial structure coordinate graph according to the direction through which the user has rotated the robot.
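Keeping the coordinate graph consistent after the user turns the robot amounts to rotating every recorded coordinate about the reference detection point (the origin of the graph). A sketch, assuming a 2-D coordinate graph and a known rotation angle; the function names are assumptions:

```python
import math

def rotate_about_origin(x: float, y: float, angle_deg: float) -> tuple:
    """Rotate one coordinate of the spatial structure coordinate graph
    about the reference detection point by the angle the robot was turned."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def rotate_area(corners, angle_deg):
    # apply the same rotation to every corner of a sensing area,
    # so the area stays aligned with the robot's new detection direction
    return [rotate_about_origin(x, y, angle_deg) for x, y in corners]
```

Because the reference detection point is the coordinate origin, no translation is needed: a pure rotation suffices when only the robot's orientation changes.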
Optionally, in some embodiments, a countermeasure is further provided for the case where the user gives no feedback after information is collected through the input module and/or the scene service is output and/or the execution instruction is sent to other communicatively connected devices based on the execution instruction. After step S280, steps S290-S200 (not shown) are added:
S290, judging whether scene feedback of the user based on the execution instruction is sensed;
S200, if not, generating an anomaly instruction based on the current scene, and sending the anomaly instruction to an anomaly-handling device.
For example: the user is a chronic-disease patient whose medication time is 12 noon. When the user is sensed, a voice reminder to take the medicine is actively output; if, beyond a user-defined time, the user makes no interactive response and the crown-capturing service robot has not determined information such as the medicine's name and dosage, the robot records one medication anomaly for the user.
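The wait-for-feedback-then-escalate pattern of S290-S200 can be sketched as a polled timeout; the callback shape, the timeout values, and the anomaly-instruction fields are assumptions for illustration:

```python
import time

def await_scene_feedback(sense, timeout_s: float, poll_s: float = 0.5) -> dict:
    """Poll the sensing callback `sense` for user feedback after an
    output; if none arrives within the user-defined timeout, return an
    anomaly instruction destined for the anomaly-handling device."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        feedback = sense()          # e.g. voice response, gesture, vitals
        if feedback is not None:
            return {"status": "ok", "feedback": feedback}
        time.sleep(poll_s)
    # no interactive response within the window: record/escalate
    return {"status": "anomaly", "event": "no_user_response"}
```

The returned anomaly dict stands in for the "anomaly instruction" that would be sent onward (e.g. to a service platform or a relative's phone).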
Optionally, in some embodiments, to further optimize the service experience, a self-learning mechanism is provided to autonomously record the user's habits and provide targeted services. Specifically, after step S200, step S201 is further included (not shown):
S201, recording the occurrence frequency of the current scene and the frequency of scene feedback, determining the user's stage habit or knowledge-mastery level according to the occurrence frequency and the feedback frequency, and generating a benign guidance scheme according to the stage habit or knowledge-mastery level.
Specifically, if within a user-defined time the same scene is executed more than a preset user-defined number of times, while the number of times the user gives no feedback on the scene exceeds a preset negative count, the user's stage habit or knowledge-mastery level is determined. If the stage habit is benign, a benign guidance scheme is formulated to actively remind the user to execute the scene, or to abandon execution, according to a preset time threshold or the interval between occurrences of the same scene; if the stage habit is not benign, the benign guidance scheme actively reminds the user that the bad living habit needs to be corrected; and when the user executes a benign habit scene, the system actively encourages or affirms the user's behavior by voice. For example: if within one week the user goes to sleep at 1 a.m. on three nights, the system automatically registers a bad habit for that period. As another example: when the crown-capturing service robot senses a user trigger event and actively initiates foreign-language dialogue more than a user-defined number of times (e.g., 5 times) without any user feedback, the system judges that the user has not mastered the interactive foreign-language sentence and automatically adjusts the output, for instance re-outputting the interactive sentence, outputting an explanatory sentence, or outputting a native-language query sentence.
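The frequency-counting core of this self-learning mechanism can be sketched as below; the class name, thresholds and the "needs_guidance" label are illustrative assumptions:

```python
from collections import Counter

class StageHabitTracker:
    """Count scene occurrences and missing feedback within a
    user-defined window to decide whether a stage habit has formed."""
    def __init__(self, occur_threshold: int = 3, negative_threshold: int = 3):
        self.occurred = Counter()      # times each scene was executed
        self.no_feedback = Counter()   # times the user gave no feedback
        self.occur_threshold = occur_threshold
        self.negative_threshold = negative_threshold

    def record(self, scene: str, had_feedback: bool) -> None:
        self.occurred[scene] += 1
        if not had_feedback:
            self.no_feedback[scene] += 1

    def assess(self, scene: str):
        """Flag a stage habit once the scene recurs often enough with the
        user repeatedly failing to respond (e.g. three 1 a.m. nights in a
        week, or five unanswered foreign-language prompts)."""
        if (self.occurred[scene] >= self.occur_threshold
                and self.no_feedback[scene] >= self.negative_threshold):
            return "needs_guidance"
        return None
```

Whether the flagged habit is benign or not, and which guidance scheme follows, would be decided by a separate rule layer as described in the paragraph above.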
Example two
Fig. 16 is a schematic structural diagram of a command-free active intelligence implementation apparatus according to a second embodiment of the present invention. As shown in fig. 16, the command-free active intelligence implementation apparatus 800 of this embodiment includes:
the placement guide module 810, configured to determine a reference detection point for placing the crown-capturing service robot and to guide a user to arrange the crown-capturing service robot at the reference detection point;
a spatial sensing module 820, configured to perform indoor space sensing based on the reference detection point to configure a spatial structure coordinate graph according to the sensing result;
a scene configuration module 830, configured to divide a sensing area based on the spatial structure coordinate graph and to configure a trigger condition of a scene event based on the sensing area;
a user sensing module 840, configured to sense user information based on the reference detection point to determine the current scene based on the user information and the trigger condition;
and an execution module 850, configured to generate an execution instruction according to preset execution logic based on the current scene, the sensing area and the user information, and, based on the execution instruction, to acquire information through an input module and/or output a scene service through an output module and/or send the execution instruction to other communicatively connected devices.
Optionally, in some embodiments, guiding the user to arrange the crown-capturing service robot at the reference detection point includes: guiding the user, through voice, screen display, projection or light, to place the crown-capturing service robot at the reference detection point so that the back of the robot is parallel to the wall surface, and sensing the distance between the back of the robot and the wall surface.
Optionally, in some embodiments, performing indoor space sensing based on the reference detection point to configure the spatial structure layout according to the sensing result includes: determining a spatial structure layout according to the user's adjustment operations and confirmation instruction, based on a preset structure layout and/or an actual structure layout imported by the user; if no user adjustment operation or confirmation instruction is detected, acquiring an indoor space image, identifying indoor articles and spatial structure features from the image, and generating a spatial structure layout from the indoor articles, the spatial structure features and preset feature data; or, if no user adjustment operation or confirmation instruction is detected, acquiring indoor radar detection data, identifying indoor articles and spatial structure features from the radar data, and generating a spatial structure layout from the indoor articles, the spatial structure features and the preset feature data; and establishing a spatial structure coordinate graph with the reference detection point as the coordinate origin, based on the spatial structure layout and the sensing direction of the crown-capturing service robot.
Optionally, in some embodiments, the apparatus is further configured to: record the occurrence frequency of the current scene and the frequency of scene feedback, determine the user's stage habit or knowledge-mastery level according to the occurrence frequency and the feedback frequency, and generate a benign guidance scheme according to the stage habit or knowledge-mastery level.
Optionally, in some embodiments, the apparatus is further configured to: judge whether the current scene matches the reference detection point; if not, guide the user to adjust the pose of the crown-capturing service robot and detect the user's pose-adjustment operation on the robot; and adjust the spatial structure coordinate graph and the sensing area according to the pose-adjustment operation.
Optionally, in some embodiments, sensing user information based on the reference detection point to determine the current scene based on the user information and the trigger condition includes: sensing the user's positioning coordinates, and determining an action record of the user based on the positioning coordinates and the corresponding times; and determining the current scene by matching the action record against the trigger conditions.
Optionally, in some embodiments, after generating an execution instruction according to preset execution logic based on the current scene, the sensing area and the user information, and acquiring information through the input module and/or outputting a scene service through the output module and/or sending the execution instruction to other communicatively connected devices based on the execution instruction, the method further includes: judging whether scene feedback of the user based on the execution instruction is sensed; and if not, generating an anomaly instruction based on the current scene and sending the anomaly instruction to the anomaly-handling device.
The command-free active intelligence implementation apparatus provided by this embodiment of the present invention can execute the command-free active intelligence implementation method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
It is to be noted that the foregoing describes only the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will appreciate that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit; the scope of the invention is determined by the scope of the appended claims.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating it and are not intended to limit its implementations; they are neither required to be, nor can they be, exhaustive of all embodiments. Numerous obvious variations, adaptations and substitutions will occur to those skilled in the art without departing from the scope of the invention. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.
Claims (10)
1. A crown-capturing service robot, wherein the crown-capturing service robot takes the form of a human athlete in a championship-winning sports scene, and the crown-capturing service robot comprises:
a head (100), said head (100) having a light and/or a micro-projection module built therein;
a body (200), the body (200) having a built-in radar sensor to sense the space as well as human bodies and behavior;
a connecting assembly (300), by which the head (100) is detachably connected to the body (200);
a camera rotation module (400), rotatably connected to the body (200) to perform video recognition or monitoring of the environment, people or objects;
a leg assembly (600), disposed under the body (200);
a rotating arm lamp assembly (500), rotatably connected below the body (200), wherein a sensor built into the leg assembly (600) senses the environment or the user's gestures or physical signs, a radar sensor built into the body (200) senses user behavior, and the rotating arm lamp assembly (500) provides intelligent illumination according to the environment or the user's behavior, gestures or physical signs; and
a functional base (700), arranged below the leg assembly (600), wherein a sliding plate (620) is arranged on the functional base (700), and the sliding plate (620) corresponds to the leg assembly (600).
2. The crown-capturing service robot according to claim 1, wherein the connecting assembly (300) comprises:
a magnetic base, arranged on the head (100); and
a magnet block (340), arranged at the top of the body (200) and attracted to the magnetic base; and/or
The connection assembly (300) further comprises:
a cover plate (310) arranged on the top of the body (200);
a rotary base (320), provided on the cover plate (310) and connected with the head (100) so that the head (100) can rotate in a first direction; and
a rotation shaft (330), provided on the rotary base (320) and connected with the head (100) so that the head (100) can rotate in a second direction;
wherein, a projection module used for video interaction with a user is arranged on the head part (100).
3. The crown-capturing service robot according to claim 1, wherein the camera rotation module (400) comprises:
a mounting base (410), provided on the body (200);
a rotating part (420), rotatably connected to the mounting base (410);
and a camera (430), provided on the rotating part (420).
4. The crown-capturing service robot according to claim 1, wherein the rotating arm lamp assembly (500) comprises:
an arm lever (510), rotatably connected to the body (200);
an upper arc lamp (520), arranged on the arm lever (510) and used for emergency, night, color-changing scene and atmosphere illumination;
a lower bevel lamp (530), arranged below the arm lever (510) and serving as a non-blue-light, flicker-free health lamp for illumination;
a light-shielding edge (560), arranged in front of the arm lever (510) to shield light;
and an adsorption part (540), arranged on the inner vertical surface of the arm lever (510) and used to cooperate with a magnet built into the body (200) to fix the arm lever (510).
5. The crown-capturing service robot according to claim 4, wherein the body (200) is provided with a receiving notch, an arm-lever mounting bin and an arm-lever mounting seat; the arm-lever mounting seat is rotatably connected with the arm lever (510); a connecting magnet is arranged in the arm-lever mounting bin and, after the arm lever (510) is rotated upward, attracts the adsorption part (540) to fix the arm lever (510); and the receiving notch is used to hold the arm lever (510) when it is folded downward.
6. The crown-capturing service robot according to any one of claims 1 to 5, wherein the connecting assembly (300), the camera rotation module (400), the rotating arm lamp assembly (500), the leg assembly (600) and the functional base (700) are each provided with a state detection module connected with a processing unit, the state detection module being used to detect the operating state and module state of the corresponding structure.
7. The crown-capturing service robot according to any one of claims 1 to 5, wherein the leg assembly (600) is squat-shaped, the leg assembly (600) comprising:
supporting legs (610), arranged on the lower side of the body (200) and connected with the sliding plate of the functional base (700), wherein heat-dissipation and ventilation holes are formed in the supporting legs (610), and sensors are arranged in the supporting legs (610) to detect the air environment, body temperature, human bodies and/or gestures;
wherein the side of the body (200) close to the legs and the side of the functional base (700) close to the legs are both provided with heat-insulation protection modules.
8. The crown-capturing service robot according to any one of claims 1 to 5, wherein the functional base (700) comprises:
a functional table (701), arranged obliquely and provided with a plurality of functional interfaces;
a base cover (720), arranged on the bottom surface of the functional table (701) and provided with a plurality of functional communication ports;
and a display screen (710), arranged on the top surface of the functional table (701) and used to display dynamic images.
9. The crown-capturing service robot according to any one of claims 1 to 5, further comprising:
a mounting bracket (730), disposed on the base cover (720), wherein the mounting bracket (730) includes, but is not limited to, a triangular placement reinforcement bracket (732), a pincer-type fixed mounting bracket (731) and a wall-mounted fixed mounting bracket; and
an object-supporting bracket (733), mounted in front of the functional table (701) by a buckle, the object-supporting bracket (733) and the functional table (701) forming a space for supporting an object, wherein the object-supporting bracket (733) is modeled on the head of the sliding plate (620).
10. A command-free active intelligence implementation method, applied to the crown-capturing service robot according to any one of claims 1 to 9, the method comprising:
determining a reference detection point for placing the crown-capturing service robot, and guiding a user to arrange the crown-capturing service robot at the reference detection point;
performing indoor space sensing based on the reference detection point to configure a spatial structure coordinate graph according to the sensing result;
dividing a sensing area based on the spatial structure coordinate graph, and configuring a trigger condition of a scene event based on the sensing area;
sensing user information based on the reference detection point to determine a current scene based on the user information and the trigger condition;
and generating an execution instruction according to preset execution logic based on the current scene, the sensing area and the user information; acquiring information and/or outputting functions and services through an input module and/or an output module based on the execution instruction; and/or sending the execution instruction through a communication module to a connected networked device for input and/or output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210625935.3A CN114986532A (en) | 2022-06-02 | 2022-06-02 | Crown capturing service robot and non-command type active intelligence implementation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114986532A true CN114986532A (en) | 2022-09-02 |
Family
ID=83030912
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210625935.3A Pending CN114986532A (en) | 2022-06-02 | 2022-06-02 | Crown capturing service robot and non-command type active intelligence implementation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114986532A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115883274A (en) * | 2022-09-26 | 2023-03-31 | 四川启睿克科技有限公司 | httpServer-based intelligent interconnection method for realizing intelligent interconnection in active intelligent home |
CN115883274B (en) * | 2022-09-26 | 2024-05-14 | 四川启睿克科技有限公司 | Intelligent interconnection method for realizing active intelligent home based on HTTPSERVER |
CN117253340A (en) * | 2023-09-19 | 2023-12-19 | 重庆宗灿科技发展有限公司 | Robot-based intelligent accompanying system and method |
CN117253340B (en) * | 2023-09-19 | 2024-06-11 | 重庆宗灿科技发展有限公司 | Robot-based intelligent accompanying system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||