CN114371632A - Intelligent equipment control method, device, server and storage medium - Google Patents

Intelligent equipment control method, device, server and storage medium

Info

Publication number
CN114371632A
CN114371632A
Authority
CN
China
Prior art keywords
scene
target
new scene
task
new
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111643929.2A
Other languages
Chinese (zh)
Inventor
高斌 (Gao Bin)
陈莹 (Chen Ying)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd
Priority application: CN202111643929.2A
Publication: CN114371632A
PCT application: PCT/CN2022/105812 (WO2023124017A1)
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 Systems controlled by a computer
    • G05B15/02 Systems controlled by a computer, electric
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems, electric
    • G05B19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/26 Pc applications
    • G05B2219/2642 Domotique, domestic, home control, automation, smart house
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

Embodiments of the invention provide an intelligent device control method, apparatus, server and storage medium. The method includes: shooting a new scene that the intelligent device has currently entered to obtain an image corresponding to the new scene; comparing the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes to determine a target reference scene that matches the new scene, where the modeling data includes at least one of images obtained by shooting a reference scene at different angles or positions and a three-dimensional scene model corresponding to the reference scene; and controlling the intelligent device to execute a target task in the new scene based on the target reference scene. With this method, when the intelligent device first enters a new scene, a highly similar target reference scene can be found for it, and the target task can then be executed in the new scene with reference to that target reference scene, allowing the device to execute the target task efficiently upon entering the new scene.

Description

Intelligent equipment control method, device, server and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an intelligent device control method, apparatus, server and storage medium.
Background
In the related art, remote control of a smart device, which may be a robot for example, can be implemented through a cloud server. The cloud server generates a control instruction for the smart device according to the environment data collected and reported by the device and the task currently to be executed, and sends the instruction to the device, which then acts on it.
If a smart device enters a new scene for which no modeling data exists on the cloud server, how the device can still execute its target task efficiently is a problem that urgently needs to be solved.
Disclosure of Invention
Embodiments of the invention provide an intelligent device control method, apparatus, server and storage medium for controlling an intelligent device to execute a target task efficiently.
In a first aspect, an embodiment of the present invention provides an intelligent device control method, where the method includes:
shooting a new scene which the intelligent equipment enters at present to obtain an image corresponding to the new scene;
comparing and matching the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes to determine a target reference scene matched with the new scene, wherein the modeling data comprises at least one of images obtained by shooting the reference scenes at different angles or different positions and three-dimensional scene models corresponding to the reference scenes;
and controlling the intelligent equipment to execute a target task in the new scene based on the target reference scene.
Optionally, the target task is a navigation task;
the controlling the intelligent device to execute the target task in the new scene based on the target reference scene comprises:
and planning a traveling route of the intelligent device for executing the navigation task in the new scene based on the modeling data corresponding to the target reference scene.
Optionally, the target task is a map scanning task, and the map scanning task is a task for creating modeling data corresponding to the new scene;
the controlling the intelligent device to execute the target task in the new scene based on the target reference scene comprises:
determining target map-scanning strategy data corresponding to the target reference scene according to a preset correspondence between reference scenes and map-scanning strategy data;
controlling the intelligent device to execute the map-scanning task based on the target map-scanning strategy data.
Optionally, the map-scanning strategy data includes at least one of:
travel speed, travel angle, travel route, travel strategy;
wherein the travel strategy instructs the intelligent device to travel on the left side, the right side, or the middle of an aisle.
Optionally, the new scene and the target reference scene are rooms similar in structure and furnishing.
Optionally, before comparing and matching the image corresponding to the new scene with the modeling data corresponding to a plurality of preset reference scenes, the method further includes:
the method comprises the steps of obtaining images obtained by shooting a reference scene at different angles or at different positions through a terminal of a shooting person in the reference scene, and recording a walking route of the shooting person in the process of shooting the images as a traveling route in the scanning strategy data corresponding to the reference scene.
Optionally, the method further comprises:
if the image corresponding to the new scene does not match the modeling data corresponding to any of the plurality of preset reference scenes, obtaining modeling data corresponding to the new scene provided by a manager of the new scene, wherein the modeling data corresponding to the new scene is a three-dimensional scene model;
acquiring control behavior data generated when background personnel control a virtual twin corresponding to the intelligent device to execute the target task in the three-dimensional scene model;
and controlling the intelligent device to execute the target task in the new scene based on the control behavior data.
Optionally, the method further comprises:
during travel while the intelligent device executes the target task, detecting, in the area where the device is currently located, parallel lines that match the travel direction corresponding to a target manipulation;
determining the travel angle corresponding to the target manipulation;
and if a target included angle exists between the travel angle corresponding to the target manipulation and the parallel lines, controlling the intelligent device to turn back by the target included angle and continue traveling.
Optionally, the method further comprises:
during travel while the intelligent device executes the target task, if an instruction input by background personnel to control the device to turn is received, detecting whether an intersection exists in the device's travel direction;
and if an intersection exists in the travel direction, planning, based on modeling data corresponding to the intersection, a travel route that passes through the intersection toward the turning direction corresponding to the turn instruction.
In a second aspect, an embodiment of the present invention provides an intelligent device control apparatus, including:
the shooting module is used for shooting a new scene which the intelligent equipment enters at present to obtain an image corresponding to the new scene;
the matching module is used for comparing and matching the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes to determine a target reference scene matched with the new scene, wherein the modeling data comprises at least one of images obtained by shooting the reference scenes at different angles or different positions and three-dimensional scene models corresponding to the reference scenes;
and the control module is used for controlling the intelligent equipment to execute the target task in the new scene based on the target reference scene.
Optionally, the target task is a navigation task;
the control module is configured to:
and planning a traveling route of the intelligent device for executing the navigation task in the new scene based on the modeling data corresponding to the target reference scene.
Optionally, the target task is a map scanning task, and the map scanning task is a task for creating modeling data corresponding to the new scene;
the control module is configured to:
determining target map-scanning strategy data corresponding to the target reference scene according to a preset correspondence between reference scenes and map-scanning strategy data;
controlling the intelligent device to execute the map-scanning task based on the target map-scanning strategy data.
Optionally, the map-scanning strategy data includes at least one of:
travel speed, travel angle, travel route, travel strategy;
wherein the travel strategy instructs the intelligent device to travel on the left side, the right side, or the middle of an aisle.
Optionally, the new scene and the target reference scene are rooms similar in structure and furnishing.
Optionally, the shooting module is further configured to:
the method comprises the steps of obtaining images obtained by shooting a reference scene at different angles or at different positions through a terminal of a shooting person in the reference scene, and recording a walking route of the shooting person in the process of shooting the images as a traveling route in the scanning strategy data corresponding to the reference scene.
Optionally, the control module is further configured to:
if the image corresponding to the new scene does not match the modeling data corresponding to any of the plurality of preset reference scenes, obtain modeling data corresponding to the new scene provided by a manager of the new scene, wherein the modeling data corresponding to the new scene is a three-dimensional scene model;
acquire control behavior data generated when background personnel control a virtual twin corresponding to the intelligent device to execute the target task in the three-dimensional scene model;
and control the intelligent device to execute the target task in the new scene based on the control behavior data.
Optionally, the control module is further configured to:
during travel while the intelligent device executes the target task, detect, in the area where the device is currently located, parallel lines that match the travel direction corresponding to a target manipulation;
determine the travel angle corresponding to the target manipulation;
and if a target included angle exists between the travel angle corresponding to the target manipulation and the parallel lines, control the intelligent device to turn back by the target included angle and continue traveling.
Optionally, the control module is further configured to:
during travel while the intelligent device executes the target task, if an instruction input by background personnel to control the device to turn is received, detect whether an intersection exists in the device's travel direction;
and if an intersection exists in the travel direction, plan, based on modeling data corresponding to the intersection, a travel route that passes through the intersection toward the turning direction corresponding to the turn instruction.
In a third aspect, an embodiment of the present invention provides a server, including a processor and a memory, where the memory stores executable code that, when executed by the processor, causes the processor to implement at least the intelligent device control method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of a server, causes the processor to implement at least the smart device control method of the first aspect.
With the method and apparatus, when the intelligent device first enters a new scene for which the cloud server has no modeling data, an image of the new scene can be shot, and a target reference scene similar to the new scene can be found by comparing that image with the data corresponding to a plurality of preset reference scenes. The target task can then be executed in the new scene with reference to the target reference scene. In this way, the intelligent device can execute the target task efficiently upon first entering a new scene.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a control method for an intelligent device according to an embodiment of the present invention;
fig. 2 is a schematic view of a scenario for operating an intelligent device according to an embodiment of the present invention;
fig. 3 is a schematic view of another scenario for operating an intelligent device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an intelligent device control apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
The embodiment of the invention provides an intelligent device control method which can be applied to a cloud server. As shown in fig. 1, the intelligent device control method provided in the embodiment of the present invention may include the following steps:
101. Shoot the new scene that the intelligent device has currently entered to obtain an image corresponding to the new scene.
102. Compare the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes to determine a target reference scene matching the new scene, where the modeling data includes at least one of images obtained by shooting a reference scene at different angles or positions and a three-dimensional scene model corresponding to the reference scene.
103. Control the intelligent device to execute the target task in the new scene based on the target reference scene.
The scenes may be of various kinds, such as hospitals, hotels, schools, nursing homes, office buildings, factories, and outdoor environments.
The intelligent device may be, for example, a robot. It can execute different tasks, or provide different services, in different scenes, such as sweeping, cleaning, security, and delivery. Before a service is provided, the cloud server needs to know the scene so that it can remotely control the device to reach a designated place, navigating and avoiding obstacles along the way. Modeling data corresponding to the scene therefore needs to be created on the cloud server, from which a travel route for the device's task can be planned.
If the intelligent device enters a new scene for which no modeling data exists on the cloud server, how the device should execute the target task is the problem addressed by the embodiments of the invention.
When the intelligent device enters a new scene, an image capture apparatus provided on the device can be activated to collect an image of whatever part of the new scene the device is facing. For example, if the device enters a room facing a table, it can capture an image of the table.
After the image corresponding to the new scene is obtained through shooting, the image corresponding to the new scene and the modeling data corresponding to the plurality of preset reference scenes can be compared and matched to determine the target reference scene matched with the new scene.
The modeling data includes images obtained by shooting the reference scene at different angles or at different positions, three-dimensional scene models corresponding to the reference scene, and the like.
It should be noted that the target reference scene matching the new scene may be a scene with a very high similarity to the new scene. For example, the new scene and the target reference scene may be rooms similar in structure and furnishing.
They may be, for instance, different wards of the same hospital, different guest rooms of the same hotel, different classrooms of the same school, or different workshops of the same factory. Alternatively, the new scene and the target reference scene may be different floors of the same building.
What characterizes the new scene and the target reference scene is the very high similarity between them.
The plurality of preset reference scenes in the embodiment of the invention may be scenes that this intelligent device, or other intelligent devices, have already visited. When an intelligent device reaches a reference scene, it scans that scene, and the modeling data corresponding to the reference scene is obtained from the scan.
On this basis, the similarity between the image corresponding to the new scene and the modeling data corresponding to each preset reference scene can be calculated, and a reference scene whose similarity is higher than a preset threshold is determined as the target reference scene matching the new scene.
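As an illustrative sketch only (the patent does not prescribe a concrete matching algorithm), the comparison in step 102 could proceed roughly as follows; the embedding input, the ReferenceScene container, and the 0.8 threshold are assumptions introduced for illustration, not taken from the patent:

import numpy as np
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReferenceScene:
    name: str
    # Embeddings of the images shot at different angles/positions in this scene.
    image_embeddings: List[np.ndarray] = field(default_factory=list)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_target_reference_scene(new_scene_embedding: np.ndarray,
                                 reference_scenes: List[ReferenceScene],
                                 threshold: float = 0.8) -> Optional[ReferenceScene]:
    # A reference scene matches if at least one of its images (e.g. one shot
    # from a similar angle and position) exceeds the similarity threshold.
    best_scene, best_score = None, threshold
    for scene in reference_scenes:
        if not scene.image_embeddings:
            continue
        score = max(cosine_similarity(new_scene_embedding, e)
                    for e in scene.image_embeddings)
        if score > best_score:
            best_scene, best_score = scene, score
    return best_scene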
For example, assume the intelligent device has just completed a map scan in room 101 of hotel A and has acquired images of the various positions and corners of room 101, say 10 images. The device now comes to scan room 102 of hotel A, and after entering room 102 it can shoot an image P of the room. The similarity between image P and each of the 10 images of room 101 is then calculated. Suppose one of the 10 images, P', was shot at an angle and position in room 101 close to those at which P was shot in room 102; the similarity between P' and P will then exceed the preset threshold, and room 102 is determined to match room 101.
Of course, besides the images corresponding to room 101, if the intelligent device has passed through other scenes, the similarity between image P and the images corresponding to those scenes must also be calculated. In theory, however, since the different rooms of hotel A are similar, room 101 will be matched to room 102, while the other scenes, not being similar to room 102, will yield low image similarities and will not be matched.
After the target reference scene is determined in the manner described above, the intelligent device may be controlled to execute the target task in the new scene based on the target reference scene.
The target task may include a navigation task or a map-scanning task, where the map-scanning task is the task of creating modeling data corresponding to the new scene.
Optionally, when the target task is a navigation task, controlling the intelligent device to execute the target task in the new scene based on the target reference scene may be implemented as: planning a travel route for the device to execute the navigation task in the new scene based on the modeling data corresponding to the target reference scene.
Continuing the above example: since the structure and the positions of passages, beds, bedside cabinets, television cabinets and the like are similar or identical across different rooms of hotel A, and the device has already map-scanned room 101, the modeling data corresponding to room 101 is available. That modeling data is also usable in room 102, so the device's travel route for the navigation task in room 102 can be planned based on the modeling data of room 101.
For example, in room 101 the intelligent device needs to travel 1 meter forward and then 0.5 meter to the right to reach the front of the television cabinet to fetch a cup; to reach the front of the television cabinet in room 102, it likewise travels 1 meter forward and then 0.5 meter to the right.
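A minimal sketch, under assumptions not stated in the patent, of how the reference route could be reused: if waypoints are expressed relative to the room entrance, the route recorded in the reference scene can be replayed unchanged in the structurally similar new scene. Waypoint and route_for_new_scene are illustrative names:

from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    forward_m: float  # metres forward from the room entrance
    right_m: float    # metres to the right of the entrance axis

def route_for_new_scene(reference_route: List[Waypoint]) -> List[Waypoint]:
    # Because the two rooms are assumed similar in structure and furnishing,
    # the route recorded in the reference scene (e.g. room 101) is replayed
    # unchanged in the new scene (e.g. room 102).
    return [Waypoint(w.forward_m, w.right_m) for w in reference_route]

# The worked example above: forward 1 m, then 0.5 m to the right.
route_101 = [Waypoint(1.0, 0.0), Waypoint(1.0, 0.5)]
route_102 = route_for_new_scene(route_101)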
Optionally, when the target task is a map-scanning task, controlling the intelligent device to execute the target task in the new scene based on the target reference scene may be implemented as: determining target map-scanning strategy data corresponding to the target reference scene according to a preset correspondence between reference scenes and map-scanning strategy data, and controlling the device to execute the map-scanning task based on the target map-scanning strategy data.
Optionally, the map-scanning strategy data may include at least one of the following: travel speed, travel angle, travel route, and travel strategy, where the travel strategy instructs the intelligent device to travel on the left side, the right side, or the middle of an aisle.
It can be understood that the intelligent device has already performed a map scan in each reference scene and therefore used map-scanning strategy data when executing that task; the corresponding strategy data can be recorded and stored while the device executes the map-scanning task in each reference scene. The strategy data describes how the device performed the map-scanning task in the reference scene.
Since the similarity between the new scene and the target reference scene is very high, the intelligent device can execute the map-scanning task in the new scene in the same way it executed it in the target reference scene. For example, if the device performed the map-scanning task along travel route X in room 101 of hotel A, it can still follow travel route X when scanning room 102.
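Purely as an illustration (the patent leaves the storage format of the map-scanning strategy data open), the correspondence between reference scenes and strategy data could be held as follows; ScanStrategy, AislePosition, and strategy_for are hypothetical names:

from dataclasses import dataclass
from enum import Enum
from typing import Dict, List, Tuple

class AislePosition(Enum):
    LEFT = "left"
    MIDDLE = "middle"
    RIGHT = "right"

@dataclass
class ScanStrategy:
    travel_speed_mps: float                  # travel speed, metres per second
    travel_angle_deg: float                  # travel angle, degrees
    travel_route: List[Tuple[float, float]]  # waypoints (x, y) in metres
    aisle_position: AislePosition            # travel strategy: where in the aisle

# Correspondence between reference scenes and strategy data, filled in as the
# device records each completed map scan.
scan_strategies: Dict[str, ScanStrategy] = {}

def strategy_for(target_reference_scene: str) -> ScanStrategy:
    # The strategy recorded for the matched reference scene (e.g. room 101)
    # is reused as-is for the new scene (e.g. room 102).
    return scan_strategies[target_reference_scene]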
Optionally, if the intelligent device has not previously executed a map-scanning task in any reference scene similar to the new scene, images obtained by a photographer shooting a reference scene at different angles or positions with a handheld terminal may also be acquired, and the route the photographer walked while shooting the images is recorded as the travel route in the map-scanning strategy data corresponding to that reference scene.
The terminal can be a mobile phone, a tablet computer and the like.
It will be appreciated that the photographer may enter a target reference scene similar to the new scene and shoot that reference scene at different angles or positions, or may enter the new scene directly and shoot the new scene at different angles or positions.
If the shooting is done in the target reference scene, the modeling data corresponding to the target reference scene is obtained, after which the target reference scene can be compared and matched with the new scene.
While the photographer shoots the target reference scene, the photographer's walking route can be recorded, so that the intelligent device can travel along the same route during its map scan. Since a person's walking speed differs greatly from a device's travel speed, parameters such as the photographer's walking speed need not be taken as reference.
If the shooting is done in the new scene itself, the matching step can be skipped: the route the photographer walked in the new scene is recorded directly, and the intelligent device travels along that same route during its map scan.
It should be noted that in this case, because manually shot images of the new scene fit the intelligent device's algorithms less well than images obtained by the device's own scan, the device still needs to execute the map-scanning task itself to obtain images of the new scene, rather than relying on the images shot by the photographer.
Optionally, if the image corresponding to the new scene does not match the modeling data corresponding to any of the plurality of preset reference scenes, modeling data corresponding to the new scene provided by a manager of the new scene can be obtained, where that modeling data is a three-dimensional scene model; control behavior data generated when background personnel control a virtual twin corresponding to the intelligent device to execute the target task in the three-dimensional scene model is acquired; and the device is controlled to execute the target task in the new scene based on the control behavior data.
It should be noted that some new scenes have corresponding architectural drawings, pre-built three-dimensional scene models, and the like. For example, some hotels display their three-dimensional scene model on the hotel's official website so that guests can get a clearer picture of the hotel's style and the rooms available. In such cases, the existing three-dimensional scene model corresponding to the new scene can be obtained directly, and the intelligent device can be mapped into the model, i.e., a virtual twin corresponding to the device is added to the three-dimensional scene model. Background personnel then control the virtual twin to execute the target task inside the model, and the control behavior data generated during this process is recorded. Finally, based on the control behavior data, the intelligent device can be made to reproduce the task-execution process in the real new scene, thereby executing the target task there. The three-dimensional scene model may also be a digital-twin world.
It should be noted that while background personnel operate the virtual twin, the twin may collide with obstacles in the three-dimensional scene model, and erroneous manipulations can be corrected promptly when a collision occurs. Because such collisions occur neither in the real new scene nor on the real intelligent device, no parts of the device are damaged. By having the virtual twin simulate the task-execution process in the three-dimensional scene model, collisions between the intelligent device and real obstacles in the new scene can be effectively avoided, ultimately reducing wear on the device and the cost of executing the target task.
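A minimal sketch, under assumptions not stated in the patent, of recording control behavior data from the virtual twin and replaying it on the real device; ControlEvent, record_twin_manipulation, and replay_on_device are illustrative names:

import time
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ControlEvent:
    timestamp: float  # seconds since the manipulation session started
    command: str      # e.g. "forward", "turn_left"
    value: float      # metres for moves, degrees for turns

def record_twin_manipulation(events: List[ControlEvent],
                             command: str, value: float, t0: float) -> None:
    # Called each time the background operator issues a command to the twin.
    # Collisions in the virtual model can be corrected here at no hardware cost.
    events.append(ControlEvent(time.monotonic() - t0, command, value))

def replay_on_device(events: List[ControlEvent],
                     send_command: Callable[[str, float], None]) -> None:
    # Reproduce the recorded manipulation on the real device in the real new
    # scene, preserving the original timing between commands.
    start = time.monotonic()
    for e in events:
        delay = e.timestamp - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        send_command(e.command, e.value)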
While the intelligent device executes the target task based on the target reference scene, collisions should be avoided to further improve the safety of task execution. Optionally, the method provided in the embodiment of the invention may further include: during travel while the device executes the target task, detecting, in the area where the device is currently located, parallel lines that match the travel direction corresponding to a target manipulation; determining the travel angle corresponding to the target manipulation; and, if a target included angle exists between that travel angle and the parallel lines, controlling the device to turn back by the target included angle and continue traveling.
As shown in fig. 2, the parallel lines on both sides of the left diagram represent wall surfaces, with an aisle between them. The intelligent device is traveling at an angle that intersects the two wall surfaces; without timely correction it would eventually collide with the left wall. Assuming the target included angle between the travel angle and the left wall is 15 degrees, the device's travel angle can be automatically turned back 15 degrees clockwise so that the device travels parallel to the walls, as shown in the right diagram of fig. 2.
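A minimal sketch of the angle callback just described, assuming the wall lines and the travel angle are already expressed in the same degree-based frame; heading_correction_deg is an illustrative name:

def heading_correction_deg(travel_angle_deg: float, wall_angle_deg: float) -> float:
    """Angle (in degrees) by which the device should turn back so that it
    travels parallel to the detected wall lines; positive means
    counter-clockwise, negative means clockwise."""
    # Normalise the included angle into (-180, 180] so the device always
    # takes the shorter turn back to the parallel heading.
    included = (travel_angle_deg - wall_angle_deg + 180.0) % 360.0 - 180.0
    return -included

# Fig. 2 example: heading 15 degrees into the left wall relative to the wall
# direction, so the device turns back 15 degrees clockwise.
assert heading_correction_deg(15.0, 0.0) == -15.0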
Optionally, the method provided in the embodiment of the invention may further include: during travel while the intelligent device executes the target task, if an instruction input by background personnel to control the device to turn is received, detecting whether an intersection exists in the device's travel direction; and, if so, planning, based on modeling data corresponding to the intersection, a travel route that passes through the intersection toward the turning direction corresponding to the turn instruction.
While the intelligent device executes the target task with reference to the target reference scene, background personnel can watch the execution through a monitoring view. Understandably, since the device executes the task only with reference to a highly similar target reference scene, if the positions of some obstacles differ between the target reference scene and the new scene, the device cannot follow the reference scene completely and must be adjusted in time. At that point background personnel can intervene and send corresponding control instructions to the device, for example an instruction to turn left or right.
As shown in fig. 3, suppose the background personnel operate the intelligent device up to an intersection and send it a right-turn instruction, from which it can be determined that the device needs to travel to the right after passing through the intersection. Understandably, a variety of sensors may be provided on the device, such as depth cameras, lidar, ultrasonic sensors, and gyroscopes. These sensors can detect whether an intersection really exists ahead; if so, modeling data corresponding to the intersection is determined from the sensor data, and a target travel route that passes through the intersection toward the right is planned based on that modeling data.
As shown in the left diagram of fig. 3, if the device is steered to the right manually, the overall turning route is not smooth. In a bad case, background personnel may need to adjust the travel angle several times, or may find the device about to hit a wall at the corner and finally have to reverse it and advance again to get it through the intersection toward the right. With the scheme provided by the embodiment of the invention, the device can automatically plan a smooth target travel route; since no repeated angle adjustments or reversing are needed, this route is also the most efficient way through the intersection toward the right.
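As an illustrative sketch only (the patent does not specify how the smooth route is generated), a quarter-circle arc is one simple way to plan a smooth right turn through an intersection; smooth_right_turn, the 0.5 m radius, and the coordinate convention are assumptions:

import math
from typing import List, Tuple

def smooth_right_turn(entry: Tuple[float, float],
                      radius: float = 0.5,
                      n_points: int = 10) -> List[Tuple[float, float]]:
    """Waypoints for a quarter-circle right turn starting at `entry`,
    assuming the device initially travels along +y and ends travelling
    along +x."""
    x0, y0 = entry
    cx, cy = x0 + radius, y0  # turn centre, to the device's right
    waypoints = []
    for i in range(n_points + 1):
        # Sweep the circle angle from 180 degrees down to 90 degrees, which
        # carries the tangent direction smoothly from +y around to +x.
        theta = math.pi - (math.pi / 2) * (i / n_points)
        waypoints.append((cx + radius * math.cos(theta),
                          cy + radius * math.sin(theta)))
    return waypoints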
With the method and apparatus, when the intelligent device first enters a new scene for which the cloud server has no modeling data, an image of the new scene can be shot, and a target reference scene similar to the new scene can be found by comparing that image with the data corresponding to a plurality of preset reference scenes. The target task can then be executed in the new scene with reference to the target reference scene. In this way, the intelligent device can execute the target task efficiently upon first entering a new scene.
The smart device control apparatus of one or more embodiments of the present invention is described in detail below. Those skilled in the art will appreciate that such an apparatus can be constructed from commercially available hardware components configured according to the steps taught in this scheme.
Fig. 4 is a schematic structural diagram of an intelligent device control apparatus according to an embodiment of the present invention, and as shown in fig. 4, the apparatus includes:
the shooting module 41 is configured to shoot a new scene currently entered by the smart device to obtain an image corresponding to the new scene;
a matching module 42, configured to compare and match the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes to determine a target reference scene matched with the new scene, where the modeling data includes at least one of images obtained by shooting the reference scenes at different angles or at different positions and three-dimensional scene models corresponding to the reference scenes;
a control module 43, configured to control the smart device to execute a target task in the new scene based on the target reference scene.
Optionally, the target task is a navigation task;
the control module 43 is configured to:
and planning a traveling route of the intelligent device for executing the navigation task in the new scene based on the modeling data corresponding to the target reference scene.
Optionally, the target task is a map scanning task, and the map scanning task is a task for creating modeling data corresponding to the new scene;
the control module 43 is configured to:
determining target map-scanning strategy data corresponding to the target reference scene according to a preset correspondence between reference scenes and map-scanning strategy data;
controlling the intelligent device to execute the map-scanning task based on the target map-scanning strategy data.
Optionally, the map-scanning strategy data includes at least one of:
travel speed, travel angle, travel route, travel strategy;
wherein the travel strategy instructs the intelligent device to travel on the left side, the right side, or the middle of an aisle.
Optionally, the new scene and the target reference scene are rooms similar in structure and furnishing.
Optionally, the shooting module 41 is further configured to:
the method comprises the steps of obtaining images obtained by shooting a reference scene at different angles or at different positions through a terminal of a shooting person in the reference scene, and recording a walking route of the shooting person in the process of shooting the images as a traveling route in the scanning strategy data corresponding to the reference scene.
Optionally, the control module 43 is further configured to:
if the image corresponding to the new scene does not match the modeling data corresponding to any of the plurality of preset reference scenes, obtain modeling data corresponding to the new scene provided by a manager of the new scene, wherein the modeling data corresponding to the new scene is a three-dimensional scene model;
acquire control behavior data generated when background personnel control a virtual twin corresponding to the intelligent device to execute the target task in the three-dimensional scene model;
and control the intelligent device to execute the target task in the new scene based on the control behavior data.
Optionally, the control module 43 is further configured to:
during travel while the intelligent device executes the target task, detect, in the area where the device is currently located, parallel lines that match the travel direction corresponding to a target manipulation;
determine the travel angle corresponding to the target manipulation;
and if a target included angle exists between the travel angle corresponding to the target manipulation and the parallel lines, control the intelligent device to turn back by the target included angle and continue traveling.
Optionally, the control module 43 is further configured to:
during travel while the intelligent device executes the target task, if an instruction input by background personnel to control the device to turn is received, detect whether an intersection exists in the device's travel direction;
and if an intersection exists in the travel direction, plan, based on modeling data corresponding to the intersection, a travel route that passes through the intersection toward the turning direction corresponding to the turn instruction.
The apparatus shown in fig. 4 may execute the intelligent device control method provided in the foregoing embodiments shown in fig. 1 to fig. 3, and the detailed execution process and technical effect refer to the description in the foregoing embodiments, which are not described herein again.
In a possible design, the smart device control apparatus shown in fig. 4 may be implemented as a server. As shown in fig. 5, the server may include a processor 91 and a memory 92, where the memory 92 stores executable code that, when executed by the processor 91, causes the processor 91 to implement at least the intelligent device control method provided in the embodiments shown in fig. 1 to fig. 3.
Optionally, the server may further include a communication interface 93 for communicating with other devices.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium having executable codes stored thereon, which when executed by a processor of a server, cause the processor to implement at least the smart device control method provided in the foregoing embodiments shown in fig. 1 to 3.
The apparatus embodiments described above are merely illustrative; units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
From the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented with the addition of a necessary general hardware platform, or by a combination of hardware and software. Based on this understanding, the parts of the above technical solutions that in essence contribute to the prior art may be embodied in the form of a computer program product, which may be stored on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The intelligent device control method provided in the embodiments of the present invention may be executed by a program or software, which may be provided by the network side. The server mentioned in the foregoing embodiments may download the program or software to a local nonvolatile storage medium; when the method needs to be executed, the CPU reads the program or software into memory and runs it, thereby implementing the intelligent device control method provided in the foregoing embodiments. The execution process may refer to fig. 1 to fig. 3.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. An intelligent device control method, comprising:
shooting a new scene which the intelligent equipment enters at present to obtain an image corresponding to the new scene;
comparing and matching the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes to determine a target reference scene matched with the new scene, wherein the modeling data comprises at least one of images obtained by shooting the reference scenes at different angles or different positions and three-dimensional scene models corresponding to the reference scenes;
and controlling the intelligent equipment to execute a target task in the new scene based on the target reference scene.
2. The method of claim 1, wherein the target task is a navigation task;
the controlling the intelligent device to execute the target task in the new scene based on the target reference scene comprises:
and planning a traveling route of the intelligent device for executing the navigation task in the new scene based on the modeling data corresponding to the target reference scene.
3. The method according to claim 1, wherein the target task is a map-scanning task, the map-scanning task being a task of creating modeling data corresponding to the new scene;
the controlling the intelligent device to execute the target task in the new scene based on the target reference scene comprises:
determining target map-scanning strategy data corresponding to the target reference scene according to a preset correspondence between reference scenes and map-scanning strategy data;
controlling the intelligent device to execute the map-scanning task based on the target map-scanning strategy data.
4. The method according to claim 3, wherein the map-scanning strategy data comprises at least one of:
travel speed, travel angle, travel route, travel strategy;
wherein the travel strategy instructs the intelligent device to travel on the left side, the right side, or the middle of an aisle.
5. The method of claim 1, wherein the new scene and the target reference scene are rooms similar in structure and furnishing.
6. The method of claim 1, wherein before comparing and matching the image corresponding to the new scene with the modeling data corresponding to a plurality of preset reference scenes, the method further comprises:
the method comprises the steps of obtaining images obtained by shooting a reference scene at different angles or at different positions through a terminal of a shooting person in the reference scene, and recording a walking route of the shooting person in the process of shooting the images as a traveling route in the scanning strategy data corresponding to the reference scene.
7. The method of claim 1, further comprising:
if the image corresponding to the new scene does not match the modeling data corresponding to any of the plurality of preset reference scenes, obtaining modeling data corresponding to the new scene provided by a manager of the new scene, wherein the modeling data corresponding to the new scene is a three-dimensional scene model;
acquiring control behavior data generated when background personnel control a virtual twin corresponding to the intelligent device to execute the target task in the three-dimensional scene model;
and controlling the intelligent device to execute the target task in the new scene based on the control behavior data.
8. The method of claim 1, further comprising:
during travel while the intelligent device executes the target task, detecting, in the area where the device is currently located, parallel lines that match the travel direction corresponding to a target manipulation;
determining the travel angle corresponding to the target manipulation;
and if a target included angle exists between the travel angle corresponding to the target manipulation and the parallel lines, controlling the intelligent device to turn back by the target included angle and continue traveling.
9. The method of claim 1, further comprising:
during travel while the intelligent device executes the target task, if an instruction input by background personnel to control the device to turn is received, detecting whether an intersection exists in the device's travel direction;
and if an intersection exists in the travel direction, planning, based on modeling data corresponding to the intersection, a travel route that passes through the intersection toward the turning direction corresponding to the turn instruction.
10. An intelligent device control apparatus, comprising:
the shooting module is used for shooting a new scene which the intelligent equipment enters at present to obtain an image corresponding to the new scene;
the matching module is used for comparing and matching the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes to determine a target reference scene matched with the new scene, wherein the modeling data comprises at least one of images obtained by shooting the reference scenes at different angles or different positions and three-dimensional scene models corresponding to the reference scenes;
and the control module is used for controlling the intelligent equipment to execute the target task in the new scene based on the target reference scene.
11. A server, comprising: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform the smart device control method of any of claims 1-9.
12. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of a server, causes the processor to perform the smart device control method of any one of claims 1-9.
CN202111643929.2A 2021-12-29 2021-12-29 Intelligent equipment control method, device, server and storage medium Pending CN114371632A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111643929.2A CN114371632A (en) 2021-12-29 2021-12-29 Intelligent equipment control method, device, server and storage medium
PCT/CN2022/105812 WO2023124017A1 (en) 2021-12-29 2022-07-14 Intelligent device control method and apparatus, and server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111643929.2A CN114371632A (en) 2021-12-29 2021-12-29 Intelligent equipment control method, device, server and storage medium

Publications (1)

Publication Number Publication Date
CN114371632A 2022-04-19

Family

ID=81141749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111643929.2A Pending CN114371632A (en) 2021-12-29 2021-12-29 Intelligent equipment control method, device, server and storage medium

Country Status (2)

Country Link
CN (1) CN114371632A (en)
WO (1) WO2023124017A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115847488A (en) * 2023-02-07 2023-03-28 成都秦川物联网科技股份有限公司 Industrial Internet of things system for cooperative robot monitoring and control method
WO2023124017A1 (en) * 2021-12-29 2023-07-06 达闼机器人股份有限公司 Intelligent device control method and apparatus, and server and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106933227A (en) * 2017-03-31 2017-07-07 联想(北京)有限公司 The method and electronic equipment of a kind of guiding intelligent robot
CN108638062A (en) * 2018-05-09 2018-10-12 科沃斯商用机器人有限公司 Robot localization method, apparatus, positioning device and storage medium
CN109906435A (en) * 2016-11-08 2019-06-18 夏普株式会社 Mobile member control apparatus and moving body control program
CN110457406A (en) * 2018-05-02 2019-11-15 北京京东尚科信息技术有限公司 Map constructing method, device and computer readable storage medium
CN110533553A (en) * 2018-05-25 2019-12-03 阿里巴巴集团控股有限公司 Service providing method and device
CN110569913A (en) * 2019-09-11 2019-12-13 北京云迹科技有限公司 Scene classifier training method and device, scene recognition method and robot
CN110765525A (en) * 2019-10-18 2020-02-07 Oppo广东移动通信有限公司 Method, device, electronic equipment and medium for generating scene picture
CN110889871A (en) * 2019-12-03 2020-03-17 广东利元亨智能装备股份有限公司 Robot running method and device and robot
CN112183285A (en) * 2020-09-22 2021-01-05 合肥科大智能机器人技术有限公司 3D point cloud map fusion method and system for transformer substation inspection robot
CN112729321A (en) * 2020-12-28 2021-04-30 上海有个机器人有限公司 Robot map scanning method and device, storage medium and robot
CN112947424A (en) * 2021-02-01 2021-06-11 国网安徽省电力有限公司淮南供电公司 Distribution network operation robot autonomous operation path planning method and distribution network operation system
CN113050649A (en) * 2021-03-24 2021-06-29 西安科技大学 Remote control system and method for inspection robot driven by digital twin
CN113240031A (en) * 2021-05-25 2021-08-10 中德(珠海)人工智能研究院有限公司 Panoramic image feature point matching model training method and device and server
CN113263497A (en) * 2021-04-07 2021-08-17 新兴际华科技发展有限公司 Remote intelligent man-machine interaction method for fire-fighting robot

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4047011B2 (en) * 2002-01-10 2008-02-13 三菱電機株式会社 Server, transmission system, walking direction prediction method, and movement direction prediction method
US9805271B2 (en) * 2009-08-18 2017-10-31 Omni Ai, Inc. Scene preset identification using quadtree decomposition analysis
CN114371632A (en) * 2021-12-29 2022-04-19 达闼机器人有限公司 Intelligent equipment control method, device, server and storage medium

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109906435A (en) * 2016-11-08 2019-06-18 夏普株式会社 Mobile member control apparatus and moving body control program
CN106933227A (en) * 2017-03-31 2017-07-07 联想(北京)有限公司 The method and electronic equipment of a kind of guiding intelligent robot
CN110457406A (en) * 2018-05-02 2019-11-15 北京京东尚科信息技术有限公司 Map constructing method, device and computer readable storage medium
CN108638062A (en) * 2018-05-09 2018-10-12 科沃斯商用机器人有限公司 Robot localization method, apparatus, positioning device and storage medium
CN110533553A (en) * 2018-05-25 2019-12-03 阿里巴巴集团控股有限公司 Service providing method and device
CN110569913A (en) * 2019-09-11 2019-12-13 北京云迹科技有限公司 Scene classifier training method and device, scene recognition method and robot
CN110765525A (en) * 2019-10-18 2020-02-07 Oppo广东移动通信有限公司 Method, device, electronic equipment and medium for generating scene picture
CN110889871A (en) * 2019-12-03 2020-03-17 广东利元亨智能装备股份有限公司 Robot running method and device and robot
CN112183285A (en) * 2020-09-22 2021-01-05 合肥科大智能机器人技术有限公司 3D point cloud map fusion method and system for transformer substation inspection robot
CN112729321A (en) * 2020-12-28 2021-04-30 上海有个机器人有限公司 Robot map scanning method and device, storage medium and robot
CN112947424A (en) * 2021-02-01 2021-06-11 国网安徽省电力有限公司淮南供电公司 Distribution network operation robot autonomous operation path planning method and distribution network operation system
CN113050649A (en) * 2021-03-24 2021-06-29 西安科技大学 Remote control system and method for inspection robot driven by digital twin
CN113263497A (en) * 2021-04-07 2021-08-17 新兴际华科技发展有限公司 Remote intelligent man-machine interaction method for fire-fighting robot
CN113240031A (en) * 2021-05-25 2021-08-10 中德(珠海)人工智能研究院有限公司 Panoramic image feature point matching model training method and device and server

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023124017A1 (en) * 2021-12-29 2023-07-06 达闼机器人股份有限公司 Intelligent device control method and apparatus, and server and storage medium
CN115847488A (en) * 2023-02-07 2023-03-28 成都秦川物联网科技股份有限公司 Industrial Internet of things system for cooperative robot monitoring and control method
CN115847488B (en) * 2023-02-07 2023-05-02 成都秦川物联网科技股份有限公司 Industrial Internet of things system for collaborative robot monitoring and control method
US11919166B2 (en) 2023-02-07 2024-03-05 Chengdu Qinchuan Iot Technology Co., Ltd. Industrial internet of things for monitoring collaborative robots and control methods, storage media thereof

Also Published As

Publication number Publication date
WO2023124017A1 (en) 2023-07-06

Similar Documents

Publication Publication Date Title
US10834317B2 (en) Connecting and using building data acquired from mobile devices
CN113284240B (en) Map construction method and device, electronic equipment and storage medium
Turner et al. Fast, automated, scalable generation of textured 3D models of indoor environments
WO2023124017A1 (en) Intelligent device control method and apparatus, and server and storage medium
EP3032369B1 (en) Methods for clearing garbage and devices for the same
WO2019233445A1 (en) Data collection and model generation method for house
CN108234918B (en) Exploration and communication architecture method and system of indoor unmanned aerial vehicle with privacy awareness
Sankar et al. Capturing indoor scenes with smartphones
Gao et al. Robust RGB-D simultaneous localization and mapping using planar point features
US20070100498A1 (en) Mobile robot
US11729511B2 (en) Method for wall line determination, method, apparatus, and device for spatial modeling
US11269350B2 (en) Method for creating an environment map for a processing unit
JP2022539420A (en) Methods, systems and non-transitory computer readable media for supporting experience sharing between users
Mojtahedzadeh Robot obstacle avoidance using the Kinect
Chow Multi-sensor integration for indoor 3D reconstruction
CN112015187A (en) Semantic map construction method and system for intelligent mobile robot
JP5552069B2 (en) Moving object tracking device
des Bouvrie, Improving RGBD indoor mapping with IMU data
CN116762090A (en) Method, system, and non-transitory computer-readable recording medium for supporting experience sharing between users
CN112053415B (en) Map construction method and self-walking equipment
US20180350216A1 (en) Generating Representations of Interior Space
CN116310918B (en) Indoor key object identification and positioning method, device and equipment based on mixed reality
KR101686797B1 (en) Method for analyzing a visible area of a closed circuit television considering the three dimensional features
RU2679200C1 (en) Data from the video camera displaying method and system
KR20200128827A (en) Method and system for managing on-site based on 3D laser scanning data and 360VR image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai
Applicant after: Dayu robot Co.,Ltd.
Address before: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai
Applicant before: Dalu Robot Co.,Ltd.
RJ01 Rejection of invention patent application after publication
Application publication date: 20220419