CN111604898A - Livestock retrieval method, robot, terminal equipment and storage medium


Info

Publication number: CN111604898A (application CN202010358932.9A; granted as CN111604898B)
Authority: CN (China)
Prior art keywords: robot, livestock, target, lost, action
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111604898B
Inventors: 刘大志, 孙其民, 顾震江
Current assignee: Youdi Robot (Wuxi) Co., Ltd.
Original assignee: Uditech Co Ltd
Application filed by Uditech Co Ltd; priority to CN202010358932.9A

Classifications

    • B25J 9/1602: Programme-controlled manipulators; programme controls characterised by the control system, structure, architecture
    • B25J 11/00: Manipulators not otherwise provided for
    • B25J 9/1664: Programme controls characterised by programming, planning systems for manipulators; motion, path, trajectory planning
    • B25J 9/1679: Programme controls characterised by the tasks executed

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The application is applicable to the technical field of robots and provides a livestock retrieval method, a robot, a terminal device and a storage medium. The livestock retrieval method comprises the following steps: acquiring a search instruction, and searching for livestock in a target area according to the search instruction; if a lost animal is identified during the livestock search, sending a target position to instruct a second robot to go to the target position, where the target position is the position at which the first robot identified the lost animal; and if the second robot is detected to have reached the target position, cooperating with the second robot to drive the lost animal to a destination. The embodiment of the application can automate livestock searching, making it convenient, and can retrieve lost livestock efficiently and accurately.

Description

Livestock retrieval method, robot, terminal equipment and storage medium
Technical Field
The application belongs to the technical field of robots, and particularly relates to a livestock retrieval method, a robot, terminal equipment and a storage medium.
Background
In current agricultural production, animal husbandry is an important part of agriculture and is mainly carried out by keeping and grazing livestock on farms. A livestock farm generally occupies a large area, and once an animal is lost, a large amount of manpower and material resources are needed to find it, which is very inconvenient.
Disclosure of Invention
In view of this, embodiments of the present application provide a livestock retrieval method, a robot, a terminal device and a storage medium, so as to solve the prior-art problem of how to retrieve lost livestock conveniently and effectively.
A first aspect of an embodiment of the present application provides a livestock retrieval method, which is applied to a first robot, and includes:
acquiring a search instruction, and searching livestock in a target area according to the search instruction;
if a lost animal is identified in the animal searching process, sending a target position to instruct a second robot to go to the target position, wherein the target position is a position where the first robot identifies the lost animal;
and if the second robot is detected to have reached the target position, cooperating with the second robot to drive the lost livestock to a destination.
A second aspect of the embodiments of the present application provides a livestock retrieval method, which is applied to a server, and includes:
acquiring a target position sent by a first robot, wherein the target position is the position at which the first robot identified the lost livestock;
notifying a second robot to go to the target position;
after the second robot is detected to have reached the target position, planning the current target behavior and determining the target action corresponding to the target behavior according to the first state characteristic information and the destination; the first state characteristic information is the state characteristic information before the current target action is executed, the state characteristic information at least comprises motion information of the first robot, the second robot and the lost livestock, and the target behavior comprises a formation keeping behavior, a driving behavior and an obstacle avoidance behavior;
and sending action instructions corresponding to the target actions to the first robot and the second robot so as to instruct the first robot and the second robot to execute the corresponding target actions to cooperatively drive the lost livestock.
A third aspect of embodiments of the present application provides a robot, including:
the search instruction acquisition unit is used for acquiring a search instruction and searching livestock in a target area according to the search instruction;
a target position sending unit, configured to send a target position to instruct other robots to go to the target position if a lost livestock is identified in the livestock searching process, where the target position is a position where the lost livestock is identified by the robot;
and the first cooperation unit is used for cooperating with the other robots to drive the lost livestock to a destination if the other robots are detected to reach the target positions.
A fourth aspect of an embodiment of the present application provides a server, including:
a target position acquiring unit, configured to acquire a target position sent by a first robot, where the target position is the position at which the first robot identified the lost livestock;
a notification unit configured to notify the second robot to go to the target position;
the planning unit is used for planning the current target behavior and determining the target action corresponding to the target behavior according to the first state characteristic information and the destination after detecting that the second robot has reached the target position; the first state characteristic information is the state characteristic information before the current target action is executed, the state characteristic information at least comprises motion information of the first robot, the second robot and the lost livestock, and the target behavior comprises a formation keeping behavior, a driving behavior and an obstacle avoidance behavior;
and the action instruction sending unit is used for sending action instructions corresponding to the target actions to the first robot and the second robot so as to instruct the first robot and the second robot to execute the corresponding target actions to cooperatively drive the lost livestock.
A fifth aspect of embodiments of the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, causes the terminal device to carry out the steps of the livestock retrieval method as described.
A sixth aspect of embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes a terminal device to carry out the steps of the livestock retrieval method as described.
A seventh aspect of embodiments of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the steps of the livestock retrieval method as described.
Compared with the prior art, the embodiments of the present application have the following advantages: in the embodiment of the application, the first robot automatically searches for livestock and finds the lost animal, sends the identified target position of the lost animal to the second robot, and, in cooperation with the second robot, drives the lost animal to the destination accurately and effectively. Because robots retrieve the livestock, the search is automated and convenient, saving labor and time costs; moreover, because the robots cooperate to drive the lost livestock, the animal reaches the destination more efficiently and accurately, improving the effectiveness of livestock retrieval.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic view of an application scenario of a livestock retrieval method provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a first livestock retrieval method provided by an embodiment of the present application;
fig. 3 is a scene schematic diagram of a preset formation, specifically a straight formation, according to an embodiment of the present application;
fig. 4 is a schematic view of a preset formation, specifically a V-type formation, according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a livestock retrieval system according to an embodiment of the present application;
fig. 6 is a schematic flow chart of an implementation of a second livestock retrieval method provided by an embodiment of the present application;
fig. 7 is a schematic structural diagram of a driving task model provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a robot according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a server provided in an embodiment of the present application;
fig. 10 is a schematic diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Example one:
fig. 1 shows a schematic application scenario of a livestock retrieval method provided by an embodiment of the application. The application scenario comprises a lost animal 11, at least two robots (three robots 121-123 are shown in fig. 1 as an example) and a destination 13. In this scenario, each robot starts searching for livestock after acquiring a search instruction; if any robot identifies the lost animal 11 during the search, it reports the identified position to the other robots so that they go to that position and cooperate with it in driving the lost animal 11 to the destination 13, thereby retrieving the lost livestock efficiently and accurately.
Fig. 2 shows a schematic flow chart of a first livestock retrieval method provided by an embodiment of the present application, where an execution subject of the livestock retrieval method is a robot, and the robot is specifically referred to as a first robot for the sake of distinction and description, but not limitation. The livestock retrieval method shown in fig. 2 is detailed as follows:
in S201, a search instruction is obtained, and livestock search is carried out in a target area according to the search instruction.
The first robot obtains a search instruction from a server or a user terminal, where the search instruction may include information on the target area, a preset route, images of the lost livestock, and the like. Specifically, the server or the user terminal defines the area each robot is responsible for searching and sends a search instruction to every robot simultaneously, instructing each to search within its own area. After the first robot obtains the search instruction, it searches for livestock in the target area accordingly. Optionally, the first robot moves within the target area and monitors its surrounding environment through a camera to search for livestock.
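As a rough illustration of such an instruction, the sketch below models the payload and the per-robot area assignment in Python; the schema, the field names and the assign_search_areas helper are hypothetical, not taken from the patent.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Point = Tuple[float, float]

    @dataclass
    class SearchInstruction:
        """Hypothetical payload sent by the server or user terminal to one robot."""
        target_area: List[Point]  # polygon vertices bounding this robot's search area
        preset_route: List[Point] = field(default_factory=list)  # waypoints; empty if the robot plans its own route
        lost_livestock_images: List[bytes] = field(default_factory=list)  # reference images of the lost animal

    def assign_search_areas(areas: List[List[Point]],
                            robot_ids: List[str]) -> Dict[str, SearchInstruction]:
        # One area per robot, mirroring the per-robot assignment described above.
        return {rid: SearchInstruction(target_area=area)
                for rid, area in zip(robot_ids, areas)}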
Optionally, the step S201 includes:
acquiring a search instruction, moving along a preset route in the target area according to the search instruction, and searching for livestock through infrared sensing and/or an odor search algorithm.
In the embodiment of the application, the preset route may be planned in advance by the server or the user terminal, in which case the search instruction the first robot obtains from either of them includes the information of the preset route; alternatively, the preset route is a route the first robot plans itself from the information of the target area after obtaining a search instruction containing that information. After obtaining the search instruction, the first robot moves along the preset route within the target area to search for livestock, guided by the route information and its inertial navigation system. Optionally, while moving along the preset route, the first robot avoids obstacles autonomously using the visual information acquired by its camera and the information on surrounding obstacles detected by its obstacle detection device, so that it can search for livestock safely and efficiently.
Optionally, the first robot searches for livestock by infrared sensing while moving along the preset route within the target area. The first robot is provided with an infrared sensor, performs real-time infrared sensing through it while moving, and judges from the sensed information whether livestock are nearby. When the infrared sensor detects infrared radiation produced by an animal, the camera is triggered to capture an image of that animal; the image is then compared with the prestored image of the lost livestock, and if the similarity between the two is higher than a preset value, the currently identified animal is judged to be the lost livestock.
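A minimal sketch of this trigger-and-compare flow follows, assuming grayscale histogram intersection as the similarity measure; the metric, the threshold value and the capture_image callback are illustrative stand-ins, since the patent fixes none of them.

    import numpy as np

    SIMILARITY_THRESHOLD = 0.8  # stands in for the patent's "preset value"

    def image_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
        # Toy metric: intersection of normalized grayscale histograms, in [0, 1].
        ha = np.histogram(img_a, bins=64, range=(0, 256))[0].astype(float)
        hb = np.histogram(img_b, bins=64, range=(0, 256))[0].astype(float)
        ha /= max(ha.sum(), 1.0)
        hb /= max(hb.sum(), 1.0)
        return float(np.minimum(ha, hb).sum())

    def on_infrared_trigger(capture_image, prestored_image: np.ndarray) -> bool:
        # The IR event switches the camera on; the shot is compared with the
        # prestored image of the lost livestock.
        candidate = capture_image()
        return image_similarity(candidate, prestored_image) > SIMILARITY_THRESHOLD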
Optionally, the first robot searches for livestock by means of an odor search algorithm while moving along the preset route within the target area. Specifically, the first robot is provided with an odor sensor; when it detects a livestock odor with a concentration above a preset threshold during the moving search, it determines the position of the corresponding odor source, which is the position of the lost livestock, using a preset odor search algorithm. After determining the position of the odor source, the first robot moves toward it, switching on its camera along the way to detect whether livestock are present; if an animal is detected, its image is captured and compared against the prestored image of the lost livestock. Optionally, the odor search algorithm comprises an odor-packet density path estimation algorithm based on the arrival probability, which estimates in real time the route to the odor source of the odor packets corresponding to the current livestock odor, so as to direct the first robot to search for and approach the odor source along that route. Optionally, the odor search algorithm may further comprise a particle-filter-based odor source location estimation algorithm that runs in parallel with the search: while the odor source is being searched for, the posterior probability distribution of its location is estimated from the odor concentration and the wind speed/direction information, so that probable source locations are found and the livestock can be located quickly.
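The sketch below shows one update step of a particle filter of the kind described, maintaining a posterior over candidate odor-source positions from a concentration reading and the wind direction; the Gaussian plume likelihood and all constants are assumptions, since the patent only names the inputs.

    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, sensor_pos, measured_conc, wind_dir):
        """particles: (N, 2) candidate source positions; weights: (N,) posterior
        weights; wind_dir: unit 2-vector. Returns the resampled particles and
        reset weights."""
        delta = sensor_pos - particles                     # candidate source -> sensor
        dist = np.linalg.norm(delta, axis=1) + 1e-6
        downwind = np.clip((delta @ wind_dir) / dist, 0.0, 1.0)
        expected = np.exp(-dist / 10.0) * downwind         # placeholder plume model
        likelihood = np.exp(-((measured_conc - expected) ** 2) / (2 * 0.05 ** 2))
        weights = weights * likelihood + 1e-12
        weights /= weights.sum()
        # Systematic resampling concentrates particles on probable source locations.
        u = (np.arange(len(weights)) + rng.random()) / len(weights)
        idx = np.searchsorted(np.cumsum(weights), u)
        particles = particles[idx] + rng.normal(0.0, 0.2, particles.shape)  # jitter
        return particles, np.full(len(weights), 1.0 / len(weights))

The weighted mean of the particle set then serves as the current estimate of the odor source, i.e. the probable position of the lost livestock.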
According to the embodiment of the application, because the first robot moves along the preset route according to the search instruction and performs further detection and analysis with infrared sensing and/or an odor search algorithm while moving, livestock can be searched for efficiently and accurately within the target area.
In S202, if a lost animal is identified in the animal search process, a target position is sent to instruct a second robot to go to the target position, the target position being a position where the first robot identifies the lost animal.
If the first robot identifies the lost livestock during the search (for example, after capturing an image to be identified through its camera, it detects that the image matches the preset image information of the lost livestock, or, after sending the image to the user terminal, it receives confirmation information returned by the terminal), it is determined that the first robot has identified the lost livestock, and the target position is sent, the target position being the position at which the first robot identified the lost livestock. Specifically, when the first robot identifies the lost livestock, it either locates its own position through a Global Positioning System (GPS) and sends that as the target position, or, in addition to locating its own position, further detects the distance between itself and the lost livestock and then calculates the target position from its own position and that distance before sending it. Specifically, the first robot establishes communication with the other robots on the farm (referred to as second robots for the sake of distinction) either directly or indirectly through the server, and thereby transmits the target position to the second robot directly or indirectly, instructing it to go to the target position.
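A minimal sketch of the second variant follows. The patent says only that the target position is calculated from the robot's own position and the measured distance; the bearing term below is an added assumption, needed to resolve the direction in which that distance points.

    import math

    def target_position(robot_x, robot_y, robot_heading_rad,
                        bearing_rad, distance_m):
        # Convert a range/bearing measurement taken in the robot frame into
        # world coordinates anchored at the robot's own GPS fix.
        theta = robot_heading_rad + bearing_rad
        return (robot_x + distance_m * math.cos(theta),
                robot_y + distance_m * math.sin(theta))

    # Without a range measurement, the robot's own GPS position is sent as-is:
    # target = (robot_x, robot_y)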
In S203, if it is detected that the second robot reaches the target position, the lost livestock is driven to a destination in cooperation with the second robot.
After the first robot acquires indication information, sent directly by the second robot or relayed by the server, indicating that the second robot has reached the target position, it cooperates with the second robot to drive the lost livestock to the destination, so that the animal can be retrieved.
Optionally, if the first robot is a designated monitoring robot, the cooperating with the second robot to drive the lost livestock to a destination comprises:
detecting the motion information of the lost livestock in real time, and instructing or cooperating with the second robot to drive the lost livestock to a destination according to the motion information of the lost livestock.
In an embodiment of the application, the cooperating robots include a monitoring robot for detecting the motion information of the lost livestock in real time and driving robots for driving the lost livestock.
Optionally, the monitoring robot may be a robot independent of the driving robots (i.e. the monitoring robot itself does not take part in driving the livestock); in one cooperation, the cooperating robots specifically include one monitoring robot and a plurality of driving robots. If the first robot is the designated monitoring robot, the second robots are the driving robots. In this case, the first robot detects the motion information of the lost livestock in real time and instructs the second robots to drive the lost livestock to the destination according to that motion information. Illustratively, the first robot is an unmanned aerial vehicle that obtains the motion information of the lost livestock by monitoring and filming it from above in real time, transmits this motion information to the second robots, and instructs them to drive the lost livestock to the destination.
Alternatively, the monitoring robot may be one designated from among the driving robots, i.e. it both detects the motion information of the lost animal in real time and takes part in driving it. In this case, if the first robot is the designated monitoring robot, then after transmitting the motion information to the second robot, the first robot acts as one of the driving robots and, together with the second robot and according to the motion information, drives the lost livestock to the destination.
Optionally, the motion information comprises the position information, movement speed and movement direction of the lost livestock. The first robot and the second robot adjust their own positions according to the position information of the lost livestock, and adjust their own movement speed and movement direction according to those of the lost livestock, so that the animal can be monitored and driven in real time, accurately and efficiently.
Optionally, in this embodiment of the application, when the first robot is the monitoring robot, detecting the motion information of the lost livestock through a moving-object detection algorithm specifically includes the following steps:
(1) capturing, through the camera, a preset number of consecutive original video frames of the lost livestock as first images;
(2) smoothing and denoising each first image, then enhancing it to strengthen its edge and detail information, obtaining a second image;
(3) performing gray-scale region segmentation and livestock target extraction on each second image, and achieving target association and target detection through the temporal connectivity of the target across the second images, so as to obtain the movement speed and movement direction of the lost livestock;
(4) accurately extracting the target image corresponding to the lost livestock from the second images using a hierarchical adaptive background-difference algorithm;
(5) calculating the centroid coordinates of the target image, computing the coordinates (x1, y1, z1) of the lost livestock in the camera coordinate system from the mapping relation between the camera and the target image together with the radar ranging result (used as depth information), and then, according to the relative position of the camera and the monitoring robot, applying the coordinate transformation matrix T to obtain the actual physical coordinates (x2, y2, z2) of the lost livestock on the farm.
Through steps (1)-(5) above, the motion information of the lost livestock can be acquired accurately and the animal can be located, so that the second robot can be instructed, or cooperated with, precisely to drive the lost livestock.
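A compact sketch of step (5) under a standard pinhole-camera assumption; the intrinsics fx, fy, cx, cy are assumed parameters, since the patent names only the camera-to-target mapping, the radar depth and the transformation matrix T.

    import numpy as np

    def livestock_world_coords(target_mask: np.ndarray, depth_m: float,
                               fx: float, fy: float, cx: float, cy: float,
                               T: np.ndarray) -> np.ndarray:
        """target_mask: boolean image marking the animal's pixels (must be
        non-empty); T: 4x4 homogeneous camera-to-farm transform.
        Returns (x2, y2, z2)."""
        ys, xs = np.nonzero(target_mask)
        u, v = xs.mean(), ys.mean()                # centroid in image coordinates
        x1 = (u - cx) * depth_m / fx               # back-projection: radar depth is z1
        y1 = (v - cy) * depth_m / fy
        p_cam = np.array([x1, y1, depth_m, 1.0])   # (x1, y1, z1) in the camera frame
        x2, y2, z2, _ = T @ p_cam                  # actual physical coordinates on the farm
        return np.array([x2, y2, z2])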
In the embodiment of the application, when the first robot is the designated monitoring robot, it detects the motion information of the lost livestock in real time and, according to that information, instructs or cooperates with the second robot to drive the animal, so the lost livestock can be driven to the destination more accurately and efficiently.
Optionally, said driving the lost livestock to a destination in cooperation with the second robot comprises:
instructing the second robot, or cooperating with it, to surround the lost livestock while maintaining a preset formation and to move towards the destination, so as to drive the lost livestock to the destination.
In the embodiment of the application, when the first robot is the monitoring robot and the monitoring robot is not one of the driving robots, the number of second robots is two or more; the monitoring robot instructs these second robots to form a robot driving cooperation group, and that group surrounds the lost livestock while keeping a preset formation and moves toward the destination, so as to drive the lost livestock there. When the first robot is one of the driving robots (i.e. it takes part in driving the livestock), the number of second robots is one or more, and the first and second robots together form the robot driving cooperation group; the first robot then cooperates with the second robot(s) to keep the preset formation, surround the lost livestock and move toward the destination. That is, the robot driving cooperation group includes two or more robots.

Optionally, when the robot driving cooperation group includes two robots, the preset formation is a straight-line formation: the two robots are positioned in front of and behind the lost livestock, in a straight line, and then hold that formation, hemming the animal in from front and rear to drive it to the destination. Exemplarily, as shown in fig. 3, after the first robot 121 identifies the lost animal 11, it instructs the second robot 122 to go to the target position, after which the first robot 121 and the second robot 122 line up in a straight formation and cooperate to drive the lost animal 11 to the destination 13. Alternatively, when the cooperation group includes more than two robots, the preset formation is a V formation: the robots, arranged in a V, enclose the lost livestock in the middle, hold the formation, and move toward the destination to drive the animal there. Illustratively, as shown in fig. 4, after the first robot 121 identifies the lost animal 11, it instructs the second robots 122 and 123 to go to the target position, after which the first robot 121, the second robot 122 and the second robot 123 arrange themselves in a V formation and cooperate to surround and drive the lost animal 11 to the destination 13.
In the embodiment of the application, because the robots keep a preset formation while surrounding the lost livestock and moving toward the destination, they can effectively contain and steer the animal's movement, so the lost livestock can be driven to the destination efficiently.
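The sketch below computes illustrative slot positions for both formations; the enclosure radius and the 45-degree wing angle are assumed values, since the patent fixes only the formation shapes.

    import numpy as np

    def formation_slots(animal_pos, goal_pos, n_robots, radius=3.0):
        # Two robots: straight formation, one ahead of and one behind the animal
        # along the animal-to-destination axis. Three or more: V formation behind
        # the animal, wings alternating left and right.
        animal = np.asarray(animal_pos, dtype=float)
        axis = np.asarray(goal_pos, dtype=float) - animal
        axis /= np.linalg.norm(axis) + 1e-9        # unit vector animal -> destination
        if n_robots == 2:
            return [animal + radius * axis, animal - radius * axis]
        wing = np.deg2rad(45.0)
        slots = [animal - radius * axis]           # apex robot directly behind the animal
        for k in range(1, n_robots):
            side = 1.0 if k % 2 else -1.0          # alternate the wings
            c, s = np.cos(side * wing), np.sin(side * wing)
            rot = np.array([[c, -s], [s, c]])
            depth = radius * (1 + (k + 1) // 2)    # wings sit progressively farther back
            slots.append(animal - depth * (rot @ axis))
        return slots

Each robot steers toward its slot while the shared reference point (the animal's position) moves toward the destination, which keeps the formation as a whole.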
Optionally, the sending a target location to instruct a second robot to go to the target location if a lost animal is identified in the animal searching process comprises:
if a lost livestock is identified in the livestock searching process, sending a target position to a server to instruct the server to inform the second robot to go to the target position;
correspondingly, said driving the lost livestock to a destination in cooperation with the second robot comprises:
acquiring an action instruction which is sent by a server and corresponds to a target action;
and executing the target action according to the action instruction so as to realize the cooperation with the second robot and drive the lost livestock to the destination.
In this embodiment of the present application, the livestock retrieval method is further implemented by a livestock retrieval system, which includes at least a plurality of robots (two or more) and a server in communication with them. The structure of the livestock retrieval system is shown schematically in fig. 5. Optionally, the system may further include a user terminal in communication with the robots and/or the server, used for sending instructions during the livestock search, monitoring the search progress, and so on.
Correspondingly, in step S202 the target position is sent to the server, and the server notifies the second robot to go to the target position.
Correspondingly, in step S203, the cooperative driving by the first robot and the second robot is realized by receiving the action instructions sent by the server. Specifically, after acquiring the target position and the position information of the first and second robots, the server plans the robots' behavior and actions, determines the target action each robot currently needs to execute, generates action instructions, and thereby controls the robots to cooperate accurately and efficiently in driving the lost livestock. After the first robot receives the action instruction corresponding to its target action, produced by the server's planning, it executes the target action according to the instruction, thereby cooperating with the second robot to drive the lost livestock to the destination.
In the embodiment of the application, information forwarding and action planning are carried out by the server. Since a server usually has strong computing capacity and a complete view of every robot's information, it can forward information and plan actions accurately and efficiently, allowing the first robot to cooperate with the second robot more effectively and drive the lost livestock to the destination accurately.
Optionally, after the obtaining of the search instruction and the performing of the livestock search in the target area according to the search instruction, the method further includes:
and if the target positions sent by other robots are acquired in the livestock searching process, the livestock searching robot goes to the target positions to cooperate with the other robots to drive the lost livestock to the destination.
In the embodiment of the application, when the first robot acquires a target position sent by another robot during its search, this indicates that the other robot has identified the lost livestock; the sent target position is specifically the position at which that robot identified the animal. The first robot then goes to the target position to cooperate with the other robot in driving the lost livestock to the destination.
In the embodiment of the application, the first robot automatically searches for livestock and finds the lost animal, sends the identified target position of the lost animal to the second robot, and, in cooperation with the second robot, drives the lost animal to the destination accurately and effectively. Because robots retrieve the livestock, the search is automated and convenient, saving labor and time costs; moreover, because the robots cooperate to drive the lost livestock, the animal reaches the destination more efficiently and accurately, improving the effectiveness of livestock retrieval.
Example two:
fig. 6 shows a schematic flow chart of a second livestock retrieval method provided by an embodiment of the present application. This method is executed by the server shown in fig. 5, which may be a cloud server. The method is detailed as follows:
in S601, a target position sent by the first robot is obtained, where the first robot recognizes the lost livestock.
The server obtains the target position sent by the first robot; this is either the position of the first robot itself when it identified the lost livestock, or the position of the lost livestock estimated from the first robot's own position and the detected distance between the first robot and the animal at the moment of identification.
Optionally, before the step S601, the method further includes:
and sending a search instruction to each robot so as to instruct each robot to search livestock in the corresponding target area.
In the embodiment of the application, each robot searches for livestock according to the search instruction sent by the server, and the robot that identifies the lost livestock during the search is the first robot.
In S602, the second robot is notified to go to the target position.
In the embodiment of the present application, the second robot is a robot other than the first robot. After obtaining the target position sent by the first robot, the server sends an instruction to a preset number of second robots, notifying them to go to the target position and cooperate with the first robot, where the preset number is greater than or equal to 1.
In S603, after it is detected that the second robot has reached the target position, the current target behavior is planned and the target action corresponding to the target behavior is determined according to the first state characteristic information and the destination; the first state characteristic information is the state characteristic information before the current target action is executed, the state characteristic information at least comprises motion information of the first robot, the second robot and the lost livestock, and the target behavior comprises a formation keeping behavior, a driving behavior and an obstacle avoidance behavior.
When the server detects that the second robot has reached the target position, it plans the target behavior through a driving task model and, from that behavior, determines a corresponding target action for each robot (first and second) currently cooperating to drive the livestock. Specifically, the structure of the driving task model is shown in fig. 7 and comprises a general task layer, a behavior layer and an action layer. The general task layer holds the server's current general task, "drive the livestock to the destination"; it obtains the information of the preset destination and the first state characteristic information reported by each robot, and determines the target behavior currently to be executed from the target behaviors prestored in the behavior layer according to the first state characteristic information and prestored historical experience information, which may be a mapping between state characteristic information and target behaviors entered by a user or collected or learned by the server itself. In this embodiment of the application, the state characteristic information at least comprises the motion information of the first robot, the second robot and the lost livestock, and also the environment information monitored by the robots; the motion information may include position information, movement speed information and movement direction information, and the first state characteristic information is the state characteristic information of each robot before the current target action is executed. The target behaviors in the behavior layer include a formation keeping behavior for controlling the robots to hold a formation surrounding the livestock, a driving behavior for instructing the robots to drive the lost livestock toward the destination, and an obstacle avoidance behavior for instructing the robots to avoid obstacles. The action layer prestores a series of target actions corresponding to each target behavior, and the target action each robot currently needs to execute is determined from the action layer according to the current first state information and the target behavior determined in the behavior layer. The target actions may specifically include accelerating, decelerating, moving forward, reversing, turning left, turning right, approaching the livestock, moving away from the livestock, and the like.
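A toy rendering of the three layers as lookup-driven Python follows; the mapping tables stand in for the prestored historical experience and the behavior/action libraries, and every name in it is hypothetical.

    from typing import Callable, Dict, List

    Behavior = str  # "keep_formation" | "drive" | "avoid_obstacle"
    Action = str    # "accelerate" | "decelerate" | "forward" | "turn_left" | ...

    def choose_behavior(state: Dict, experience: Callable[[Dict], Behavior]) -> Behavior:
        # General-task layer -> behavior layer: pick the current target behavior
        # from the state features and the (learned or configured) experience map.
        if state.get("obstacle_ahead"):
            return "avoid_obstacle"   # safety overrides the learned mapping
        return experience(state)

    def choose_actions(behavior: Behavior, robots: List[str]) -> Dict[str, Action]:
        # Behavior layer -> action layer: expand the behavior into one concrete
        # target action per robot (a real system would differentiate per robot).
        table = {"keep_formation": "forward",
                 "drive": "approach_livestock",
                 "avoid_obstacle": "turn_left"}
        return {r: table[behavior] for r in robots}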
In S604, an action instruction corresponding to the target action is sent to the first robot and the second robot to instruct the first robot and the second robot to execute the corresponding target action to cooperatively drive the lost livestock.
After the target action each robot currently needs to execute is determined, action instructions carrying the action identification information of those target actions are sent to the first robot and the second robot, instructing them to execute their respective target actions, cooperate with each other, and drive the lost livestock to the destination.
Optionally, the planning a current target behavior and determining a target action corresponding to the target behavior according to the first state feature information and the destination includes:
inputting the current first state characteristic information and the destination information into a preset neural network for processing to obtain a target behavior and a corresponding target action;
correspondingly, after the sending of the action command corresponding to the target action to the first robot and the second robot to instruct the first robot and the second robot to cooperate to drive the lost livestock, the method further includes:
if the lost livestock is detected to reach the destination, completing the livestock retrieval task; otherwise, the following steps are executed:
A1: acquiring second state characteristic information obtained after the first robot and the second robot execute the target action;
A2: determining current reward information according to the first state characteristic information, the second state characteristic information and a target reward function;
A3: updating the preset neural network according to the reward information, taking the second state characteristic information as the updated first state characteristic information, and returning to execute the step of inputting the current first state characteristic information and the destination information into the preset neural network for processing to obtain the target behavior and the corresponding target action, together with the subsequent steps.
In the embodiment of the application, the current target behavior and the corresponding target action are determined by a preset neural network and a deep learning algorithm.
Specifically, in step S603, the current first state feature information and the destination information are input into a preset neural network for processing; the current target behavior and the corresponding target action are determined, and the information of the target action is output. The preset neural network is trained in advance on a training sample library, which contains sample data labeled with state characteristic information and the corresponding target action information.
Specifically, the preset neural network comprises a global data storage unit, a decision unit, a behavior library and an evaluation unit, wherein the global data storage unit is used for storing acquired global data such as task information, destination information, first state feature information and sample data; the behavior library is used for storing the network parameters obtained by training; the decision unit is composed of multiple network layers and used for acquiring network parameters obtained by training from a behavior library, and performing deep learning operation processing on the first state characteristic information and the destination information to obtain corresponding target behaviors and target action information; the evaluation unit is used for acquiring second state characteristic information after each robot executes the target action, and calculating a reward value according to the first state characteristic information, the second state characteristic information and the target reward function so as to adjust the network parameters according to the reward value.
Correspondingly, after step S604: if the lost livestock is detected to have reached the destination, the current livestock retrieval task is complete; otherwise, deep reinforcement learning is performed through steps A1-A3 to continually adjust the preset neural network and optimize each round of target action planning.
In A1, after detecting that the first robot and the second robot have executed the target actions, the server obtains the second state characteristic information, i.e. the state characteristic information as updated after each robot executed this round of target actions.
In A2, based on the first state characteristic information before each robot executed the target action and the second state characteristic information afterwards, the change in the distance between the lost animal and the destination is determined in order to compute a first reward value R1, and/or the deviation of each robot's travel direction from the planned travel direction is determined in order to compute a second reward value R2; the final reward information is then determined from the target reward function together with the first and/or second reward values.
Specifically, the distance d1 between the lost animal and the destination before the robots executed the target action is determined from the position information of the lost animal contained in the first state characteristic information, and the distance d2 after execution is determined from the position information of the lost animal contained in the second state characteristic information. If d1 > d2, the lost animal moved closer to the destination after the robots executed the target action; the current target action is regarded as accurate and effective, and the first reward value R1 is assigned a positive first preset value. If d1 < d2, the lost animal instead moved farther from the destination, indicating that the current target action plan was not accurate enough, and R1 is assigned a negative second preset value. If d1 = d2, the target action had no effect on the animal's progress toward the destination, and R1 = 0.
Specifically, the travel direction of the robot formation is determined from the movement direction information of the first and second robots contained in the first state characteristic information; the angle between the formation's travel direction and the travel direction planned by the server is taken as the target detection angle, and the second reward value R2 is determined from it: the smaller the target detection angle, the larger R2.
Optionally, the final reward information is a target reward value R, determined from the first reward value R1, the second reward value R2 and the target reward function. Illustratively, the target reward function is R = R1*K1 + R2*K2, where "*" denotes multiplication and K1 and K2 are preset weight values.
In A3, the network parameters of the preset neural network are updated by back-propagation according to the determined reward information, yielding the updated preset neural network. The current second state characteristic information is then taken as the first state characteristic information for the next round of target action planning, and execution returns to the step of inputting the current first state characteristic information and the destination information into the preset neural network to obtain the target behavior and the corresponding target action, together with the subsequent steps, until the lost livestock reaches the destination.
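A minimal sketch of the A2 reward computation, following R = R1*K1 + R2*K2; the cosine shaping of R2 and all preset values are illustrative assumptions.

    import numpy as np

    def reward(first_state, second_state, goal, planned_heading,
               k1=1.0, k2=0.5, r_pos=1.0, r_neg=-1.0):
        """first_state/second_state: dicts holding the animal position and the
        formation heading before and after the actions; goal: destination (x, y)."""
        goal = np.asarray(goal, dtype=float)
        d1 = np.linalg.norm(np.asarray(first_state["animal_pos"], float) - goal)
        d2 = np.linalg.norm(np.asarray(second_state["animal_pos"], float) - goal)
        if d1 > d2:
            r1 = r_pos        # animal moved closer: positive first preset value
        elif d1 < d2:
            r1 = r_neg        # animal drifted away: negative second preset value
        else:
            r1 = 0.0          # no effect on progress toward the destination
        angle = abs(second_state["formation_heading"] - planned_heading)
        r2 = float(np.cos(angle))  # smaller detection angle -> larger R2
        return r1 * k1 + r2 * k2

The resulting value is the reward that A3 feeds back into the network update.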
In the embodiment of the application, the current target action can be determined efficiently and accurately through the processing of the preset neural network; and because the network is optimized according to the reward information determined from the results of executing the target actions, i.e. its performance is further refined by a reinforcement learning algorithm, the accuracy of subsequent target action decisions improves, the planning becomes more accurate and adaptive, and the livestock retrieval task can be completed accurately and efficiently.
In the embodiment of the application, the server acts both as the medium for inter-robot communication and as the behavior and action planning center. After obtaining the target position sent by the first robot and instructing the second robot to go to it, the server accurately plans the target behavior and corresponding target actions the robots currently need to execute, based on the acquired first state characteristic information and the destination information, and sends action instructions directing each robot to execute its target action so as to drive the lost livestock to the destination. Because the server has a complete view of every robot's information, it can forward information and plan actions efficiently, enabling the first robot to cooperate with the second robot more effectively and drive the lost livestock to the destination accurately.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example three:
fig. 8 shows a schematic structural diagram of a robot provided in an embodiment of the present application, and for convenience of explanation, only parts related to the embodiment of the present application are shown:
the robot includes: a search instruction acquisition unit 81, a target position transmission unit 82, and a cooperation unit 83. Wherein:
and the searching instruction acquisition unit 81 is used for acquiring a searching instruction and searching livestock in the target area according to the searching instruction.
A target position sending unit 82, configured to send a target position to instruct other robots to go to the target position if a lost animal is identified during the livestock search, where the target position is the position at which the robot identified the lost animal.
A first cooperation unit 83, configured to cooperate with the other robot to drive the lost livestock to a destination if it is detected that the other robot reaches the target position.
Optionally, if the robot is a monitoring robot, the cooperation unit 83 is specifically configured to detect the motion information of the lost livestock in real time, and instruct or cooperate with the other robots to drive the lost livestock to the destination according to the motion information of the lost livestock.
Optionally, the cooperation unit 83 is specifically configured to instruct or cooperate with the other robots to surround the lost livestock by maintaining a preset formation and move towards the destination to drive the lost livestock to the destination.
Optionally, the target position sending unit 82 is specifically configured to, if a lost livestock is identified in the livestock searching process, send a target position to a server, so as to instruct the server to notify the other robots to go to the target position;
correspondingly, the cooperation unit 83 is specifically configured to obtain an action instruction corresponding to the target action sent by the server; and executing the target action according to the action instruction so as to realize the cooperation with other robots and drive the lost livestock to the destination.
Optionally, the search instruction obtaining unit 81 is specifically configured to obtain a search instruction, move along a preset route in a target area according to the search instruction, and search for livestock through infrared sensing and/or an odor search algorithm.
Optionally, the robot further comprises:
and the second cooperation unit is used for going to the target position to cooperate with other robots to drive the lost livestock to a destination if the target position sent by other robots is acquired in the livestock searching process.
Fig. 9 shows a schematic structural diagram of a server provided in an embodiment of the present application, and for convenience of explanation, only a part related to the embodiment of the present application is shown:
the server includes: target position acquisition section 91, notification section 92, planning section 93, and action instruction transmission section 94. Wherein:
a target position acquiring unit 91, configured to acquire a target position sent by a first robot, where the target position is the position at which the first robot identified the lost livestock;
a notification unit 92, configured to notify the second robot to go to the target position;
the planning unit 93 is configured to plan a current target behavior and determine the target action corresponding to the target behavior according to first state feature information and the destination after it is detected that the second robot has reached the target position; the first state characteristic information is the state characteristic information before the current target action is executed, the state characteristic information at least comprises motion information of the first robot, the second robot and the lost livestock, and the target behavior comprises a formation keeping behavior, a driving behavior and an obstacle avoidance behavior;
an action command sending unit 94, configured to send an action command corresponding to the target action to the first robot and the second robot, so as to instruct the first robot and the second robot to execute the corresponding target action to cooperatively drive the lost livestock.
Optionally, the planning unit 93 is specifically configured to input the current first state feature information and the destination information into a preset neural network for processing, so as to obtain a target behavior and a corresponding target action;
correspondingly, the server further comprises a reward unit, and the reward unit is specifically used for acquiring second state characteristic information obtained after the first robot and the second robot execute the target action; determining current reward information according to the first state characteristic information, the second state characteristic information and a target reward function; and updating the preset neural network according to the reward information, taking the second state characteristic information as the updated first state characteristic information, returning to execute the step of inputting the current first state characteristic information and the destination information into the preset neural network for processing to obtain the target behavior and the corresponding target action and the subsequent steps.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Those skilled in the art will clearly appreciate that the above division into functional units and modules is merely illustrative and adopted for convenience and brevity of description. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are merely for ease of distinguishing them from one another and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
Example four:
Fig. 10 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in Fig. 10, the terminal device 10 of this embodiment includes: a processor 100, a memory 101, and a computer program 102, such as a livestock retrieval program, stored in the memory 101 and executable on the processor 100. The processor 100, when executing the computer program 102, implements the steps in the above livestock retrieval method embodiments, such as steps S201 to S203 shown in Fig. 2 or steps S601 to S604 shown in Fig. 6. Alternatively, the processor 100, when executing the computer program 102, implements the functions of the modules/units in the above device embodiments, such as the functions of units 81 to 83 shown in Fig. 8 or of units 91 to 94 shown in Fig. 9.
Illustratively, the computer program 102 may be divided into one or more modules/units, which are stored in the memory 101 and executed by the processor 100 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution process of the computer program 102 in the terminal device 10. For example, the computer program 102 may be divided into a search instruction acquisition unit, a target position sending unit, and a first cooperation unit, whose specific functions are as follows:
a search instruction acquisition unit, configured to acquire a search instruction and search for livestock in the target area according to the search instruction;
a target position sending unit, configured to, if a lost livestock is identified in the livestock searching process, send a target position to instruct other robots to go to the target position, the target position being the position at which the robot identified the lost livestock;
a first cooperation unit, configured to, if the other robots are detected to have reached the target position, cooperate with the other robots to drive the lost livestock to a destination.
Alternatively, the computer program 102 may be divided into a target position acquiring unit, a notification unit, a planning unit, and an action instruction sending unit, whose specific functions are as follows:
a target position acquiring unit, configured to acquire a target position sent by a first robot, the target position being the position at which the first robot identified the lost livestock;
a notification unit, configured to notify a second robot to go to the target position;
a planning unit, configured to, after it is detected that the second robot has reached the target position, plan a current target behavior and determine the target action corresponding to the target behavior according to first state characteristic information and a destination; the first state characteristic information is the state characteristic information before the current target action is executed, the state characteristic information at least comprises motion information of the first robot, the second robot and the lost livestock, and the target action comprises a formation keeping action, a driving action and an obstacle avoidance action (a geometric sketch of one possible formation follows this list);
an action instruction sending unit, configured to send action instructions corresponding to the target action to the first robot and the second robot, so as to instruct the first robot and the second robot to execute the corresponding target action to cooperatively drive the lost livestock.
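As an illustration of the formation keeping action referenced above, the sketch below computes two driving positions behind the lost livestock relative to the destination, so that the animal is herded forward. The radius and spread angle are illustrative values chosen for this sketch, not parameters from the disclosure.

```python
import math

def formation_slots(livestock, destination, radius=2.0,
                    spread=math.radians(40)):
    """Place two driving robots behind the animal, relative to the
    direction it should move, so it is pushed toward the destination."""
    lx, ly = livestock
    dx, dy = destination
    heading = math.atan2(dy - ly, dx - lx)  # desired direction of travel
    rear = heading + math.pi                # robots stand behind the animal
    return [(lx + radius * math.cos(rear + o),
             ly + radius * math.sin(rear + o))
            for o in (-spread, spread)]

# Livestock at the origin, destination due east: both slots fall to the west.
print(formation_slots((0.0, 0.0), (10.0, 0.0)))
```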
The terminal device 10 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 100 and the memory 101. Those skilled in the art will appreciate that Fig. 10 is merely an example of the terminal device 10 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal device may also include input/output devices, network access devices, buses, and the like.
The processor 100 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 101 may be an internal storage unit of the terminal device 10, such as a hard disk or internal memory of the terminal device 10. The memory 101 may also be an external storage device of the terminal device 10, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device 10. Further, the memory 101 may include both an internal storage unit and an external storage device of the terminal device 10. The memory 101 is used to store the computer program as well as other programs and data required by the terminal device. The memory 101 may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, each embodiment is described with its own emphasis; for parts not described or detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for livestock retrieval, said method being applied to a first robot, comprising:
acquiring a search instruction, and searching for livestock in a target area according to the search instruction;
if a lost livestock is identified in the livestock searching process, sending a target position to instruct a second robot to go to the target position, wherein the target position is a position where the first robot identifies the lost livestock;
and if it is detected that the second robot has reached the target position, cooperating with the second robot to drive the lost livestock to a destination.
2. The livestock retrieval method of claim 1, wherein, if the first robot is a designated monitoring robot, said cooperating with the second robot to drive the lost livestock to a destination comprises:
detecting motion information of the lost livestock in real time, and, according to the motion information of the lost livestock, instructing or cooperating with the second robot to drive the lost livestock to the destination.
3. The livestock retrieval method of claim 1, wherein said cooperating with the second robot to drive the lost livestock to a destination comprises:
instructing or cooperating with the second robot to drive the lost livestock to the destination by maintaining a preset formation around the lost livestock and moving towards the destination.
4. The livestock retrieval method of claim 1, wherein said sending a target position to instruct a second robot to go to the target position if a lost livestock is identified in the livestock searching process comprises:
if a lost livestock is identified in the livestock searching process, sending a target position to a server to instruct the server to inform the second robot to go to the target position;
correspondingly, said driving the lost livestock to a destination in cooperation with the second robot comprises:
acquiring an action instruction which is sent by the server and corresponds to a target action;
and executing the target action according to the action instruction so as to realize the cooperation with the second robot and drive the lost livestock to the destination.
5. The livestock retrieval method of any one of claims 1 to 4, wherein said acquiring a search instruction and searching for livestock in a target area according to the search instruction comprises:
acquiring the search instruction, moving along a preset route in the target area according to the search instruction, and searching for livestock through infrared sensing and/or an odor search algorithm.
6. A livestock retrieval method, said method being applied to a server, comprising:
acquiring a target position sent by a first robot, wherein the target position is a position where the first robot identifies a lost livestock;
notifying a second robot to go to the target position;
after it is detected that the second robot has reached the target position, planning a current target behavior and determining a target action corresponding to the target behavior according to first state characteristic information and a destination; wherein the first state characteristic information is the state characteristic information before the current target action is executed, the state characteristic information at least comprises motion information of the first robot, the second robot and the lost livestock, and the target action comprises a formation keeping action, a driving action and an obstacle avoidance action;
and sending action instructions corresponding to the target actions to the first robot and the second robot so as to instruct the first robot and the second robot to execute the corresponding target actions to cooperatively drive the lost livestock.
7. The livestock retrieval method of claim 6, wherein said planning a current target behavior and determining a target action corresponding to the target behavior according to the first state characteristic information and the destination comprises:
inputting the current first state characteristic information and the destination information into a preset neural network for processing to obtain a target behavior and a corresponding target action;
correspondingly, after said sending action instructions corresponding to the target action to the first robot and the second robot to instruct the first robot and the second robot to cooperatively drive the lost livestock, the method further comprises:
if it is detected that the lost livestock has reached the destination, completing the livestock retrieval task; otherwise, performing the following steps:
acquiring second state characteristic information obtained after the first robot and the second robot execute the target action;
determining current reward information according to the first state characteristic information, the second state characteristic information and a target reward function;
and updating the preset neural network according to the reward information, taking the second state characteristic information as the updated first state characteristic information, and returning to the step of inputting the current first state characteristic information and the destination information into the preset neural network for processing to obtain the target behavior and the corresponding target action, and the subsequent steps.
8. A robot, comprising:
a search instruction acquisition unit, configured to acquire a search instruction and search for livestock in a target area according to the search instruction;
a target position sending unit, configured to, if a lost livestock is identified in the livestock searching process, send a target position to instruct other robots to go to the target position, wherein the target position is a position where the robot identifies the lost livestock;
a first cooperation unit, configured to, if the other robots are detected to have reached the target position, cooperate with the other robots to drive the lost livestock to a destination.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the computer program, when executed by the processor, causes the terminal device to carry out the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes a terminal device to carry out the steps of the method according to any one of claims 1 to 7.
CN202010358932.9A 2020-04-29 2020-04-29 Livestock retrieval method, robot, terminal equipment and storage medium Active CN111604898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010358932.9A CN111604898B (en) 2020-04-29 2020-04-29 Livestock retrieval method, robot, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111604898A (en) 2020-09-01
CN111604898B CN111604898B (en) 2021-08-24

Family

ID=72194354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010358932.9A Active CN111604898B (en) 2020-04-29 2020-04-29 Livestock retrieval method, robot, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111604898B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105182973A (en) * 2015-09-08 2015-12-23 郑州大学 Self-adaptive hunting device using multiple robot pursuers to hunt single moving target and method
CN105425815A (en) * 2015-11-27 2016-03-23 杨珊珊 Intelligent pasture management system and method by using unmanned aerial vehicle
CN107291102A (en) * 2017-07-31 2017-10-24 内蒙古智牧溯源技术开发有限公司 A kind of unmanned plane grazing system
CN108375379A (en) * 2018-02-01 2018-08-07 上海理工大学 The fast path planing method and mobile robot of dual DQN based on variation
CN109034380A (en) * 2018-06-08 2018-12-18 四川斐讯信息技术有限公司 A kind of distributed image identification system and its method
CN109960272A (en) * 2017-12-22 2019-07-02 翔升(上海)电子技术有限公司 Grazing method and system based on unmanned plane
JP2019208470A (en) * 2018-06-07 2019-12-12 ソフトバンク株式会社 Grazing livestock monitoring system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112034862A (en) * 2020-09-17 2020-12-04 成都卓越纵联科技有限公司 Machine and system for automatically searching for cordyceps sinensis on plateau
CN112580482A (en) * 2020-12-14 2021-03-30 深圳优地科技有限公司 Animal monitoring method, terminal and storage medium
CN113146624A (en) * 2021-03-25 2021-07-23 重庆大学 Multi-agent control method based on maximum angle aggregation strategy
CN113146624B (en) * 2021-03-25 2022-04-29 重庆大学 Multi-agent control method based on maximum angle aggregation strategy
CN114955455A (en) * 2022-06-14 2022-08-30 乐聚(深圳)机器人技术有限公司 Robot control method, server, robot, and storage medium
CN114955455B (en) * 2022-06-14 2024-06-11 乐聚(深圳)机器人技术有限公司 Robot control method, server, robot, and storage medium

Also Published As

Publication number Publication date
CN111604898B (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN111604898B (en) Livestock retrieval method, robot, terminal equipment and storage medium
US10102429B2 (en) Systems and methods for capturing images and annotating the captured images with information
US11714423B2 (en) Voxel based ground plane estimation and object segmentation
CN106774345B (en) Method and equipment for multi-robot cooperation
EP3384360B1 (en) Simultaneous mapping and planning by a robot
JP2021089724A (en) 3d auto-labeling with structural and physical constraints
CN109109863B (en) Intelligent device and control method and device thereof
Premebida et al. Intelligent robotic perception systems
US20170072563A1 (en) Using object observations of mobile robots to generate a spatio-temporal object inventory, and using the inventory to determine monitoring parameters for the mobile robots
KR102043142B1 (en) Method and apparatus for learning artificial neural network for driving control of automated guided vehicle
CN111015656A (en) Control method and device for robot to actively avoid obstacle and storage medium
EP4180895B1 (en) Autonomous mobile robots for coverage path planning
CN112356027B (en) Obstacle avoidance method and device for agriculture and forestry robot, computer equipment and storage medium
WO2024146339A1 (en) Path planning method and apparatus, and crane
US12011837B2 (en) Intent based control of a robotic device
US20220012494A1 (en) Intelligent multi-visual camera system and method
CN113907663A (en) Obstacle map construction method, cleaning robot and storage medium
CN114091515A (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN114812539A (en) Map search method, map using method, map searching device, map using device, robot and storage medium
CN113125795A (en) Obstacle speed detection method, device, equipment and storage medium
CN112162561A (en) Map construction optimization method, device, medium and equipment
CN112987713A (en) Control method and device for automatic driving equipment and storage medium
JP7416219B2 (en) Video distribution device, video distribution method and program
WO2019202878A1 (en) Recording medium, information processing apparatus, and information processing method
CN116409565A (en) Robot positioning method, apparatus, scheduling device, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Unit 7-11, 6th Floor, Building B2, No. 999-8 Gaolang East Road, Wuxi Economic Development Zone, Wuxi City, Jiangsu Province, China 214000

Patentee after: Youdi Robot (Wuxi) Co.,Ltd.

Country or region after: China

Address before: 5D, Building 1, Tingwei Industrial Park, No. 6 Liufang Road, Xingdong Community, Xin'an Street, Bao'an District, Shenzhen City, Guangdong Province

Patentee before: UDITECH Co.,Ltd.

Country or region before: China