CN113894788A - Visual servo method and visual servo system - Google Patents

Visual servo method and visual servo system

Info

Publication number: CN113894788A (granted as CN113894788B)
Application number: CN202111292814.3A
Authority: CN (China)
Prior art keywords: visual, information, processing, servo, MEC
Legal status: Granted; currently active
Other languages: Chinese (zh)
Inventors: Li Xijin (李希金), Li Hongwu (李红五), An Gang (安岗)
Assignee (current and original): China United Network Communications Group Co Ltd
Application filed by China United Network Communications Group Co Ltd, with priority to CN202111292814.3A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems

Abstract

The invention discloses a visual servo method and a visual servo system. The method comprises: receiving, through a mobile edge computing (MEC) system, vision-related information broadcast by distributed acquisition devices within a predetermined area over a machine-to-machine (M2M) channel, wherein the M2M channel is a broadcast-based channel previously deployed in the communication network; performing visual processing on the received vision-related information through MEC servo middleware to obtain first visual feedback information, and processing the first visual feedback information according to an implementation strategy of a predetermined task to obtain first behavior control information; and broadcasting the first behavior control information over the M2M channel so that the robot devices in the predetermined area execute the actions corresponding to the first behavior control information, thereby servo-controlling the robot devices. The method improves the performance and speed of visual processing in the visual servo control system.

Description

Visual servo method and visual servo system
Technical Field
The invention relates to the technical fields of communications and visual servoing, and in particular to a visual servo method and a visual servo system.
Background
Visual servoing refers to automatically acquiring and processing images of real objects through optical devices and non-contact sensors, and using the information fed back by the images to further control a machine or make corresponding adaptive adjustments.
With the rapid development of robotics, the tasks undertaken by robots have become more complex and diverse, and traditional detection means often suffer from a limited detection range and a single detection modality. Visual servo control uses vision-related information as feedback to perform non-contact measurement of the environment; it carries a large amount of information, improves the flexibility and accuracy of the robot system, and plays an increasingly important role in robot control. There is therefore a need to improve the performance and speed of visual processing in visual servo control systems.
Disclosure of Invention
Therefore, the invention provides a visual servo method and a visual servo system, aiming to solve the problem in the prior art of how to improve the performance and speed of visual processing.
In order to achieve the above object, a first aspect of the present invention provides a visual servo method, comprising: receiving, through a mobile edge computing (MEC) system, vision-related information broadcast by distributed acquisition devices within a predetermined area over a machine-to-machine (M2M) channel, wherein the M2M channel is a broadcast-based channel previously deployed in a communication network; performing visual processing on the received vision-related information through MEC servo middleware to obtain first visual feedback information, and processing the first visual feedback information according to an implementation strategy of a predetermined task to obtain first behavior control information; and broadcasting the first behavior control information over the M2M channel so that the robot devices within the predetermined area execute the actions corresponding to the first behavior control information, thereby servo-controlling the robot devices.
Wherein, before the MEC system receives the vision-related information broadcast by the distributed acquisition devices within the predetermined area over the machine-to-machine M2M channel, the method further comprises: receiving task information of the predetermined task through the MEC servo middleware, generating task requirement information corresponding to the task information, and generating a robot implementation strategy corresponding to the task requirement information.
Wherein, in the case that the predetermined area is a preset simple-scene field area, the method further comprises: receiving the vision-related information through at least one front-end processing server deployed within the predetermined area; performing visual processing on the vision-related information according to a visual processing algorithm acquired in advance from the MEC system to obtain second visual feedback information, wherein the visual processing algorithms used by the plurality of front-end processing servers differ from one another; processing the second visual feedback information according to an implementation strategy of the predetermined task acquired in advance from the MEC system to obtain second behavior control information; and broadcasting, by the front-end processing server, the second behavior control information over the M2M channel so that the robot devices within the predetermined area execute the actions corresponding to the second behavior control information.
Wherein the vision-related information comprises the visual information collected by each distributed acquisition device, and the visual processing of the received vision-related information through the MEC servo middleware to obtain the first visual feedback information includes: acquiring the visual processing instance in the MEC system associated with each distributed acquisition device; in the visual processing instance associated with each acquisition device, performing visual processing through the servo middleware on the visual information acquired by the associated acquisition device to obtain the visual feedback information corresponding to each acquisition device; and generating the first visual feedback information from the visual feedback information corresponding to each acquisition device.
Wherein, in the case that the total number of distributed acquisition devices is greater than a predetermined dense-device number threshold, the vision-related information comprises the visual feature information corresponding to each distributed acquisition device, obtained through a first processing flow. The first processing flow comprises: performing feature extraction, through the front-end processing module corresponding to each acquisition device, on the visual information acquired by that device to obtain the visual feature information corresponding to each acquisition device, wherein the front-end processing module is a processing module provided by at least one front-end processing server deployed in the predetermined area and used for feature extraction of visual information; and broadcasting, by each acquisition device, its corresponding visual feature information over the M2M channel.
Wherein the visual processing of the received vision-related information through the MEC servo middleware to obtain the first visual feedback information comprises: performing feature-based fusion processing on the visual feature information corresponding to each acquisition device through the MEC servo middleware, using task offloading in the MEC system, to obtain fused visual feature information; and processing the fused visual feature information to obtain the first visual feedback information.
Alternatively, the processing of the first visual feedback information according to the implementation strategy of the predetermined task to obtain the first behavior control information comprises: processing, through the MEC servo middleware, the visual feature information corresponding to each acquisition device according to the implementation strategy of the predetermined task to obtain the behavior control information corresponding to each acquisition device; and performing decision-based fusion processing on the behavior control information corresponding to each acquisition device to obtain fused behavior control information, which serves as the first behavior control information.
Wherein the visual processing of the received vision-related information through the MEC servo middleware to obtain the first visual feedback information comprises: performing data-based fusion processing on the received vision-related information through the servo middleware to obtain fused vision-related information; and performing visual processing on the fused vision-related information to obtain the first visual feedback information.
Wherein the acquisition devices comprise a first acquisition device deployed on the robot device and a second acquisition device deployed in the field environment of the predetermined area; the vision-related information comprises local visual data acquired in an eye-in-hand manner by the first acquisition device and overall visual data acquired in an eye-to-hand manner by the second acquisition device. In the case that the implementation strategy of the predetermined task is a robot autonomous control strategy, broadcasting the first behavior control information over the M2M channel so that the robot devices within the predetermined area execute the actions corresponding to the first behavior control information comprises: selecting a machine learning algorithm, through the MEC servo middleware, from a machine learning algorithm library configured in advance in the MEC system, and performing data fusion on the overall visual data and the local visual data with the selected algorithm to obtain fused visual data; positioning the robot device according to the fused visual data using a simultaneous localization and mapping algorithm acquired in advance from the MEC system, and building a map according to the positioning result; and broadcasting the first behavior control information and the constructed map to the robot device over the M2M channel so that the robot device executes the actions corresponding to the first behavior control information while navigating autonomously according to the constructed map.
Wherein the acquisition devices comprise distributed instruction acquisition devices. In the case that the implementation strategy of the predetermined task is a human-machine cooperation strategy, the method further includes: receiving, through the MEC system, control instruction information collected by the distributed instruction acquisition devices and broadcast over the M2M channel. The processing of the first visual feedback information according to the implementation strategy of the predetermined task to obtain the first behavior control information comprises: processing the control instruction information and the first visual feedback information according to a control servo strategy acquired in advance from the MEC system to obtain the first behavior control information. Further, in the case that a robot device within the predetermined area executes the action corresponding to the first behavior control information, the method further includes: the robot device broadcasts the action execution result to a predetermined operating device over the M2M channel, and the working state of the robot device is displayed on the operating device.
Wherein the acquisition devices comprise image acquisition devices and predetermined sensors; in the case that the vision-related information comprises the visual information collected by each distributed acquisition device, the visual information includes image information collected by the image acquisition devices and sensing information collected by the sensors.
The invention has the following advantages: relying on the 5G architecture and an efficient B-M2M broadcasting system, a distributed visual servo architecture is constructed using distributed visual acquisition technology, effectively overcoming the problems of existing single-source visual servo systems, eliminating uncertainty, and obtaining more reliable and accurate results. Relying on the large MEC coverage and strong data processing capability in 5G, servo middleware constructed in the MEC realizes a dynamic mapping between the various implementation strategies of visual processing and robot motion, achieving a more flexible visual servo mechanism and improving the performance and speed of visual processing in the visual servo control system. The diversity of robot functions and application scenarios is greatly expanded, which has positive significance for advancing the development of complex robots and enriching the technical ecology of the 5G network and B-M2M.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention; they do not limit the invention.
FIG. 1 is a flowchart of a visual servo method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a visual servo system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a visual servo architecture system according to an embodiment of the present invention;
FIG. 4 is a flowchart of a visual servo method according to an exemplary embodiment of the present invention.
Detailed Description
The following describes embodiments of the invention in detail with reference to the accompanying drawings. It should be understood that the detailed description and specific examples are given by way of illustration and explanation only, and are not intended to limit the invention.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
When the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Embodiments of the invention may be described with reference to plan and/or cross-sectional views that are idealized schematic illustrations of the invention. Accordingly, the example illustrations may be modified in accordance with manufacturing techniques and/or tolerances.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present invention and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In some application scenarios, visual servo control systems can be classified in several ways. By the number of cameras, they can be divided into monocular, binocular, and multi-camera visual servo systems; by camera position, into hand-eye systems (eye-in-hand) and fixed-camera systems (eye-to-hand, or stand-alone); and according to whether feedback is defined by the spatial position of the robot or by image features, into position-based and image-based visual servo systems. In some embodiments, the hand-eye system may be referred to as an eye-in-hand system and the fixed camera system as an eye-to-hand system.
A conventional visual servo system typically consists of a vision system, a control strategy, and an executing robot system. The system first generates an implementation strategy according to the task requirements; during execution of the strategy, the vision system continuously processes the acquired images to generate visual feedback information, which the controller converts into outputs that drive the robot to complete the corresponding actions; this loop repeats until the task is completed.
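The closed loop just described can be summarized in a short sketch. The sketch below is illustrative only: the task, camera, controller, and robot objects and their methods are hypothetical interfaces, not components defined by the patent.

```python
# Minimal sketch of the classical visual servo loop described above.
# All objects and methods here are hypothetical placeholder interfaces.

def visual_servo_loop(task, camera, controller, robot):
    strategy = controller.generate_strategy(task)    # strategy from task requirements
    while not task.completed():
        image = camera.capture_image()               # acquire the current image
        feedback = strategy.process(image)           # visual feedback information
        command = controller.compute_command(feedback)
        robot.execute(command)                       # robot performs the action
```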
In some embodiments, the fifth generation mobile communication technology (5G) not only provides high-speed, low-latency data transmission, offering a reliable communication basis for distributed architectures; in addition, mobile edge computing (MEC) at the base station sits close to the user side, so user data can be delivered directly to the MEC without traversing the entire network, and the MEC itself has strong data processing and storage capabilities. These characteristics of the 5G network provide a good foundation for building new architectures for a variety of applications. Combining 5G with traditional visual servo technology also opens new development space for operators: making full use of 5G network resources, an operator can provide not only data transmission services but also various value-added services based on intelligent robot manufacturing, give full play to the potential of 5G, build differentiated value-added service strategies and new operating models, help manufacturers upgrade to intelligent manufacturing, reduce the cost of that upgrade, and give enterprises greater development prospects and competitiveness.
Existing visual servo control systems face at least one of the following problems: the gap between the speed required for image processing and the processing speed actually available is a key difficulty of image servoing; a single video source, affected by environment, illumination, and background, cannot keep the system stable over a wide operating range; and after image processing is completed, the strategy for converting image features into robot motion cannot be flexibly adapted to different tasks, so the visual servo function is single-purpose and hard to upgrade.
In a first aspect, an embodiment of the present invention provides a visual servoing method.
Fig. 1 is a flow chart illustrating a visual servoing method according to an embodiment of the present invention. As shown in fig. 1, the visual servoing method may include the following steps.
S110: receiving, through a mobile edge computing (MEC) system, vision-related information broadcast by distributed acquisition devices within a predetermined area over a machine-to-machine (M2M) channel, wherein the M2M channel is a broadcast-based channel previously deployed in the communication network;
S120: performing visual processing on the received vision-related information through the MEC servo middleware to obtain first visual feedback information, and processing the first visual feedback information according to an implementation strategy of a predetermined task to obtain first behavior control information;
S130: broadcasting the first behavior control information over the M2M channel so that the robot devices in the predetermined area execute the actions corresponding to the first behavior control information, thereby servo-controlling the robot devices.
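For illustration, steps S110 to S130 can be sketched as a single MEC-side cycle. The channel, middleware, and strategy interfaces below are assumptions made for the example, not APIs defined by the patent.

```python
# Hedged sketch of steps S110-S130 on the MEC side; interfaces assumed.

def mec_servo_cycle(m2m_channel, servo_middleware, task_strategy):
    # S110: receive vision-related information broadcast by the distributed
    # acquisition devices over the broadcast-based M2M channel
    visual_info = m2m_channel.receive_broadcast(topic="visual")

    # S120: visual processing yields the first visual feedback information,
    # which the task implementation strategy turns into behavior control
    feedback = servo_middleware.visual_process(visual_info)
    behavior_control = task_strategy.apply(feedback)

    # S130: broadcast the first behavior control information so the robot
    # devices in the area execute the corresponding actions
    m2m_channel.broadcast(topic="control", payload=behavior_control)
```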
The visual servo method provided by the embodiment of the invention is based on a distributed visual servo architecture built on 5G and broadcast-based machine-to-machine (B-M2M) communication. Relying on the 5G architecture and an efficient B-M2M broadcasting system, a distributed visual servo architecture is constructed using distributed visual acquisition technology, effectively overcoming the problems of existing single-source visual servo systems, eliminating uncertainty, and obtaining more reliable and accurate results. Relying on the large MEC coverage and strong data processing capability in 5G, servo middleware constructed in the MEC realizes a dynamic mapping between the various implementation strategies of visual processing and robot motion, achieving a more flexible visual servo mechanism and improving the performance and speed of visual processing in the visual servo control system. The diversity of robot functions and application scenarios is greatly expanded, which has positive significance for advancing the development of complex robots and enriching the technical ecology of the 5G network and B-M2M.
In some embodiments, the acquisition devices comprise image acquisition devices and predetermined sensors; in the case that the vision-related information comprises the visual information collected by each distributed acquisition device, the visual information includes image information collected by the image acquisition devices and sensing information collected by the sensors.
In some embodiments, the middleware may be a server, or a piece of standalone system software or a service program; this is not specifically limited in the embodiments of the present invention.
In this embodiment, relying on the 5G architecture and an efficient B-M2M broadcasting system, a novel distributed visual servo architecture is constructed using distributed visual acquisition and multi-sensor fusion technologies, effectively overcoming the problems of existing single-source visual servo systems, eliminating uncertainty, and obtaining more reliable and accurate results.
In some embodiments, before step S110, the visual servoing method further includes: task information of a preset task is received through the MEC servo middleware, task requirement information corresponding to the task information is generated, and a robot implementation strategy corresponding to the task requirement information is generated.
In this embodiment, a specific task may be input to the servo middleware of the MEC in advance to generate a corresponding task requirement; and the servo middleware generates an implementation strategy of the visual servo system corresponding to the task requirement.
In some embodiments, in the case that the predetermined area is a preset simple-scene field area, the visual servo method further includes the following steps:
s11, receiving the visual-related information through at least one front-end processing server deployed within the predetermined area.
S12, performing visual processing on the vision-related information according to a visual processing algorithm acquired in advance from the MEC system to obtain second visual feedback information; wherein the visual processing algorithms used by the plurality of front-end processing servers differ from one another.
And S13, processing the second visual feedback information according to the implementation strategy of the preset task acquired in advance from the MEC system to obtain second behavior control information.
S14, broadcasting, by the front-end processing server, the second behavior control information through the M2M channel to cause the robot devices within the predetermined area to perform an action corresponding to the second behavior control information.
Through the above steps S11-S14, after a front-end camera captures an image, the image is broadcast over a B-M2M channel. For a simple scene (for example, a production scene with a simple, fixed background), the image is processed by one or more front-end processing servers deployed on site and equipped with B-M2M modules; they receive the images broadcast by the cameras in real time and broadcast the results after processing. Because a B-M2M broadcast channel is used, multiple front-end processing servers can adopt different visual processing strategies, which increases the performance and speed of visual processing.
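A front-end processing server in this simple-scene case might look like the following sketch; the MEC and channel interfaces are hypothetical, and the point is only that each server fetches its own (different) algorithm from the MEC and broadcasts its own result.

```python
# Illustrative front-end processing server for the simple-scene case
# (steps S11-S14); all interfaces are assumptions, not the patent's APIs.

class FrontEndServer:
    def __init__(self, m2m_channel, mec):
        self.channel = m2m_channel
        self.algorithm = mec.fetch_vision_algorithm()  # differs per server
        self.strategy = mec.fetch_task_strategy()      # implementation strategy

    def run_once(self):
        visual_info = self.channel.receive_broadcast(topic="visual")  # S11
        feedback = self.algorithm.process(visual_info)                # S12
        control = self.strategy.apply(feedback)                       # S13
        self.channel.broadcast(topic="control", payload=control)      # S14
```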
in some embodiments, the visually relevant information comprises: the step of visually processing the received visual related information through the MEC servo middleware in step S120 to obtain the first visual feedback information according to the visual information acquired by each distributed acquisition device may specifically include the following steps.
S21, acquiring the visual processing instance in the MEC system associated with each distributed acquisition device.
S22, in the visual processing instance associated with each acquisition device, performing visual processing through the servo middleware on the visual information acquired by the associated acquisition device to obtain the visual feedback information corresponding to each acquisition device.
S23, generating the first visual feedback information from the visual feedback information corresponding to each acquisition device.
Through steps S21-S23, after a front-end camera acquires an image, the image can be broadcast over a B-M2M channel; a visual processing instance is created in the MEC and associated with the camera; the image stream acquired by the camera is then processed in the associated instance, and the result is broadcast through a B-M2M module. A mapping between the front-end camera and the visual processing module in the MEC is thus established, completing the offloading of the front-end camera's visual processing task into the MEC. Owing to the powerful data processing capability of the MEC and the efficient broadcast regime of the B-M2M channel, a high-performance front-end vision system can be implemented in a software-defined manner. Software definition brings virtualization of hardware resources, platformization of system software, and diversification of applications, allowing vision systems with different functional and performance requirements to be constructed flexibly.
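The camera-to-instance mapping behind steps S21-S23 might be sketched as follows, with get_or_create_instance, visual_process, and merge_feedback as assumed helper interfaces:

```python
# Sketch of per-camera task offloading in the MEC (steps S21-S23);
# the instance and middleware interfaces are illustrative assumptions.

def process_distributed_streams(mec, cameras, servo_middleware):
    # S21: one visual processing instance per acquisition device
    instances = {cam.id: mec.get_or_create_instance(cam.id) for cam in cameras}

    # S22: each instance processes the stream of its associated camera
    per_camera_feedback = {
        cam.id: servo_middleware.visual_process(instances[cam.id], cam.stream())
        for cam in cameras
    }

    # S23: merge the per-camera results into the first visual feedback information
    return servo_middleware.merge_feedback(per_camera_feedback)
```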
In some embodiments, in the case that the total number of distributed acquisition devices is greater than the predetermined dense-device number threshold, the vision-related information includes the visual feature information corresponding to each distributed acquisition device, obtained through the first processing flow. The first processing flow includes: S31, performing feature extraction, through the front-end processing module corresponding to each acquisition device, on the visual information acquired by that device to obtain the visual feature information corresponding to each acquisition device, wherein the front-end processing module is provided by at least one front-end processing server deployed in the predetermined area and is used for feature extraction of visual information; and S32, broadcasting, by each acquisition device, its corresponding visual feature information over the M2M channel.
In some embodiments, the functionality of the front-end processing modules is provided by the front-end processing server; each front-end processing module may run in the robot device and/or the front-end processing server and may transmit the processing results to the distributed image acquisition devices over the B-M2M channel.
In this embodiment, after a front-end camera collects an image, the front-end processing module first performs image compression and image feature extraction to generate a feature description of the image, which is then broadcast through the B-M2M module, and the MEC completes the processing of the image feature description by task offloading. This approach retains the high performance and flexibility of MEC task offloading while greatly reducing the amount of image data broadcast over the B-M2M channel, making it suitable for application scenarios with dense nodes.
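As a concrete illustration of the dense-node flow, the front-end module below reduces each frame to a compact feature description before broadcasting, so only features rather than raw frames travel over the B-M2M channel. The ORB extractor (via OpenCV) is an illustrative choice, not a feature algorithm mandated by the patent, and the channel interface is assumed.

```python
# Sketch of steps S31-S32: front-end feature extraction and broadcast.
# ORB is an illustrative feature extractor, not mandated by the patent.

import cv2  # OpenCV, assumed available on the front-end processing server

def extract_and_broadcast(frame, camera_id, m2m_channel):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)   # S31
    m2m_channel.broadcast(                                      # S32
        topic=f"features/{camera_id}",
        payload={"keypoints": [kp.pt for kp in keypoints],
                 "descriptors": descriptors},
    )
```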
In some embodiments, the visual processing of the received vision-related information through the MEC servo middleware in step S120 to obtain the first visual feedback information may specifically include: S41, performing feature-based fusion processing on the visual feature information corresponding to each acquisition device through the MEC servo middleware, using task offloading in the MEC system, to obtain fused visual feature information; and S42, processing the fused visual feature information to obtain the first visual feedback information.
Alternatively, in some embodiments, processing the first visual feedback information in step S120 according to the implementation strategy of the predetermined task to obtain the first behavior control information includes: S43, processing, through the MEC servo middleware, the visual feature information corresponding to each acquisition device according to the implementation strategy of the predetermined task to obtain the behavior control information corresponding to each acquisition device; and S44, performing decision-based fusion processing on the behavior control information corresponding to each acquisition device to obtain fused behavior control information, which serves as the first behavior control information.
In the embodiment of the invention, visual processing mainly completes the visual feedback. The visual processing algorithm library in the MEC deploys a variety of algorithms, such as algorithms based on position, image features, and multi-view geometry, which the servo middleware flexibly invokes according to the specific task. Visual processing is strongly affected by image noise and object occlusion, and when scale and rotation changes occur, the repeatability and precision of feature extraction struggle to reach satisfactory levels. Therefore, building on the efficient B-M2M broadcasting system and the powerful processing capability of the MEC, this embodiment adopts a distributed cooperative visual processing strategy: multiple local and global visual acquisition points on site, combined with other sensor acquisition points such as radar, broadcast to the MEC through B-M2M, and the servo middleware processes the data cooperatively, enhancing the performance and robustness of the vision system.
In some embodiments, step S120 may specifically include: S51, performing data-based fusion processing on the received vision-related information through the servo middleware to obtain fused vision-related information; and performing visual processing on the fused vision-related information to obtain the first visual feedback information.
In the embodiment of the present invention, the fusion process may be expressed as the following expression (1):

    s*(t) = F( ŝ(t), m_1(t), m_2(t), ..., m_N(t); a )    (1)

where s* denotes the state of the target, ŝ(t) denotes the current state, F is the fusion algorithm, m_i(t) is the image or sensor information acquired at acquisition point i, N is the total number of acquisition points, and a is a preset system parameter.
In the embodiment of the invention, under the distributed fusion control strategy, the servo middleware invokes different fusion strategies, such as pixel-based fusion, feature-based fusion, and decision-based fusion, according to the task characteristics (see the sketch after this paragraph). For example, in the node-dense application scenario described in steps S31 and S32, the target features of each piece of distributed information may be extracted for feature-based fusion, and the target feature quantity is then obtained through a fusion algorithm; this preserves a sufficient amount of effective target information while removing redundant information, improving accuracy and performance. The distributed control strategy differs from task to task, and the servo middleware can invoke different implementation algorithms, such as adaptive, robust, and intelligent algorithms. The servo middleware deployed in the MEC thus provides a flexible distributed fusion control strategy for a wide variety of tasks.
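The dispatch among fusion levels might look like the sketch below. The three implementations are deliberately simplistic stand-ins (frame averaging, feature concatenation, majority vote) chosen for illustration; they are not the patent's actual fusion algorithms.

```python
# Illustrative dispatch among pixel-, feature-, and decision-level fusion.

import numpy as np
from collections import Counter

def pixel_fusion(images):
    return np.mean(np.stack(images), axis=0)         # average aligned frames

def feature_fusion(feature_vectors):
    return np.concatenate(feature_vectors)           # stack per-device features

def decision_fusion(decisions):
    return Counter(decisions).most_common(1)[0][0]   # majority vote

FUSION_STRATEGIES = {
    "pixel": pixel_fusion,
    "feature": feature_fusion,
    "decision": decision_fusion,
}

def fuse(level, items):
    # the servo middleware chooses the level from the task characteristics
    return FUSION_STRATEGIES[level](items)
```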
In an embodiment of the invention, two types of robot implementation strategies are provided: a robot autonomous control strategy, and a human-machine cooperation strategy.
In some embodiments, the acquisition devices comprise a first acquisition device deployed on the robot device and a second acquisition device deployed in the field environment of the predetermined area; the vision-related information comprises local visual data acquired in an eye-in-hand manner by the first acquisition device and overall visual data acquired in an eye-to-hand manner by the second acquisition device.
In some embodiments, in the case that the implementation policy of the predetermined task is a robot autonomous control policy, the step S130 may specifically include the following steps.
S61, selecting a machine learning algorithm, through the MEC servo middleware, from a machine learning algorithm library configured in advance in the MEC system, and performing data fusion on the overall visual data and the local visual data with the selected algorithm to obtain fused visual data.
S62, positioning the robot device according to the fused visual data using a simultaneous localization and mapping algorithm acquired in advance from the MEC system, and building a map according to the positioning result.
S63, broadcasting the first behavior control information and the constructed map to the robot device over the M2M channel, so that the robot device executes the actions corresponding to the first behavior control information while navigating autonomously according to the constructed map.
In this embodiment, under the robot autonomous control strategy, the robot completes the task entirely according to visual feedback. For an industrial robot executing repetitive tasks in a local space, teaching and machine learning methods are adopted, and multiple machine learning algorithm libraries for distributed visual information are configured in the MEC for the servo middleware to invoke. The distributed visual information is collected into the MEC through the B-M2M channel; the distributed machine learning algorithms are based on the fusion of multi-dimensional, multi-angle, overall, and local visual data and, matched with the powerful data processing capability of the MEC, offer good performance and flexibility.
For a robot moving over a large range, a visual navigation part must be added. Cameras are installed on the mechanical arm (eye-in-hand) and in the surrounding environment (eye-to-hand), and the visual information is broadcast over a B-M2M channel. After the MEC receives this wide-range visual information through B-M2M, it uses the Simultaneous Localization and Mapping (SLAM) algorithm library in the MEC, whose function is to let the robot start moving from an unknown position in an unknown environment, localize itself during motion according to position estimates and the map, and simultaneously build an incremental map on that basis, thereby realizing the robot's self-localization and navigation.
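One possible shape of the autonomous-control flow (S61-S63 plus SLAM) is sketched below; the algorithm-library, SLAM, and channel interfaces are all assumptions made for the example.

```python
# Hedged sketch of autonomous control: fuse eye-to-hand (overall) and
# eye-in-hand (local) views, localize and map via SLAM, then broadcast.

def autonomous_control_step(mec, m2m_channel, global_view, local_view,
                            control_info):
    # S61: pick a machine learning algorithm and fuse global + local data
    fuser = mec.algorithm_library.select("visual_fusion")
    fused = fuser.fuse(global_view, local_view)

    # S62: localize the robot from the fused data and extend the map (SLAM)
    slam = mec.algorithm_library.select("slam")
    pose = slam.localize(fused)
    world_map = slam.update_map(pose, fused)

    # S63: broadcast behavior control information plus the map so the robot
    # can act while navigating autonomously
    m2m_channel.broadcast(topic="control",
                          payload={"control": control_info, "map": world_map})
```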
In some embodiments, the acquisition devices comprise distributed instruction acquisition devices; in the case that the implementation strategy of the predetermined task is a human-machine cooperation strategy, the visual servo method further includes: receiving, through the MEC system, control instruction information collected by the distributed instruction acquisition devices and broadcast over the M2M channel.
In this embodiment, processing the first visual feedback information in S120 according to the implementation strategy of the predetermined task to obtain the first behavior control information may specifically include: S71, processing the control instruction information and the first visual feedback information according to a control servo strategy acquired in advance from the MEC system to obtain the first behavior control information. Further, when a robot device in the predetermined area executes the action corresponding to the first behavior control information, the method further includes: S72, the robot device broadcasts the action execution result to a predetermined operating device over the M2M channel, and the working state of the robot device is displayed on the operating device.
In this embodiment, a human-machine cooperation strategy is adopted for complex tasks that are difficult to describe mathematically and require intelligent decisions during execution. A person generates control commands through voice, gestures, a manipulator, and so on, and these are broadcast to the MEC through B-M2M. The human-machine cooperation part of the servo middleware in the MEC invokes a control servo strategy, generates control commands according to the visual information, and broadcasts them to the robot through B-M2M, controlling the robot to complete the specified cooperative actions. The execution result is broadcast through B-M2M to the person's operating equipment to display the robot's real-time working state; based on that state, the person can perform human-machine switching and add operating commands such as robot motion constraints, which are broadcast over the B-M2M channel to each robot or the relevant parts of a robot so that the person's work is completed better. An example application scenario is a surgeon cooperating with several surgical robots of different functions during an operation.
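The human-machine cooperation path can be sketched as a round trip over the broadcast channel; the topics and strategy interface below are illustrative assumptions.

```python
# Sketch of the human-machine cooperation loop (S71-S72); interfaces assumed.

def cooperation_step(m2m_channel, control_servo_strategy):
    # human commands (voice, gesture, manipulator) arrive over B-M2M
    command = m2m_channel.receive_broadcast(topic="operator_command")
    feedback = m2m_channel.receive_broadcast(topic="visual_feedback")

    # S71: combine the control instruction with the first visual feedback
    behavior_control = control_servo_strategy.apply(command, feedback)
    m2m_channel.broadcast(topic="control", payload=behavior_control)

    # S72: relay the robot's execution result so the operator device can
    # display the real-time working state
    result = m2m_channel.receive_broadcast(topic="execution_result")
    m2m_channel.broadcast(topic="operator_display", payload=result)
```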
According to the visual servo method provided by the embodiment of the invention, relying on the 5G architecture and an efficient B-M2M broadcasting system, a novel distributed visual servo framework is constructed using distributed visual acquisition and multi-sensor fusion technologies, effectively overcoming the problems of existing single-source visual servo systems, eliminating uncertainty, and obtaining more reliable and accurate results. Relying on the large MEC coverage and strong data processing capability of 5G, servo middleware constructed in the MEC realizes dynamic mappings between the various strategies and robot motion, achieving a more flexible visual servo mechanism, greatly expanding the diversity of robot functions and application scenarios, and having positive significance for advancing the development of complex robots and enriching the technical ecology of the 5G network and B-M2M.
FIG. 2 is a schematic block diagram of a visual servo system provided by an embodiment of the present invention. As shown in FIG. 2, the visual servo system 200 may include the following modules.
A receiving module 210, configured to receive, through the mobile edge computing (MEC) system, vision-related information broadcast by distributed acquisition devices within a predetermined area over a machine-to-machine (M2M) channel, wherein the M2M channel is a broadcast-based channel previously deployed in a communication network.
The processing module 220 is configured to perform visual processing on the received visual related information through the MEC servo middleware to obtain first visual feedback information, and process the first visual feedback information according to an implementation policy of a predetermined task to obtain first behavior control information.
And a control module 230, configured to broadcast the first behavior control information through an M2M channel, so that the robot apparatus in the predetermined area performs an action corresponding to the first behavior control information, thereby performing servo control on the robot apparatus.
In some embodiments, the acquisition devices comprise image acquisition devices and predetermined sensors; in the case that the vision-related information comprises the visual information collected by each distributed acquisition device, the visual information includes image information collected by the image acquisition devices and sensing information collected by the sensors.
In some embodiments, the vision servo system 200 further comprises: the demand generation module is used for receiving task information of a preset task through the MEC servo middleware before receiving visual related information broadcasted by distributed acquisition equipment in a preset area through a machine-to-machine M2M channel through the mobile edge computing MEC system, and generating task demand information corresponding to the task information; and the strategy generating module is used for generating a robot implementation strategy corresponding to the task requirement information.
In some embodiments, in the case that the predetermined area is a preset simple-scene field area, the visual servo system 200 further includes at least one front-end processing server deployed within the predetermined area, configured to: receive the vision-related information; perform visual processing on the vision-related information according to a visual processing algorithm acquired in advance from the MEC system to obtain second visual feedback information, the visual processing algorithms used by the front-end processing servers differing from one another; process the second visual feedback information according to an implementation strategy of the predetermined task acquired in advance from the MEC system to obtain second behavior control information; and broadcast the second behavior control information over the M2M channel so that the robot devices within the predetermined area execute the actions corresponding to the second behavior control information.
In some embodiments, the vision-related information comprises the visual information collected by each distributed acquisition device. The processing module 220, when configured to perform visual processing on the received vision-related information through the MEC servo middleware to obtain the first visual feedback information, may specifically be configured to: acquire the visual processing instance associated with each acquisition device in the MEC system; in the visual processing instance associated with each acquisition device, perform visual processing through the servo middleware on the visual information acquired by the associated acquisition device to obtain the visual feedback information corresponding to each acquisition device; and generate the first visual feedback information from the visual feedback information corresponding to each acquisition device.
In some embodiments, in the case that the total number of distributed acquisition devices is greater than the predetermined dense-device number threshold, the vision-related information includes the visual feature information corresponding to each distributed acquisition device, obtained through the first processing flow.
In this embodiment, the first processing flow includes: performing feature extraction, through the front-end processing module corresponding to each acquisition device, on the visual information acquired by that device to obtain the visual feature information corresponding to each acquisition device, wherein the front-end processing module is provided by at least one front-end processing server deployed in the predetermined area and is used for feature extraction of visual information; and broadcasting, by each acquisition device, its corresponding visual feature information over the M2M channel.
In some embodiments, the processing module 220, when configured to perform visual processing on the received vision-related information through the MEC servo middleware to obtain the first visual feedback information, may specifically be configured to: perform feature-based fusion processing on the visual feature information corresponding to each acquisition device through the MEC servo middleware, using task offloading in the MEC system, to obtain fused visual feature information; and process the fused visual feature information to obtain the first visual feedback information.
Alternatively, in some embodiments, the processing module 220, when configured to process the first visual feedback information according to the implementation strategy of the predetermined task to obtain the first behavior control information, may specifically be configured to: process, through the MEC servo middleware, the visual feature information corresponding to each acquisition device according to the implementation strategy of the predetermined task to obtain the behavior control information corresponding to each acquisition device; and perform decision-based fusion processing on the behavior control information corresponding to each acquisition device to obtain fused behavior control information, which serves as the first behavior control information.
In some embodiments, the processing module 220, when configured to perform visual processing on the received vision-related information through the MEC servo middleware to obtain the first visual feedback information, may specifically be configured to: perform data-based fusion processing on the received vision-related information through the servo middleware to obtain fused vision-related information; and perform visual processing on the fused vision-related information to obtain the first visual feedback information.
In some embodiments, the acquisition devices comprise a first acquisition device deployed on the robot device and a second acquisition device deployed in the field environment of the predetermined area; the vision-related information comprises local visual data acquired in an eye-in-hand manner by the first acquisition device and overall visual data acquired in an eye-to-hand manner by the second acquisition device.
In this embodiment, in the case that the implementation strategy of the predetermined task is a robot autonomous control strategy, the control module 230 may specifically include: a data fusion unit, configured to select a machine learning algorithm, through the MEC servo middleware, from a machine learning algorithm library configured in advance in the MEC system, and perform data fusion on the overall visual data and the local visual data with the selected algorithm to obtain fused visual data; a positioning and mapping unit, configured to position the robot device according to the fused visual data using a simultaneous localization and mapping algorithm acquired in advance from the MEC system, and build a map according to the positioning result; and a broadcasting unit, configured to broadcast the first behavior control information and the constructed map to the robot device over the M2M channel, so that the robot device executes the actions corresponding to the first behavior control information while navigating autonomously according to the constructed map.
In some embodiments, the acquisition devices comprise distributed instruction acquisition devices. In the case that the implementation strategy of the predetermined task is a human-machine cooperation strategy, the receiving module 210 is further configured to receive, through the MEC system, control instruction information collected by the distributed instruction acquisition devices and broadcast over the M2M channel. The processing module 220, when configured to process the first visual feedback information according to the implementation strategy of the predetermined task to obtain the first behavior control information, may specifically be configured to process the control instruction information and the first visual feedback information according to a control servo strategy acquired in advance from the MEC system to obtain the first behavior control information. Further, in the case that a robot device in the predetermined area executes the action corresponding to the first behavior control information, the visual servo system 200 may further include an execution result broadcasting module, configured to have the robot device broadcast the action execution result to a predetermined operating device over the M2M channel and to display the working state of the robot device on the operating device.
The visual servo system 200 provided by the embodiment of the invention is based on a distributed visual servo architecture built on 5G and B-M2M. Relying on the 5G architecture and an efficient B-M2M broadcasting system, a novel distributed visual servo architecture is constructed using distributed visual acquisition and multi-sensor fusion technologies, effectively overcoming the problems of existing single-source visual servo systems, eliminating uncertainty, and obtaining more reliable and accurate results. Relying on the large MEC coverage and strong data processing capability of 5G, servo middleware constructed in the MEC realizes dynamic mappings between the various strategies and robot motion, achieving a more flexible visual servo mechanism, greatly expanding the diversity of robot functions and application scenarios, and having positive significance for advancing the development of complex robots and enriching the technical ecology of the 5G network and B-M2M.
The steps of the above methods are divided only for clarity of description. In implementation, steps may be combined into one, or a single step may be split into multiple steps, as long as the same logical relationship is preserved; such variants are within the protection scope of this patent. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes without altering the core design of the algorithm or process, also falls within the protection scope of the patent.
FIG. 3 is a schematic diagram of a visual servo architecture system according to an exemplary embodiment of the present invention. In some embodiments, the visual servo architecture system 300 of the present invention may comprise: a distributed visual servo subsystem 310, a wireless broadcast network subsystem 320, and a servo middleware framework subsystem 330.
In some embodiments, the distributed visual servo subsystem 310 is used to accomplish image acquisition and visual processing. Illustratively, the distributed visual servo subsystem 310 may consist of cameras, front-end processing modules, B-M2M channels, front-end processing servers, and an MEC visual processing task offload module.
Illustratively, the distributed visual servoing subsystem 310 may be used to perform the visual servoing method described above in connection with FIG. 1.
In some embodiments, the wireless broadcast network subsystem 320 is a B-M2M-based network architecture. Using a licensed 5G frequency band, the B-M2M architecture can dynamically partition a dedicated band within the coverage of an industrial field base station and deploy broadcast channels in a time-division manner. All nodes in the network can receive all broadcast time slots, and a terminal can dynamically select an idle time slot to transmit broadcast information, thereby realizing broadcast transmission and reception among all devices; a dedicated control time slot is also configured.
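A toy model of this channel behavior is sketched below: a pool of time slots in a dedicated band, every node able to read all occupied slots, and a sender dynamically picking an idle slot. This is a conceptual illustration, not the actual radio protocol.

```python
# Toy model of B-M2M idle-slot broadcasting; purely illustrative.

import random

class BroadcastChannelPool:
    def __init__(self, num_slots):
        self.slots = [None] * num_slots           # None marks an idle slot

    def send(self, node_id, payload):
        idle = [i for i, s in enumerate(self.slots) if s is None]
        if not idle:
            raise RuntimeError("no idle broadcast slot available")
        slot = random.choice(idle)                # dynamic idle-slot selection
        self.slots[slot] = (node_id, payload)
        return slot

    def receive_all(self):
        # every node can receive all occupied broadcast slots
        return [s for s in self.slots if s is not None]
```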
In some embodiments, the wireless broadcast network subsystem 320 may include: device nodes, a common broadcast channel resource pool, and a B-M2M management unit.
The device nodes have wireless broadcast transmitting and receiving functions and are installed at each core part of the industrial production equipment; all device nodes can receive all time slots of the common broadcast channel resource pool.
The common broadcast channel resource pool is managed, within the coverage of a base station, by the B-M2M management unit in that base station, and has contiguous frequency bands and time slots; the bandwidth and the number of time slots of the resource pool are dynamically adjusted by the B-M2M management unit according to the real-time broadcast intensity, ensuring that the broadcast transmission delay of each device node meets the quality requirements of the production site.
The B-M2M management unit is deployed in the base station and in the mobile edge computing platform. A B-M2M broadcast transmitting and receiving module is deployed in the access network (5G NG-RAN) of the 5G base station, with functions of broadcasting management, acknowledgment, and status information, managing the system, and receiving all time slots of the common broadcast channel resource pool. The mobile edge computing platform of the base station hosts the B-M2M management and control systems, as well as the operation of production application systems.
In some embodiments, the servo middleware framework subsystem 330 divides the servo middleware deployed in the MEC and the servo middleware on each node according to different functional requirements, providing the concrete functional implementation for the execution of the various strategies of this embodiment. The servo middleware uses the data processing, algorithm library management, B-M2M communication, message processing, process scheduling, and system management functions provided by the system to expose various universal functional interfaces that link the parts of a distributed servo application, or different applications, achieving resource sharing and function sharing.
In some embodiments, the servo middleware framework subsystem 330 may include: a distributed network communication management and interface module, a node normalization management module, an event and task management module, and a data management module.
The distributed network communication management and interface is responsible for communication among the services of each node. The distributed mode of this embodiment involves complex operations such as B-M2M communication scheduling for each node and remote offload execution in the MEC, and must handle data communication and exchange for a large number of nodes. Distributed network communication management here mainly completes inter-process communication, remote invocation, and indirect communication among different tasks within the MEC and between the MEC, the visual acquisition units, and the robot execution units on different nodes. Remote procedure call (RPC, Remote Procedure Call Protocol) implements communication in which a process on a node remotely calls the function processes in the MEC corresponding to that node's task offload, and remote method invocation (RMI) provides the application (APP) interface for such remote calls. Indirect communication coordinates the differences of distributed nodes in communication, processing speed, delay, and the like, and adopts a distributed message-queue mode: each node synchronously receives messages broadcast on the B-M2M channel, puts them into a local message queue, and the queue is then consumed by local processes. This method suits messages with small data volumes. For messages with large data volumes, a publish/subscribe mode is adopted, and the local message queue stores only those B-M2M broadcast messages whose topics the node has subscribed to.
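As a hedged illustration of this indirect-communication pattern, the following Python sketch shows a node that hears every B-M2M broadcast but queues only subscribed topics; all names are invented for the example.

```python
# Invented names throughout; a sketch of the indirect-communication pattern:
# every node hears every B-M2M broadcast, but in publish/subscribe mode only
# subscribed topics enter the node's local message queue.
import queue
from typing import Set, Tuple

class NodeMessageEndpoint:
    def __init__(self, node_id: str, subscribed_topics: Set[str]):
        self.node_id = node_id
        self.subscribed_topics = subscribed_topics
        self.local_queue: "queue.Queue[Tuple[str, bytes]]" = queue.Queue()

    def on_broadcast(self, topic: str, payload: bytes) -> None:
        """Called for every message heard on the B-M2M channel."""
        if topic in self.subscribed_topics:
            self.local_queue.put((topic, payload))  # decouples rx from processing

    def next_message(self) -> Tuple[str, bytes]:
        """Local processes consume the queue at their own pace."""
        return self.local_queue.get()

robot = NodeMessageEndpoint("robot-01", {"behavior-control"})
robot.on_broadcast("visual-feature", b"...")     # ignored: not subscribed
robot.on_broadcast("behavior-control", b"move")  # queued
print(robot.next_message())                      # ('behavior-control', b'move')
```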
Node normalization management covers the various node types, such as visual information acquisition nodes, visual processing nodes, and visual execution nodes. The servo middleware abstracts the attributes and functions of these nodes and assigns each node a unique identifier, so that nodes of different forms are normalized into uniform-format functions and attributes under uniform identifiers, which makes them convenient for all kinds of tasks to access.
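A minimal sketch of such normalization might look as follows; the field names and node types are assumptions chosen to mirror the description rather than a definitive schema:

```python
# A minimal normalization sketch; field names and node types are assumptions
# chosen to mirror the description, not a schema from the disclosure.
import uuid
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NormalizedNode:
    node_type: str                 # e.g. "acquisition", "processing", "execution"
    functions: List[str]           # abstracted capabilities, uniform format
    attributes: Dict[str, object] = field(default_factory=dict)
    node_id: str = field(default_factory=lambda: uuid.uuid4().hex)  # unique id

registry: Dict[str, NormalizedNode] = {}

def register(node: NormalizedNode) -> str:
    """Tasks address every node through the same record and identifier."""
    registry[node.node_id] = node
    return node.node_id

cam = register(NormalizedNode("acquisition", ["capture_image"], {"fps": 30}))
arm = register(NormalizedNode("execution", ["move_joint"], {"dof": 6}))
```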
The event and task management module handles the many complex strategies of this embodiment. To implement them concretely, the embodiment unifies them into an "event-task" mode: each process generates events from its data at different times, and the middleware invokes the corresponding task process according to the event type. In this way the concrete implementation of each strategy proceeds until the task is completed.
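The "event-task" mode lends itself to a small dispatch-table sketch such as the following; the event type and handler shown are invented examples:

```python
# Sketch of the "event-task" mode; the event type and handler are invented
# examples, not identifiers from the disclosure.
from typing import Callable, Dict

task_table: Dict[str, Callable[[dict], None]] = {}

def on_event(event_type: str):
    """Register a task process for a given event type."""
    def decorator(fn: Callable[[dict], None]):
        task_table[event_type] = fn
        return fn
    return decorator

@on_event("visual_feedback_ready")
def run_policy(payload: dict) -> None:
    print("mapping feedback to behavior control:", payload)

def dispatch(event_type: str, payload: dict) -> None:
    """The middleware calls the task process matching the event type."""
    handler = task_table.get(event_type)
    if handler is not None:
        handler(payload)

dispatch("visual_feedback_ready", {"pose_error": 0.02})
```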
The data management module can be used to complete the cleaning of communication data, data-based event detection and generation, and the reading of persistence-layer data sources.
Fig. 4 shows a flow diagram of a visual servoing method according to an exemplary embodiment of the present invention. In this embodiment, the visual servoing method may include the following steps.
S401, the specific task information is input into the servo middleware of the MEC, and a corresponding task requirement is generated.
S402, the servo middleware generates an implementation strategy of the visual servo system corresponding to the task requirement.
S403, the field distributed image units acquire real-time images through cameras, the distributed sensor auxiliary units acquire sensing information, and the acquired information is transmitted to the MEC over the broadcast-based M2M channel; the servo middleware of the MEC creates corresponding visual processing task offload instances, and visual feedback information is obtained after visual processing.
S404, the visual feedback information is fed into the implementation strategy of the system to output the robot control information, and the control information is broadcast over the B-M2M channel.
S405, the field robot executes the corresponding action according to the received control information.
In the embodiment of the present invention, the above steps S403-S405 are continuously executed until the task is completed.
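Read together, steps S403-S405 form a closed servo loop. The following compressed, self-contained Python sketch illustrates that loop; each function stands in for one of the subsystems described above, and the numeric feedback model is purely illustrative:

```python
# Self-contained sketch of the S403-S405 loop; every function is a stand-in
# for a subsystem described above, and the feedback model is purely
# illustrative (a pose error that shrinks with each control step).
def acquire_visual_info(step: int) -> dict:
    return {"pose_error": max(0.0, 1.0 - 0.3 * step)}    # S403: cameras + sensors

def mec_visual_processing(info: dict) -> dict:
    return {"error": info["pose_error"]}                 # S403: offload instance

def policy(feedback: dict) -> dict:
    return {"velocity": -0.5 * feedback["error"]}        # S404: strategy mapping

def robot_execute(control: dict) -> None:
    print("executing", control)                          # S405: field robot action

step, error = 0, 1.0
while error > 0.05:                                      # repeat until task done
    feedback = mec_visual_processing(acquire_visual_info(step))
    error = feedback["error"]
    robot_execute(policy(feedback))                      # broadcast over B-M2M
    step += 1
```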
According to the visual servo method and visual servo system provided by the embodiments of the present invention, a novel distributed visual servo architecture can be constructed on the basis of 5G and B-M2M. By means of the 5G architecture, an efficient B-M2M broadcast system, distributed visual acquisition, and multi-sensor fusion, the architecture effectively overcomes the problems of the existing single visual servo system, eliminates uncertainty, and obtains more reliable and accurate results. By exploiting the large coverage and strong data processing capability of the 5G MEC, the servo middleware constructed in the MEC realizes dynamic mappings between various strategies and robot motion, giving a more flexible visual servo mechanism, greatly expanding the diversity of robot functions and their application scenarios, and contributing positively to the development of complex robots and to the enrichment of the technical ecosystem of 5G networks and B-M2M.
It is to be understood that the invention is not limited to the particular arrangements and instrumentality described in the above embodiments and shown in the drawings. For convenience and brevity of description, detailed description of a known method is omitted here, and for the specific working processes of the system, the module and the unit described above, reference may be made to corresponding processes in the foregoing method embodiments, which are not described herein again.
It will be understood by those of ordinary skill in the art that all or some of the steps of the method described above, and the functional modules/units of the systems and apparatus described above, may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data, as is well known to those of ordinary skill in the art. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media, as is known to those skilled in the art.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Those skilled in the art will appreciate that, although some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments.
It will be understood that the above embodiments are merely exemplary embodiments taken to illustrate the principles of the present invention, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.

Claims (11)

1. A visual servoing method, characterized in that it comprises:
receiving, by a mobile edge computing, MEC, system, visually relevant information broadcast by distributed collection devices within a predetermined area over a machine-to-machine M2M channel; wherein the M2M channel is a broadcast-based channel previously deployed in a communication network;
performing visual processing on the received visual related information through an MEC servo middleware to obtain first visual feedback information, and processing the first visual feedback information according to an implementation strategy of a predetermined task to obtain first behavior control information;
broadcasting the first behavior control information through the M2M channel to enable the robot devices in the predetermined area to execute the actions corresponding to the first behavior control information, thereby performing servo control on the robot devices.
2. The method of claim 1, wherein prior to said receiving, by the mobile edge computing MEC system, visually relevant information broadcast by distributed collection devices within a predetermined area over a machine-to-machine M2M channel, the method further comprises:
receiving, by the MEC servo middleware, task information of the predetermined task, generating task requirement information corresponding to the task information, and generating a robot implementation strategy corresponding to the task requirement information.
3. The method of claim 1, wherein, in a case that the predetermined area is a preset simple-scene field area, the method further comprises:
receiving, by at least one front-end processing server deployed within a predetermined area, the visually relevant information;
performing visual processing on the visual related information according to a visual processing algorithm acquired in advance from the MEC system to obtain second visual feedback information; wherein the visual processing algorithms used by the plurality of front-end processing servers are different;
processing the second visual feedback information according to an implementation strategy of the preset task acquired in advance from the MEC system to obtain second behavior control information;
broadcasting, by the front-end processing server, the second behavior control information over the M2M channel to cause robotic devices within the predetermined area to perform actions corresponding to the second behavior control information.
4. The method of claim 1, wherein the visually relevant information comprises: visual information collected by each of the distributed collection devices;
the performing of visual processing on the received visually relevant information through the MEC servo middleware to obtain the first visual feedback information includes:
acquiring, in the MEC system, a visual processing instance associated with each of the distributed acquisition devices;
performing, in the visual processing instance associated with each acquisition device, visual processing through the servo middleware on the visual information collected by the respectively associated acquisition device, to obtain visual feedback information corresponding to each acquisition device;
and generating the first visual feedback information according to the visual feedback information respectively corresponding to each acquisition device.
5. The method according to claim 1, wherein, in a case that the total number of the distributed acquisition devices is greater than a predetermined device-density threshold, the visually relevant information includes visual characteristic information corresponding to each of the distributed acquisition devices, the visual characteristic information being obtained through a first processing flow; the first processing flow comprises:
performing feature extraction on the visual information acquired by the corresponding acquisition equipment through a front-end processing module corresponding to each acquisition equipment to obtain visual feature information corresponding to each acquisition equipment; wherein the front-end processing module is a processing module provided by at least one front-end processing server deployed in the predetermined area and used for performing feature extraction of visual information;
broadcasting, by each of the acquisition devices, respective corresponding visual characteristic information over the M2M channel.
6. The method of claim 5, wherein the performing of visual processing on the received visually relevant information through the MEC servo middleware to obtain the first visual feedback information comprises:
performing, by the MEC servo middleware in a task offloading mode of the MEC system, feature-based fusion processing on the visual characteristic information corresponding to each acquisition device to obtain fused visual characteristic information;
processing the fused visual characteristic information to obtain the first visual feedback information;
or,
the processing the first visual feedback information according to the implementation strategy of the predetermined task to obtain first behavior control information includes:
processing the visual characteristic information corresponding to each acquisition device according to an implementation strategy of a preset task through the MEC servo middleware to obtain behavior control information corresponding to each acquisition device;
and performing fusion processing based on decision on the behavior control information corresponding to each acquisition device to obtain fused behavior control information which is used as the first behavior control information.
7. The method of claim 1, wherein the performing of visual processing on the received visually relevant information through the MEC servo middleware to obtain the first visual feedback information comprises:
performing data-based fusion processing on the received visually relevant information through the servo middleware to obtain fused visually relevant information;
and performing visual processing on the fused visually relevant information to obtain the first visual feedback information.
8. The method of claim 1,
the acquisition devices include: a first acquisition device deployed on the robot device and a second acquisition device deployed in the field environment of the predetermined area;
the visually relevant information includes: the method comprises the steps of acquiring overall visual data in an eye-to-hand mode by using the first acquisition equipment and acquiring local visual data in an eye-to-hand mode by using the second acquisition equipment;
in the case that the implementation policy of the predetermined task is a robot autonomous control policy, the broadcasting the first behavior control information through the M2M channel to cause the robot devices within the predetermined area to perform an action corresponding to the first behavior control information includes:
selecting, by the MEC servo middleware, a machine learning algorithm from a machine learning algorithm library configured in advance in the MEC system, and performing data fusion on the overall visual data and the local visual data by using the selected machine learning algorithm to obtain fused visual data;
positioning the robot device according to the fused visual data by using a simultaneous localization and mapping (SLAM) algorithm acquired in advance from the MEC system, and constructing a map according to the positioning result;
and broadcasting the first behavior control information and the constructed map to the robot device through the M2M channel so that the robot device executes the action corresponding to the first behavior control information in the process of autonomous navigation according to the constructed map.
9. The method of claim 1, wherein the collection devices comprise distributed instruction collection devices; in the case that the implementation policy of the predetermined task is a human-machine cooperation policy, the method further includes: receiving, by the MEC system, control instruction information that is acquired by the distributed instruction acquisition device and broadcast through the M2M channel;
the processing the first visual feedback information according to the implementation strategy of the predetermined task to obtain first behavior control information includes:
processing the control instruction information and the first visual feedback information according to a control servo strategy acquired from the MEC system in advance to obtain first behavior control information;
and, in a case where the robot device within the predetermined area performs an action corresponding to the first behavior control information, the method further includes:
the robot device broadcasts an action execution result to a predetermined operation device through the M2M channel, and the working state of the robot device is displayed in the operation device.
10. The method according to any one of claims 1 to 9, wherein
the acquisition equipment comprises image acquisition equipment and a preset sensor;
the visually relevant information includes visual information collected by each of the distributed collection devices, and the visual information includes: image information collected by the image acquisition device and sensing information collected by the preset sensor.
11. A visual servoing system, comprising:
a receiving module for receiving, by the mobile edge computing MEC system, visual related information broadcast by distributed collection devices within a predetermined area through a machine-to-machine M2M channel; wherein the M2M channel is a broadcast-based channel previously deployed in a communication network;
a processing module, configured to perform visual processing on the received visually relevant information through the MEC servo middleware to obtain first visual feedback information, and to process the first visual feedback information according to an implementation strategy of a predetermined task to obtain first behavior control information;
a control module, configured to broadcast the first behavior control information through the M2M channel, so that the robot device in the predetermined area executes an action corresponding to the first behavior control information, thereby performing servo control on the robot device.
CN202111292814.3A 2021-11-03 2021-11-03 Visual servo method and visual servo system Active CN113894788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111292814.3A CN113894788B (en) 2021-11-03 2021-11-03 Visual servo method and visual servo system


Publications (2)

Publication Number Publication Date
CN113894788A (en) 2022-01-07
CN113894788B CN113894788B (en) 2024-02-23

Family

ID=79028267

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant