CN116408807B - Robot control system based on machine vision and track planning - Google Patents


Info

Publication number
CN116408807B
Authority
CN
China
Prior art keywords
video
track
object scene
planning
initial
Prior art date
Legal status
Active
Application number
CN202310658632.6A
Other languages
Chinese (zh)
Other versions
CN116408807A
Inventor
巫飞彪
张少华
Current Assignee
Guangzhou Donghan Intelligent Equipment Co ltd
Original Assignee
Guangzhou Donghan Intelligent Equipment Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Donghan Intelligent Equipment Co., Ltd.
Priority to CN202310658632.6A
Publication of CN116408807A
Application granted
Publication of CN116408807B
Legal status: Active


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 - Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J9/161 - Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a robot control system based on machine vision and track planning, which comprises: a real-time acquisition module for controlling the robot to move along an initial planned track while acquiring a planned track object scene video in real time during the movement; a video screening module for screening a real track object scene video out of the currently acquired planned track object scene videos, based on the space-time coincidence analysis result of the initial planned tracks of all robots in a preset period and on the video frame continuity of the planned track object scene videos; a track updating module for updating the initial planned track based on the real track object scene video and the initial planned track to obtain a re-planned track of the robot; and a movement control module for continuing to control the robot's movement based on the re-planned track to obtain a movement control result. The system makes the updating of the track object scene video more timely and accurate, and makes the planning and updating of the robot's movement control track more reasonable and timely.

Description

Robot control system based on machine vision and track planning
Technical Field
The invention relates to the technical field of robot control, in particular to a robot control system based on machine vision and track planning.
Background
In industrial production, robot movement control generally takes one of two forms. In the first, the robot is controlled to move along a preset planned track by a preprogrammed movement control program; although the control program of such a robot control system is simple, the application effect is poor when the track environment is changeable. In the second, the robot is positioned and recognized by a machine-vision-based recognition and positioning algorithm, such as a template matching method or a descriptor-based method; these algorithms can flexibly recognize changeable track scenes and, combined with track planning rules, keep the movement control of the robot effective even when the track environment is changeable.
However, robot movement control based on machine vision and track planning needs to obtain real-time object scene information of the changeable track environment in advance, so that positioning recognition and track planning can be carried out by methods such as template matching or descriptors. In the actual application environment of existing robot control systems, the object scene image information of the track object scene is mostly collected manually or acquired automatically by the robots, and both modes leave the image information of the track object scene insufficiently accurate and timely, which directly affects the track planning effect of the robot.
Therefore, the invention provides a robot control system based on machine vision and track planning.
Disclosure of Invention
The invention provides a robot control system based on machine vision and track planning. By analysing the space-time coincidence of the initial planned tracks of all robots in a preset period and the video frame continuity of the planned track object scene videos acquired during movement, a real track object scene video is screened out of the track object scene videos obtained in real time, and the initial planned track of the robot is updated based on the real track object scene video. This ensures the accuracy and timeliness of the object scene image information of the track object scene in the actual application environment of the robot control system, and makes the planning and updating of the robot's movement control track reasonable and timely, so that a better robot movement control effect is obtained.
The invention provides a robot control system based on machine vision and track planning, comprising:
the real-time acquisition module is used for controlling the robot to move according to the initial planned track and simultaneously controlling the robot to acquire the object scene video of the planned track in real time in the moving process;
the video screening module is used for screening real track object and scene videos from the currently acquired planning track object and scene videos based on space-time coincidence analysis results of initial planning tracks of all robots in a preset period and video frame continuity of the planning track object and scene videos;
The track updating module is used for updating the initial planning track of the robot based on the real track object scene video and the initial planning track to obtain a re-planning track of the robot;
and the movement control module is used for continuously carrying out movement control on the robot based on the re-planning track to obtain a movement control result of the robot.
Preferably, the video filtering module includes:
the video dividing sub-module is used for dividing the currently acquired planning track object scene video based on the space-time coincidence analysis result of the initial planning tracks of all robots in a preset period to obtain a plurality of object scene video segments;
the first computing sub-module is used for computing the first fidelity of each object scene video segment based on the object scene matching degree among different object scene video segments in the planning track object scene video currently acquired by all robots in a preset period;
the second computing sub-module is used for computing the second fidelity of each object scene video segment based on the video frame continuity of the planning track object scene video;
and the video screening sub-module is used for determining a comprehensive fidelity based on the first fidelity and the second fidelity, and screening out the real track object scene video from the currently acquired planning track object scene video based on the comprehensive fidelity.
Preferably, the video dividing sub-module includes:
the real range determining unit is used for determining the real three-dimensional shooting range of each position point in all tracks based on a preset three-dimensional shooting range of the shooting device for acquiring the object scene video of the planned track and a three-dimensional space model of a preset moving environment;
the space coincidence degree calculating unit is used for taking two position points with coincident actual three-dimensional shooting ranges in initial planning tracks of different robots in a preset period as a primary screening position point combination and calculating the space coincidence degree of the actual three-dimensional shooting ranges of the two position points in the primary screening position point combination;
the time deviation degree calculation unit is used for calculating the path time deviation degree of two position points in the primary screening position point combination based on the preset path time of each position point in the initial planning track;
the space-time coincidence analysis unit is used for calculating the space-time coincidence of the primary screening position point combination based on the space coincidence of the primary screening position point combination and the path time deviation, and taking all the space-time coincidence as a space-time coincidence analysis result of the initial planning track of all the robots;
and the object scene video dividing unit is used for dividing the currently acquired planning track object scene video based on the coincidence analysis result to obtain a plurality of object scene video segments.
Preferably, the object scene video dividing unit includes:
the coincidence screening subunit is used for judging two position points in the primary screening position point combination with the space-time coincidence degree exceeding the coincidence degree threshold value in the space-time coincidence degree analysis result as video coincidence position points;
the track segment screening subunit is used for taking, in each initial planning track, a track segment formed by a plurality of consecutive video overlapping position points whose coincident track object remains the same as a video overlapping track segment;
the dividing moment determining subunit is used for taking the preset path moment of the starting point of the video overlapping track section in the initial planning track as the first video dividing moment corresponding to the initial planning track, and taking the preset path moment of the end point of the video overlapping track section in the initial planning track as the second video dividing moment corresponding to the initial planning track;
the dividing limit determining subunit is used for determining dividing limits between video frames of which the acquiring time is the first video dividing time corresponding to the initial planning track and adjacent previous video frames in the currently acquired planning track object video, and determining dividing limits between video frames of which the acquiring time is the second video dividing time corresponding to the initial planning track and adjacent next video frames in the currently acquired planning track object video;
And the object scene video dividing subunit is used for dividing the video based on all dividing limits in the currently acquired planning track object scene video to obtain a plurality of object scene video segments.
Preferably, the first calculation sub-module includes:
the track segment matching unit is used for matching two video overlapping track segments which contain position points which are continuous video overlapping position points in different initial planning tracks to obtain a track segment matching result;
the video segment matching unit is used for carrying out corresponding matching on the corresponding object video segments based on the track segment matching result to obtain a plurality of video segment matching combinations;
the first calculation unit is used for calculating the first fidelity of each object scene video segment in the planning track object scene video currently acquired by all robots in a preset period based on the object scene matching degree between two object scene video segments in each video segment matching combination.
Preferably, the second calculation sub-module includes:
the continuity calculating unit is used for calculating the video frame continuity of all adjacent video frame combinations in the planning track object scene video, and taking the average of the video frame continuity of all adjacent video frame combinations of each video frame in the planning track object scene video as the comprehensive video continuity of the corresponding video frame;
and the second calculation unit is used for taking the average of the comprehensive video continuity of all video frames in each object scene video segment as the second fidelity of the corresponding object scene video segment.
Preferably, the track updating module includes:
the initial acquisition subunit is used for acquiring all track object scene videos in an initial state in a preset mobile environment;
the video updating sub-module is used for updating all track object scene videos based on the real track object scene videos, the corresponding video acquisition time sequences and the initial planning track, and determining the video updating amount of all track object scene videos;
and the re-planning sub-module is used for re-planning the initial planning track of the robot based on the video updating quantity to obtain the re-planning track of the robot.
Preferably, the video update sub-module includes:
the object scene information extraction unit is used for determining at least one object scene information of each position point in the initial planning track based on all real track object scene videos and the corresponding initial planning track;
the latest information determining unit is used for determining the acquisition time sequence of all the object scene information of each position point based on the video acquisition time sequence of the real track object scene video to which the object scene information belongs, and determining the latest object scene information of the corresponding position point in all the object scene information of each position point based on the acquisition time sequence of all the object scene information;
And the object scene information updating unit is used for updating all track object scene videos based on the latest object scene information of all the position points and determining the video updating amount of all the track object scene videos.
Preferably, the scene information updating unit includes:
the object scene information updating subunit is used for determining initial object scene information of each position point in all track object scene videos, when the deviation degree of the latest object scene information of the position point and the initial object scene information exceeds a deviation threshold value, replacing the initial object scene information of the corresponding position point in all track object scene videos with the corresponding latest object scene information, otherwise, reserving the initial object scene information of the corresponding position point in all track object scene videos;
and the video update amount determining subunit is used for regarding updated latest object scene information contained in all track object scene videos as the video update amount of all track object scene videos.
Preferably, the re-planning sub-module comprises:
the fault section determining unit is used for determining a fault track section in all tracks based on the video updating quantity;
and the re-planning unit is used for determining an un-traversed track section in the initial planned track, re-planning the corresponding initial planned track based on the un-traversed track section, the fault track section and all tracks, and obtaining a re-planned track.
The invention has the beneficial effects different from the prior art: the real track object scene video is screened out from the planning track object scene video acquired by the robot in real time by analyzing the space-time coincidence analysis of the initial planning tracks of all robots in a preset period and the video frame continuity of the planning track object scene video acquired in the moving process, the initial planning tracks of the robot are updated based on the real track object scene video, the accuracy and timeliness of object scene image information of the track object scene in the actual application environment of a robot control system are ensured, and the planning and updating of the moving control tracks of the robot are more reasonable and timely, so that a better moving control effect of the robot is obtained.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a schematic diagram of specific modules and execution process of a robot control system based on machine vision and trajectory planning in an embodiment of the present invention;
fig. 2 is a schematic diagram of a specific module and an implementation process of a video filtering module according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a specific unit and an execution process of a video segmentation sub-module according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Example 1
The invention provides a robot control system based on machine vision and track planning, referring to fig. 1, comprising:
the real-time acquisition module is used for controlling the robot to move according to the initial planned track and simultaneously controlling the robot to acquire the object scene video of the planned track in real time in the moving process;
the video screening module is used for screening real track object and scene videos from the currently acquired planning track object and scene videos based on space-time coincidence analysis results of initial planning tracks of all robots in a preset period and video frame continuity of the planning track object and scene videos;
The track updating module is used for updating the initial planning track of the robot based on the real track object scene video and the initial planning track to obtain a re-planning track of the robot;
and the movement control module is used for continuously carrying out movement control on the robot based on the re-planning track to obtain a movement control result of the robot.
In this embodiment, the initial planned track is a movement track of the robot planned by a programming controller according to the latest track information in the actual movement environment of the robot. The initial planned track may be a track that leads directly to the robot's final destination, or only a partial track before reaching the destination, for example the part of the track within the robot's current straight track section, in which case the corresponding movement control program controls the robot to move straight in the current direction (until it reaches the corner position of the track section, or for a preset straight-line distance, etc.).
In this embodiment, the control of the movement of the robot according to the initial planned trajectory is performed according to a preprogrammed movement control program (i.e. a program that controls the movement of the robot according to the corresponding initial planned trajectory).
In this embodiment, the planned track object scene video is the video acquired in real time, by one or more cameras carried on the robot, while the robot moves along the initial planned track; it contains the actual object scene along the robot's path track.
In this embodiment, the preset period is a period according to which the space-time coincidence analysis result of the initial planned track and the video frame continuity of the object video of the planned track are judged, and all robots in the preset period are robots moving in a preset actual moving environment in the preset period.
In this embodiment, the space-time coincidence analysis result is the result obtained by analysing the space-time coincidence of position points in the initial planned tracks of different robots in a preset period (i.e., a numerical value representing the degree to which two position points in two initial planned tracks coincide in space and in path moment).
In this embodiment, the video frame continuity is a numerical value representing the degree of continuity between each video frame and adjacent video frames in the planned track scene video.
In the embodiment, the real track object scene video is the video of the latest actual object scene condition of the corresponding position point or track segment in the representation track screened from all the currently acquired planning track object scene videos, and the timeliness and the accuracy of the track object scene video can be ensured.
In this embodiment, updating the initial planned track of the robot based on the real track object scene video and the initial planned track means: the track object scene video stored in the cloud is updated based on the real track object scene video, and the initial planned track is then updated based on the updated track object scene video and a preset track planning rule. For example, if the initial planned track is AB-BC and analysis of the real track object scene video (manually or by machine learning) shows that track section AB contains an obstacle or has worse traffic conditions, the track can be updated to AF-FD-DC, which has fewer obstacles or better traffic conditions. Alternatively, the template image information or descriptor features used for template matching during the robot's positioning and recognition are updated based on the real track object scene video, and the robot's movement control program is optimised based on the updated image information; either way, the updating of the initial planned track is realised.
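As an illustration of the AB-BC to AF-FD-DC example above, the following is a minimal sketch that abstracts the movement environment as a weighted graph and re-plans around a track section reported as blocked; the graph, the costs and the function names are hypothetical and are not taken from the patented system.

    import heapq

    def shortest_path(graph, start, goal, blocked=frozenset()):
        # Dijkstra over an adjacency dict {node: {neighbour: cost}}, skipping blocked edges.
        queue = [(0.0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return path, cost
            if node in visited:
                continue
            visited.add(node)
            for nxt, weight in graph.get(node, {}).items():
                if (node, nxt) in blocked or nxt in visited:
                    continue
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
        return None, float("inf")

    # Hypothetical environment: once A-B is obstructed, the planner falls back to A-F-D-C.
    graph = {"A": {"B": 1.0, "F": 1.5}, "B": {"C": 1.0}, "F": {"D": 1.0}, "D": {"C": 1.0}}
    print(shortest_path(graph, "A", "C"))                        # (['A', 'B', 'C'], 2.0)
    print(shortest_path(graph, "A", "C", blocked={("A", "B")}))  # (['A', 'F', 'D', 'C'], 3.5)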
In this embodiment, the re-planned track is a new track obtained after updating the initial planned track.
In this embodiment, the robot is continuously controlled to move based on the re-planned trajectory, that is, the robot is controlled to move according to the re-planned trajectory based on a movement control program corresponding to the re-planned trajectory.
In this embodiment, the robot movement control result is a result of controlling the robot to move according to the newly obtained re-planned trajectory.
In this embodiment, while the robot's movement continues to be controlled based on the re-planned track, the robot keeps acquiring the planned track object scene video; the video screening module keeps screening the latest real track object scene video out of the currently acquired planned track object scene video, and the track updating module keeps updating the robot's re-planned track based on it. In other words, track object scene video updating and track planning form a continuous loop throughout the movement of the robots within the preset period, or until the robot reaches its destination, which further improves the timeliness and accuracy of the track object scene image information referenced during movement control and ensures the track planning effect over the whole movement control process.
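The loop just described can be pictured with the following minimal sketch; every class, helper and value here is a hypothetical stand-in (the screening and re-planning steps are deliberately elided), not the patented implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Robot:
        position: str
        frames: list = field(default_factory=list)

        def follow(self, track):
            # Move one step along the currently planned track.
            self.position = track[track.index(self.position) + 1]

        def acquire_video(self):
            # Placeholder for the planned track object scene video from the on-board camera.
            self.frames.append(f"frame@{self.position}")
            return list(self.frames)

    def screen_real_video(video):
        return video      # video screening module (elided in this sketch)

    def update_track(track, real_video):
        return track      # track updating module (elided in this sketch)

    robot = Robot(position="A")
    track = ["A", "B", "C"]
    while robot.position != "C":                       # loop until the destination is reached
        robot.follow(track)                            # movement control along the current track
        real = screen_real_video(robot.acquire_video())
        track = update_track(track, real)              # continuous re-planning
    print(robot.position, robot.frames)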
The beneficial effects of the technology are as follows: the real track object scene video is screened out from the planning track object scene video acquired by the robot in real time by analyzing the space-time coincidence analysis of the initial planning tracks of all robots in a preset period and the video frame continuity of the planning track object scene video acquired in the moving process, the initial planning tracks of the robot are updated based on the real track object scene video, the accuracy and timeliness of object scene image information of the track object scene in the actual application environment of a robot control system are ensured, and the planning and updating of the moving control tracks of the robot are more reasonable and timely, so that a better moving control effect of the robot is obtained.
Example 2
Based on embodiment 1, the video filtering module, referring to fig. 2, includes:
the video dividing sub-module is used for dividing the currently acquired planning track object scene video based on the space-time coincidence analysis result of the initial planning tracks of all robots in a preset period to obtain a plurality of object scene video segments;
the first computing sub-module is used for computing the first fidelity of each object scene video segment based on the object scene matching degree among different object scene video segments in the planning track object scene video currently acquired by all robots in a preset period;
the second computing sub-module is used for computing the second fidelity of each object scene video segment based on the video frame continuity of the planning track object scene video;
and the video screening sub-module is used for determining a comprehensive fidelity based on the first fidelity and the second fidelity, and screening out the real track object scene video from the currently acquired planning track object scene video based on the comprehensive fidelity.
In this embodiment, the object scene video segment is a video segment obtained by dividing the current acquired planning track object scene video.
In this embodiment, the first fidelity is a value representing the true reliability of the object scene video segment calculated based on the object scene matching degree between different object scene video segments in the planning track object scene video currently acquired by all robots in the preset period.
In this embodiment, the object scene matching degree is the matching degree between object scenes included in video segments representing different object scenes.
In this embodiment, the second fidelity is a value representing the true reliability of the object scene video segment calculated based on the video frame continuity of the planned track object scene video.
In this embodiment, the comprehensive fidelity is the average of the first fidelity and the second fidelity.
In this embodiment, an object scene video segment whose comprehensive fidelity is not smaller than a fidelity threshold is taken as a real track object scene video segment, and the real track object scene video segments are spliced according to their spatial positions to obtain one or more segments of real track object scene video.
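A minimal sketch of this screening rule, assuming each segment is summarised by its two fidelities and that the fidelity threshold is a preset parameter (the value 0.8 below is only illustrative):

    def comprehensive_fidelity(first, second):
        # Average of the first fidelity and the second fidelity, as defined above.
        return (first + second) / 2.0

    def screen_segments(segments, threshold=0.8):
        # segments: list of (segment_id, first_fidelity, second_fidelity).
        return [sid for sid, f1, f2 in segments
                if comprehensive_fidelity(f1, f2) >= threshold]

    print(screen_segments([("s1", 0.9, 0.85), ("s2", 0.6, 0.7)]))  # ['s1']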
The beneficial effects of the technology are as follows: and analyzing the object scene video of the planned track based on the space-time coincidence analysis results of the initial planned tracks of the robots to obtain object scene video segments with the possible coincidence of the track object scene, so that the follow-up analysis of the authenticity of the divided object scene video segments is facilitated, and the real track object scene video meeting the timeliness and accuracy simultaneously is screened out.
Example 3
On the basis of embodiment 2, the video segmentation sub-module, referring to fig. 3, includes:
the real range determining unit is used for determining the real three-dimensional shooting range of each position point in all tracks based on a preset three-dimensional shooting range of the shooting device for acquiring the object scene video of the planned track and a three-dimensional space model of a preset moving environment;
the space coincidence degree calculating unit is used for taking two position points with coincident actual three-dimensional shooting ranges in initial planning tracks of different robots in a preset period as a primary screening position point combination and calculating the space coincidence degree of the actual three-dimensional shooting ranges of the two position points in the primary screening position point combination;
The time deviation degree calculation unit is used for calculating the path time deviation degree of two position points in the primary screening position point combination based on the preset path time of each position point in the initial planning track;
the space-time coincidence analysis unit is used for calculating the space-time coincidence of the primary screening position point combination based on the space coincidence of the primary screening position point combination and the path time deviation, and taking all the space-time coincidence as a space-time coincidence analysis result of the initial planning track of all the robots;
and the object scene video dividing unit is used for dividing the currently acquired planning track object scene video based on the coincidence analysis result to obtain a plurality of object scene video segments.
In this embodiment, the image pickup device is an image pickup device such as a camera provided on the robot for acquiring a planned track object scene.
In this embodiment, the preset three-dimensional imaging range is a three-dimensional space range that can be imaged by a preset imaging device, for example, a three-dimensional space range formed by taking the imaging device as a center, and within a range of 150 degrees in the lateral direction and within a range of 150 degrees in the longitudinal direction, right in front of the imaging device.
In this embodiment, the preset moving environment is a preset movable range of all robots in the current robot control system.
In this embodiment, the three-dimensional space model is a three-dimensional model representing all three-dimensional space structures in the preset mobile environment.
In this embodiment, all the trajectories are all the movement trajectories that can be passed by the robot in the preset movement environment.
In this embodiment, the position points are coordinate points in all the tracks.
In this embodiment, when the robot is at a certain position point, the actual three-dimensional imaging range is a three-dimensional space range that can be imaged at the position point by the imaging device of the robot, and the imaging range of the imaging device is blocked to different degrees due to the difference of object scenes at two sides of the track, so that the actual three-dimensional imaging range of each position point is different, and the actual three-dimensional imaging range can be determined according to the preset three-dimensional imaging range and the object scene blocking condition near the position point in the three-dimensional space model.
In this embodiment, the primary screening position point combination is a position point combination including position points which respectively belong to two initial planned trajectories and have overlapping actual three-dimensional photographing ranges.
In this embodiment, calculating the spatial overlap ratio of the actual three-dimensional imaging range of two position points in the primary screening position point combination includes:
screening out three-dimensional blocks of the effective shooting range of the corresponding position points in the actual three-dimensional shooting ranges of the two position points in the primary screening position point combination based on the effective distance;
And the overlapping part of the effective shooting range three-dimensional blocks of the two position points in the primary screening position point combination is taken as an overlapping range three-dimensional block;
and taking the average of the ratios of the volume of the overlapping range three-dimensional block to the volumes of the effective shooting range three-dimensional blocks of the two position points as the spatial coincidence degree of the actual three-dimensional shooting ranges of the two position points in the primary screening position point combination.
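This step is effectively an intersection volume divided by each effective shooting range volume, averaged. Below is a minimal sketch assuming each effective shooting range three-dimensional block has already been discretised into a set of voxel coordinates; the voxelisation itself and the effective-distance filtering are outside this fragment.

    def spatial_coincidence(voxels_a, voxels_b):
        # Mean of |overlap|/|A| and |overlap|/|B|, matching the rule above.
        if not voxels_a or not voxels_b:
            return 0.0
        overlap = len(voxels_a & voxels_b)
        return 0.5 * (overlap / len(voxels_a) + overlap / len(voxels_b))

    range_a = {(x, y, z) for x in range(4) for y in range(4) for z in range(2)}
    range_b = {(x, y, z) for x in range(2, 6) for y in range(4) for z in range(2)}
    print(spatial_coincidence(range_a, range_b))  # 0.5: half of each range overlaps the other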
In this embodiment, the preset path time is the time of the corresponding position point in the initial planning track of the preset robot path.
In this embodiment, calculating the path time deviation of two position points in the primary screening position point combination based on the preset path time of each position point in the initial planning track includes:
taking the ratio of the difference between the preset path moments of the two position points in the primary screening position point combination to a preset time threshold as the path time deviation degree.
In this embodiment, calculating the space-time overlap ratio of the combination of the positions of the preliminary screening based on the space overlap ratio of the combination of the positions of the preliminary screening and the path timing deviation ratio includes:
taking the difference between 1 and the path time deviation degree as the path time coincidence degree, and taking the average of the spatial coincidence degree and the path time coincidence degree as the space-time coincidence degree of the primary screening position point combination.
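Written out directly, the two formulas above look as follows; the time threshold and the numeric values are illustrative assumptions.

    def path_time_deviation(moment_a, moment_b, time_threshold):
        # Ratio of the difference between the preset path moments to a preset time threshold.
        return abs(moment_a - moment_b) / time_threshold

    def space_time_coincidence(spatial_coincidence, time_deviation):
        # Average of the spatial coincidence degree and the path time coincidence degree.
        return (spatial_coincidence + (1.0 - time_deviation)) / 2.0

    deviation = path_time_deviation(12.0, 15.0, time_threshold=30.0)  # 0.1
    print(space_time_coincidence(0.5, deviation))                     # 0.7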
The beneficial effects of the technology are as follows: the actual three-dimensional shooting range of every position point in all tracks is determined based on the three-dimensional space model of the preset moving environment; primary screening position point combinations are determined based on the overlap of the actual three-dimensional shooting ranges in the initial planned tracks of different robots, and the spatial coincidence degree of each primary screening position point combination is calculated; the path time deviation degree of the primary screening position point combination is calculated based on the path moments of the position points; the space-time coincidence degree of the primary screening position point combination is then calculated from its spatial coincidence degree and path time deviation degree. In this way, the space-time coincidence between position points in the initial planned tracks of different robots is analysed accurately from both the overlap of the actual three-dimensional shooting ranges and the closeness of the path moments, and object scene video segments convenient for subsequent authenticity analysis are divided accordingly.
Example 4
On the basis of embodiment 3, the object scene video dividing unit includes:
the coincidence screening subunit is used for judging two position points in the primary screening position point combination with the space-time coincidence degree exceeding the coincidence degree threshold value in the space-time coincidence degree analysis result as video coincidence position points;
the track segment screening subunit is used for taking, in each initial planning track, a track segment formed by a plurality of consecutive video overlapping position points whose coincident track object remains the same as a video overlapping track segment;
the dividing moment determining subunit is used for taking the preset path moment of the starting point of the video overlapping track section in the initial planning track as the first video dividing moment corresponding to the initial planning track, and taking the preset path moment of the end point of the video overlapping track section in the initial planning track as the second video dividing moment corresponding to the initial planning track;
the dividing limit determining subunit is used for determining dividing limits between video frames of which the acquiring time is the first video dividing time corresponding to the initial planning track and adjacent previous video frames in the currently acquired planning track object video, and determining dividing limits between video frames of which the acquiring time is the second video dividing time corresponding to the initial planning track and adjacent next video frames in the currently acquired planning track object video;
and the object scene video dividing subunit is used for dividing the video based on all dividing limits in the currently acquired planning track object scene video to obtain a plurality of object scene video segments.
In this embodiment, the space-time coincidence degree is a numerical value representing the degree to which two position points in two initial planned tracks coincide in space and in path moment.
In this embodiment, the coincidence degree threshold is a preset space-time coincidence degree threshold used for judging whether two position points in the primary screening position point combination are the video coincidence position points or not.
In this embodiment, the mutually video overlapping position points are two position points in the combination of the preliminary screening position points with the space-time overlapping ratio exceeding the overlapping ratio threshold.
In this embodiment, the coincident track object is the initial planned track where the position points that are the video coincident position points with the corresponding position points are located.
In this embodiment, the video overlapping track segment is a track segment formed by a plurality of video overlapping position points where overlapping track objects are continuously consistent in the initial planned track.
In this embodiment, an example of a track segment formed by a plurality of consecutive video overlapping position points whose coincident track object remains the same in the initial planned track:
when the coincident track object of every position point from the a-th to the (a+b)-th position point in initial planned track A is initial planned track B (that is, the a-th to the (a+b)-th position points in initial planned track A and (b+1) consecutive position points in initial planned track B, such as the c-th to the (c+b)-th position points, are mutually video overlapping position points), the track segment formed from the a-th to the (a+b)-th position point in initial planned track A is taken as a video overlapping track segment, where a, b and c are all greater than 0.
In this embodiment, the initial planned trajectory is the initial planned trajectory where the video overlapping trajectory segment is located.
In this embodiment, the first video dividing moment is a preset path moment of a corresponding position point of the start point of the video overlapping track segment in the initial planning track.
In this embodiment, the second video dividing moment is a preset path moment of a corresponding position point of the end point of the video overlapping track segment in the initial planning track.
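A minimal sketch of the division step, assuming each video frame carries its acquisition moment and that each video overlapping track segment contributes one (first dividing moment, second dividing moment) window; the frame representation is an assumption for illustration.

    def divide_video(frames, windows):
        # frames: list of (acquisition_moment, frame); windows: list of (first_moment, second_moment).
        cut_before = {first for first, _ in windows}   # boundary before the frame at the first moment
        cut_after = {second for _, second in windows}  # boundary after the frame at the second moment
        segments, current = [], []
        for moment, frame in frames:
            if moment in cut_before and current:
                segments.append(current)
                current = []
            current.append((moment, frame))
            if moment in cut_after:
                segments.append(current)
                current = []
        if current:
            segments.append(current)
        return segments

    frames = [(t, f"f{t}") for t in range(6)]
    print(divide_video(frames, [(2, 3)]))
    # [[(0, 'f0'), (1, 'f1')], [(2, 'f2'), (3, 'f3')], [(4, 'f4'), (5, 'f5')]]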
The beneficial effects of the technology are as follows: the space-time coincidence ratio of the primary screening position point combination is compared with a coincidence ratio threshold value, position points which can be mutually coincident in an effective shooting space range are screened out, namely, position point combinations which are mutually video coincidence position points are screened out, track segments which are continuously mutually video coincidence position points are further screened out to serve as video coincidence track segments, video dividing limits are further determined based on video dividing moments determined by the video coincidence track segments, so that a scene video segment divided based on the dividing limits comprises a plurality of position points which are continuously and mutually video coincidence position points, and further the divided scene video segment is convenient for subsequent checksum calculation of the reality of the scene video segment.
Example 5
On the basis of embodiment 4, the first calculation sub-module includes:
the track segment matching unit is used for matching two video overlapping track segments which contain position points which are continuous video overlapping position points in different initial planning tracks to obtain a track segment matching result;
the video segment matching unit is used for carrying out corresponding matching on the corresponding object video segments based on the track segment matching result to obtain a plurality of video segment matching combinations;
the first calculation unit is used for calculating the first fidelity of each object scene video segment in the planning track object scene video currently acquired by all robots in a preset period based on the object scene matching degree between two object scene video segments in each video segment matching combination.
In this embodiment, two video overlapping track segments including position points that are continuous video overlapping position points in different initial planning tracks are matched, so as to obtain a track segment matching result, for example:
when the coincident track object of every position point from the a-th to the (a+b)-th position point in initial planned track A is initial planned track B (that is, the a-th to the (a+b)-th position points in initial planned track A and (b+1) consecutive position points in initial planned track B, such as the c-th to the (c+b)-th position points, are mutually video overlapping position points), the track segment formed from the a-th to the (a+b)-th position point in initial planned track A is taken as a video overlapping track segment, where a, b and c are all greater than 0;
and the video overlapping track segment A1 formed from the a-th to the (a+b)-th position point in initial planned track A is matched with the video overlapping track segment formed from the c-th to the (c+b)-th position point in initial planned track B, giving a track segment matching result.
In this embodiment, the track segment matching result is two mutually matched video overlapping track segments, and the mutually matched video overlapping track segments meet the following conditions: the two contained position points are video superposition position points.
In this embodiment, the video segment matching combination is a combination obtained after matching object scene video segments corresponding to two mutually matched video overlapping track segments included in the track segment matching result (the combination includes two mutually matched object scene video segments).
In this embodiment, the calculation method of the matching degree between two object video segments is, for example:
performing contour recognition and edge detection (for example, binary image-based or OpenCV contour detection algorithm) on video frames contained in the object scene video segment to obtain all recognition contours;
taking the coordinate deviation degree between different recognition contours as the contour shape deviation degree between corresponding different contours;
Judging that two recognition contours with the contour shape deviation degree not exceeding a deviation degree threshold belong to the same object;
summarizing a combination of two recognition profiles which are mutually judged to belong to the same object, so as to obtain a recognition profile set belonging to the same object (for example, when a recognition profile a is judged to belong to the same object with a recognition profile b and a recognition profile c, and a recognition profile b is judged to belong to the same object with a recognition profile f, and a recognition profile c is judged to belong to the same object with a recognition profile e and a recognition profile d, then summarizing a recognition profile a, a recognition profile b, a recognition profile c, a recognition profile d, a recognition profile e and a recognition profile f, so as to obtain a recognition profile set belonging to the same object);
calculating the matching degree between any two recognition contour sets belonging to the two object scene video segments (every pairwise combination of one recognition contour from each of the two recognition contour sets is formed, the matching degree between the two recognition contours in each combination is calculated, and the average of the matching degrees of all such combinations is taken as the matching degree of the two recognition contour sets currently being calculated; the matching degree between two recognition contours is calculated as follows: calculate the average of the coordinate differences of all points with the same ordinal number in the two recognition contours, take the ratio of this average coordinate difference to a difference threshold as the deviation degree of the two recognition contours, and take the difference between 1 and this deviation degree as the matching degree of the two recognition contours);
judging, for each recognition contour set in the two object scene video segments, whether the other object scene video segment contains a recognition contour set whose matching degree with it is not smaller than a matching degree threshold; if so, the corresponding two recognition contour sets are judged to be same-object recognition contour sets;
otherwise, the corresponding recognition contour set is judged to be a difference recognition contour set, and the region enclosed by all recognition contours of the difference recognition contour set in the video frames to which they belong is taken as a difference area;
the ratio of the total area of all difference areas in the object scene video segment to the total area of all video frames of the corresponding object scene video segment is taken as the difference degree of the corresponding object scene video segment;
when the sum of the difference degrees of the two currently calculated object scene video segments is greater than 1, the difference between 1 and the difference degree of each object scene video segment is taken as the matching degree of that object scene video segment with the other currently calculated object scene video segment, and the average of the matching degrees of the two currently calculated object scene video segments is taken as the object scene matching degree of the two currently calculated object scene video segments;
when the sum of the difference degrees of the two currently calculated object scene video segments is not greater than 1, the average of the difference degrees of the two currently calculated object scene video segments is taken as the comprehensive difference degree, and the difference between 1 and the comprehensive difference degree is taken as the object scene matching degree of the two currently calculated object scene video segments.
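The final combination rule can be sketched directly; contour extraction and contour set matching are elided here, and d_a, d_b stand for the difference degrees of the two object scene video segments being compared. As transcribed above, both branches reduce to 1 - (d_a + d_b) / 2.

    def object_scene_matching_degree(d_a, d_b):
        # d_a, d_b: difference degrees of the two object scene video segments.
        if d_a + d_b > 1.0:
            # average of the two per-segment matching degrees (1 - difference degree)
            return ((1.0 - d_a) + (1.0 - d_b)) / 2.0
        # otherwise 1 minus the comprehensive (mean) difference degree
        return 1.0 - (d_a + d_b) / 2.0

    print(object_scene_matching_degree(0.2, 0.3))  # approximately 0.75
    print(object_scene_matching_degree(0.6, 0.7))  # approximately 0.35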
In this embodiment, calculating the first fidelity of each object scene video segment in the planning track object scene video currently acquired by all robots in a preset period, based on the object scene matching degree between two object scene video segments in each video segment matching combination, includes:
performing outlier removal on all object scene matching degrees of the object scene video segment (i.e., removing abnormal maximum and minimum values, for example an object scene matching degree far greater than, or far smaller than, the other matching degrees), and taking the average of all object scene matching degrees remaining after the outlier removal as the first fidelity of the corresponding object scene video segment.
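A minimal sketch of this step; the exact outlier rule used here (dropping the single largest and smallest values when at least three remain) is an assumption.

    def first_fidelity(matching_degrees):
        # Outlier removal followed by averaging, per the step above.
        values = sorted(matching_degrees)
        if len(values) > 2:
            values = values[1:-1]   # discard the abnormal minimum and maximum
        return sum(values) / len(values) if values else 0.0

    print(first_fidelity([0.91, 0.88, 0.12, 0.90]))  # approximately 0.89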
The beneficial effects of the technology are as follows: based on the position points which are judged to be the video overlapping position points in different initial planning tracks, track segments with continuous overlapping shooting space ranges, namely track segment matching results, are obtained, matching of the video segments is achieved based on the track segment matching results, further calculation of the authenticity of the object scene video segments is achieved based on the object scene matching degree between the matched video segments, namely analysis based on the shooting ranges is achieved, the video overlapping tracks and the video are analyzed, and further mutual verification of a plurality of object scene video segments (namely planning track object scene videos) obtained at present is achieved based on the object scene matching degree between the obtained videos.
Example 6
On the basis of embodiment 2, the second calculation sub-module includes:
the continuity calculating unit is used for calculating the video frame continuity of all adjacent video frame combinations in the planning track object scene video, and taking the average of the video frame continuity of all adjacent video frame combinations of each video frame in the planning track object scene video as the comprehensive video continuity of the corresponding video frame;
and the second calculation unit is used for taking the average of the comprehensive video continuity of all video frames in each object scene video segment as the second fidelity of the corresponding object scene video segment.
In this embodiment, the adjacent video frame combination is a combination of adjacent video frames in the planned track scene video.
In this embodiment, calculating the video frame continuity of all adjacent video frame combinations in the planned track scene video includes:
determining feature points in the video frames of the adjacent video frame combination (for example, by a gray threshold method, i.e., extracting pixel points whose gray values are larger than a preset threshold as feature points, or by regarding the edge points detected by an edge detection algorithm as feature points);
matching the feature points in the two video frames of the adjacent video frame combination according to their relative positions to obtain matched feature point combinations (for example, if video frame x contains four feature points located respectively at its upper left, lower left, upper right and lower right, and video frame y likewise contains four feature points at its upper left, lower left, upper right and lower right, then the upper-left feature point of video frame x is matched with the upper-left feature point of video frame y, and so on for the remaining positions);
calculating the deviation degree of the coordinate values of the two feature points in each matched feature point combination (the ratio of the coordinate difference between the two feature points to a preset coordinate difference threshold);
taking the difference value between 1 and the deviation degree as the matching degree of the matching feature point combination;
and taking the average value of the matching degree of all the matching characteristic point combinations in the two video frames in the adjacent video frame combination as the video frame consistency of the adjacent video frame combination.
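A minimal sketch assuming the feature points of the two frames have already been extracted and paired by relative position, so they arrive as equally ordered (x, y) lists; the coordinate difference threshold is a preset parameter and the sample coordinates are illustrative.

    def video_frame_continuity(points_a, points_b, coord_threshold=50.0):
        # Each matched feature point combination contributes 1 minus its deviation degree;
        # the video frame continuity is the average over all combinations.
        degrees = []
        for (xa, ya), (xb, yb) in zip(points_a, points_b):
            deviation = (abs(xa - xb) + abs(ya - yb)) / coord_threshold
            degrees.append(1.0 - deviation)
        return sum(degrees) / len(degrees) if degrees else 0.0

    previous_frame = [(10, 12), (200, 15), (12, 180), (198, 182)]
    next_frame = [(12, 13), (203, 16), (13, 182), (200, 184)]
    print(video_frame_continuity(previous_frame, next_frame))  # close to 1, i.e. continuous frames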
In this embodiment, the comprehensive video continuity is a value indicating the degree of continuity between a video frame and its two adjacent video frames.
The beneficial effects of the technology are as follows: by calculating the video frame continuity of each video frame and two adjacent video frames in the object scene video, the situation that the object scene video is spliced by malicious intrusion is avoided greatly.
Example 7
On the basis of embodiment 1, the track updating module includes:
the initial acquisition subunit is used for acquiring all track object scene videos in an initial state in a preset mobile environment;
the video updating sub-module is used for updating all track object scene videos based on the real track object scene videos, the corresponding video acquisition time sequences and the initial planning track, and determining the video updating amount of all track object scene videos;
And the re-planning sub-module is used for re-planning the initial planning track of the robot based on the video updating quantity to obtain the re-planning track of the robot.
In this embodiment, the initial state is the latest track scene video in the preset mobile environment obtained before the start of the preset period of the current calculation.
In this embodiment, all the track scene videos are videos including track scene states before the current preset period starts in the preset mobile environment.
In this embodiment, the video acquisition time sequence is the sequence in which the robot acquires the real track object scene video, and also includes the acquisition time of each video frame in the real track object scene video.
In this embodiment, the video update amount is the set of updated local areas of video frames in all track object scene videos.
The beneficial effects of the technology are as follows: all track object scene videos are updated based on the screened real track object scene videos, and the initial planning track of the robot is correspondingly updated, so that the track object scene videos are updated timely and accurately, and the track planning of the robot is more reasonable.
Example 8
On the basis of embodiment 7, the video update sub-module includes:
The object scene information extraction unit is used for determining at least one object scene information of each position point in the initial planning track based on all real track object scene videos and the corresponding initial planning track;
the latest information determining unit is used for determining the acquisition time sequence of all the object scene information of each position point based on the video acquisition time sequence of the real track object scene video to which the object scene information belongs, and determining the latest object scene information of the corresponding position point in all the object scene information of each position point based on the acquisition time sequence of all the object scene information;
and the object scene information updating unit is used for updating all track object scene videos based on the latest object scene information of all the position points and determining the video updating amount of all the track object scene videos.
In this embodiment, the object scene information of a position point is the local video frame information (i.e., image information of a local area of a video frame), determined in the real track object scene video, that captures the object scene within a preset range centered on the corresponding position point.
In this embodiment, based on the video acquisition timing sequence of the real track object scene video to which the object scene information belongs, the acquisition timing sequence of all the object scene information of each position point is determined, namely:
Determining the acquisition time of the local video frame corresponding to the object scene information based on the video acquisition time sequence of the real track object scene video to which the local video frame corresponding to the object scene information belongs;
and sequencing the acquisition moments of all the object scene information of the position points to obtain the acquisition time sequence of all the object scene information of the corresponding position points.
In this embodiment, based on the acquisition timing of all the object scene information, determining the latest object scene information of the corresponding position point from all the object scene information of each position point includes:
and taking the object scene information corresponding to the latest acquisition moment in the acquisition time sequence of all the object scene information as the latest object scene information of the corresponding position point.
In this embodiment, the latest object scene information is the object scene information with the most recent acquisition time among all the object scene information of the position point.
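As an illustrative sketch only (the SceneInfo container and its field names are assumptions, not part of the original disclosure), selecting the latest object scene information of a position point from its acquisition time sequence can look like this:

import numpy as np
from dataclasses import dataclass
from typing import List

@dataclass
class SceneInfo:
    # Object scene information of one position point: the local video-frame
    # area and the acquisition time taken from the video acquisition time
    # sequence of the real track object scene video it belongs to.
    patch: np.ndarray
    acquired_at: float

def latest_scene_info(infos: List[SceneInfo]) -> SceneInfo:
    # Order all object scene information of the position point by acquisition
    # time and return the entry at the most recent acquisition moment.
    return max(infos, key=lambda info: info.acquired_at)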
The beneficial effects of the technology are as follows: and determining the acquisition time sequence of all the object scene information of the position point based on the video acquisition time sequence, further determining the latest object scene information, updating all the track object scene videos based on the latest object scene information, and further ensuring the updating timeliness of the track object scene videos.
Example 9
On the basis of embodiment 8, the scene information updating unit includes:
the object scene information updating subunit is used for determining initial object scene information of each position point in all track object scene videos, when the deviation degree of the latest object scene information of the position point and the initial object scene information exceeds a deviation threshold value, replacing the initial object scene information of the corresponding position point in all track object scene videos with the corresponding latest object scene information, otherwise, reserving the initial object scene information of the corresponding position point in all track object scene videos;
and the video update amount determining subunit is used for regarding updated latest object scene information contained in all track object scene videos as the video update amount of all track object scene videos.
In this embodiment, the initial object scene information of a position point is the local video frame information (i.e., image information of a local area of a video frame), determined in all track object scene videos, that captures the object scene within a preset range centered on the corresponding position point.
In this embodiment, the calculation method of the deviation degree between the latest object scene information and the initial object scene information of the position point is as follows:
calculating pixel difference values of pixel points at the same positions in a local area of the video frame corresponding to the latest object scene information and a local area of the video frame corresponding to the initial object scene information;
and taking the ratio of the pixel difference value to the average pixel value of the two video frames at the corresponding position as the local deviation degree of that position;
and taking the average value of the local deviation degrees of all positions of the video frame area as the deviation degree between the corresponding latest object scene information and the initial object scene information.
In this embodiment, the deviation threshold is a preset threshold used to judge whether the object scene information of the corresponding position point needs to be updated with the latest object scene information.
In this embodiment, replacing the initial object scene information of the corresponding position points in all track object scene videos with the corresponding latest object scene information includes:
and replacing the local area of the video frame corresponding to the initial object scene information of the corresponding position point in all the track object scene videos with the local area of the video frame corresponding to the latest object scene information.
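A minimal sketch of this deviation check and conditional replacement is given below; the reading of "pixel value of the corresponding position" as the mean pixel value of the two frames, the deviation threshold of 0.15, and the dictionary-based storage of local frame areas are assumptions for illustration, not the original implementation.

import numpy as np

def deviation_degree(latest_patch: np.ndarray, initial_patch: np.ndarray) -> float:
    # Per-pixel absolute difference between the two local video-frame areas,
    # divided by the pixel value at the same position (here taken as the mean
    # of the two frames' pixel values), averaged over the whole local area.
    latest = latest_patch.astype(float)
    initial = initial_patch.astype(float)
    diff = np.abs(latest - initial)
    mean_pixel = np.maximum((latest + initial) / 2.0, 1.0)  # guard against division by zero
    return float((diff / mean_pixel).mean())

def update_position_point(all_track_patches, point_id, latest_patch, threshold=0.15):
    # Replace the initial object scene information of the position point with
    # the latest one only when the deviation degree exceeds the threshold;
    # the replaced patch is what gets counted into the video update amount.
    initial_patch = all_track_patches[point_id]
    if deviation_degree(latest_patch, initial_patch) > threshold:
        all_track_patches[point_id] = latest_patch
        return latest_patch
    return None  # the initial object scene information is retained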
The beneficial effects of the technology are as follows: by judging whether the deviation degree between the latest object scene information and the initial object scene information of each position point exceeds the deviation threshold value, the judgment on whether the object scene information in all track object scene videos needs to be updated is realized, and corresponding updating is carried out, so that the accurate updating of all track object scene videos is realized.
Example 10:
on the basis of embodiment 7, the re-planning sub-module includes:
the fault section determining unit is used for determining a fault track section in all tracks based on the video updating quantity;
and the re-planning unit is used for determining an un-traversed track section in the initial planned track, re-planning the corresponding initial planned track based on the un-traversed track section, the fault track section and all tracks, and obtaining a re-planned track.
In this embodiment, the fault track segment is a track segment in all tracks, determined based on the video update amount, that is currently impassable.
In this embodiment, determining a fault track segment in all tracks based on the video update amount includes:
inputting the updated local video frame area contained in the video update amount, together with the complete video frame in which that local area is located, into a pre-trained fault track segment judgment model (a model trained on a large number of updated video frames calibrated as belonging to a fault track segment and updated video frames calibrated as not belonging to a fault track segment) to judge whether the corresponding position point belongs to a fault track segment;
and based on the judgment results, connecting the position points judged to belong to a fault track segment to obtain the fault track segments.
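The segment determination step might be sketched as follows, with the pre-trained fault track segment judgment model stood in by an arbitrary callable; the data layout (a mapping from position point to its updated local area and full frame) is an assumption for illustration.

def fault_track_segments(video_updates, fault_model, positions_in_order):
    # video_updates: {position point id: (updated local frame area, full video frame)}
    # fault_model:   callable returning True when the point belongs to a fault track segment
    # positions_in_order: position point ids listed along the track
    segments, current = [], []
    for point_id in positions_in_order:
        is_fault = point_id in video_updates and fault_model(*video_updates[point_id])
        if is_fault:
            current.append(point_id)      # extend the current fault track segment
        elif current:
            segments.append(current)      # a fault segment ends at the first passable point
            current = []
    if current:
        segments.append(current)
    return segments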
In this embodiment, the non-traversed track segment is the track segment of the initial planned track that the robot did not traverse.
In this embodiment, re-planning the corresponding initial planned track based on the non-traversed track segment and the fault track segment and all tracks, to obtain a re-planned track, including:
determining the current position point and the destination position point of the robot based on the non-traversed track segment;
and removing the latest determined fault track segment from all tracks, and determining a re-planning track between the current position point and the destination position point in the rest track segments.
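A sketch of this re-planning step under stated assumptions (all tracks represented as an adjacency list of position points; breadth-first search standing in for whatever planner is actually used) could be:

from collections import deque

def replan(track_graph, fault_points, current_point, destination_point):
    # Remove position points lying on fault track segments from all tracks,
    # then search a re-planned track from the robot's current position point
    # to the destination point over the remaining track segments.
    blocked = set(fault_points)
    queue = deque([[current_point]])
    visited = {current_point}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination_point:
            return path                   # the re-planned track as a list of position points
        for nxt in track_graph.get(node, []):
            if nxt not in visited and nxt not in blocked:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                           # no passable track remains between the two points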
The beneficial effects of the technology are as follows: the state of the track segments is judged based on the video update amount to determine the fault track segments, and, based on the non-traversed track segment of the initial planning track, the robot's remaining track is re-planned after the fault track segments are removed from all tracks, ensuring smooth movement control of the robot throughout the movement process.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A robot control system based on machine vision and trajectory planning, comprising:
the real-time acquisition module is used for controlling the robot to move according to the initial planned track and simultaneously controlling the robot to acquire the object scene video of the planned track in real time in the moving process;
the video screening module is used for screening real track object scene videos from the currently acquired planning track object scene videos based on space-time coincidence analysis results of initial planning tracks of all robots in a preset period and video frame consistency of the planning track object scene videos;
the track updating module is used for updating the initial planning track of the robot based on the real track object scene video and the initial planning track to obtain a re-planning track of the robot;
and the movement control module is used for continuously carrying out movement control on the robot based on the re-planning track to obtain a movement control result of the robot.
2. The robot control system based on machine vision and trajectory planning of claim 1, wherein the video screening module comprises:
the video dividing sub-module is used for dividing the currently acquired planning track object scene video based on the space-time coincidence analysis result of the initial planning tracks of all robots in a preset period to obtain a plurality of object scene video segments;
The first computing sub-module is used for computing the first fidelity of each object scene video segment based on the object scene matching degree among different object scene video segments in the planning track object scene video currently acquired by all robots in a preset period;
the second computing sub-module is used for computing the second fidelity of each object scene video segment based on the video frame consistency of the planning track object scene video;
and the video screening sub-module is used for determining the comprehensive reality based on the first reality and the second reality and screening out the real track object scene video from the currently acquired planning track object scene video based on the comprehensive reality.
3. The robot control system based on machine vision and trajectory planning of claim 2, wherein the video dividing sub-module comprises:
the real range determining unit is used for determining the real three-dimensional shooting range of each position point in all tracks based on a preset three-dimensional shooting range of the shooting device for acquiring the object scene video of the planned track and a three-dimensional space model of a preset moving environment;
the space coincidence degree calculating unit is used for taking two position points with coincident actual three-dimensional shooting ranges in initial planning tracks of different robots in a preset period as a primary screening position point combination and calculating the space coincidence degree of the actual three-dimensional shooting ranges of the two position points in the primary screening position point combination;
The time deviation degree calculation unit is used for calculating the path time deviation degree of two position points in the primary screening position point combination based on the preset path time of each position point in the initial planning track;
the space-time coincidence analysis unit is used for calculating the space-time coincidence of the primary screening position point combination based on the space coincidence of the primary screening position point combination and the path time deviation, and taking all the space-time coincidence as a space-time coincidence analysis result of the initial planning track of all the robots;
and the object scene video dividing unit is used for dividing the currently acquired planning track object scene video based on the coincidence analysis result to obtain a plurality of object scene video segments.
4. A robot control system based on machine vision and trajectory planning as claimed in claim 3, characterized in that the object scene video dividing unit comprises:
the coincidence screening subunit is used for judging two position points in the primary screening position point combination with the space-time coincidence degree exceeding the coincidence degree threshold value in the space-time coincidence degree analysis result as video coincidence position points;
the track segment screening subunit is used for taking a track segment formed by a plurality of video overlapping position points, which are continuously consistent with each other, of the overlapping track objects in each initial planning track as a video overlapping track segment;
The dividing moment determining subunit is used for taking the preset path moment of the starting point of the video overlapping track section in the initial planning track as the first video dividing moment corresponding to the initial planning track, and taking the preset path moment of the end point of the video overlapping track section in the initial planning track as the second video dividing moment corresponding to the initial planning track;
the dividing limit determining subunit is used for determining dividing limits between video frames of which the acquiring time is the first video dividing time corresponding to the initial planning track and adjacent previous video frames in the currently acquired planning track object video, and determining dividing limits between video frames of which the acquiring time is the second video dividing time corresponding to the initial planning track and adjacent next video frames in the currently acquired planning track object video;
and the object scene video dividing subunit is used for dividing the video based on all dividing limits in the currently acquired planning track object scene video to obtain a plurality of object scene video segments.
5. The robot control system of claim 4, wherein the first computing sub-module comprises:
The track segment matching unit is used for matching two video overlapping track segments which contain position points which are continuous video overlapping position points in different initial planning tracks to obtain a track segment matching result;
the video segment matching unit is used for carrying out corresponding matching on the corresponding object video segments based on the track segment matching result to obtain a plurality of video segment matching combinations;
the first calculation unit is used for calculating the first reality of each object scene video segment in the planning track object scene video currently acquired by all robots in a preset period based on the object scene matching degree between two object scene video segments in each video segment matching combination.
6. The robot control system of claim 2, wherein the second computing sub-module comprises:
the continuity calculating unit is used for calculating the video frame continuity of all adjacent video frame combinations in the planning track object scene video and taking the average value of the video continuity of all adjacent video frame combinations of each video frame in the planning track object scene video as the comprehensive video continuity of the corresponding video frame;
and the second calculation unit is used for taking the average value of the comprehensive video consistency of all video frames in each object scene video segment as a second fidelity of the corresponding object scene video segment.
7. The robot control system of claim 1, wherein the trajectory update module comprises:
the initial acquisition subunit is used for acquiring all track object scene videos in an initial state in a preset mobile environment;
the video updating sub-module is used for updating all track object scene videos based on the real track object scene videos, the corresponding video acquisition time sequences and the initial planning track, and determining the video updating amount of all track object scene videos;
and the re-planning sub-module is used for re-planning the initial planning track of the robot based on the video updating quantity to obtain the re-planning track of the robot.
8. The robot control system based on machine vision and trajectory planning of claim 7, wherein the video update sub-module comprises:
the object scene information extraction unit is used for determining at least one object scene information of each position point in the initial planning track based on all real track object scene videos and the corresponding initial planning track;
the latest information determining unit is used for determining the acquisition time sequence of all the object scene information of each position point based on the video acquisition time sequence of the real track object scene video to which the object scene information belongs, and determining the latest object scene information of the corresponding position point in all the object scene information of each position point based on the acquisition time sequence of all the object scene information;
And the object scene information updating unit is used for updating all track object scene videos based on the latest object scene information of all the position points and determining the video updating amount of all the track object scene videos.
9. The robot control system based on machine vision and trajectory planning of claim 8, wherein the object scene information updating unit comprises:
the object scene information updating subunit is used for determining initial object scene information of each position point in all track object scene videos, when the deviation degree of the latest object scene information of the position point and the initial object scene information exceeds a deviation threshold value, replacing the initial object scene information of the corresponding position point in all track object scene videos with the corresponding latest object scene information, otherwise, reserving the initial object scene information of the corresponding position point in all track object scene videos;
and the video update amount determining subunit is used for regarding updated latest object scene information contained in all track object scene videos as the video update amount of all track object scene videos.
10. The robot control system based on machine vision and trajectory planning of claim 7, wherein the re-planning sub-module comprises:
the fault section determining unit is used for determining a fault track section in all tracks based on the video updating quantity;
And the re-planning unit is used for determining an un-traversed track section in the initial planned track, re-planning the corresponding initial planned track based on the un-traversed track section, the fault track section and all tracks, and obtaining a re-planned track.
CN202310658632.6A 2023-06-06 2023-06-06 Robot control system based on machine vision and track planning Active CN116408807B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310658632.6A CN116408807B (en) 2023-06-06 2023-06-06 Robot control system based on machine vision and track planning

Publications (2)

Publication Number Publication Date
CN116408807A CN116408807A (en) 2023-07-11
CN116408807B true CN116408807B (en) 2023-08-15

Family

ID=87059661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310658632.6A Active CN116408807B (en) 2023-06-06 2023-06-06 Robot control system based on machine vision and track planning

Country Status (1)

Country Link
CN (1) CN116408807B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117226843B (en) * 2023-09-27 2024-02-27 盐城工学院 Robot movement track control method and system based on visual servo

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108362294A (en) * 2018-03-05 2018-08-03 中山大学 Drawing method is built in a kind of more vehicles collaboration applied to automatic Pilot
CN109100730A (en) * 2018-05-18 2018-12-28 北京师范大学-香港浸会大学联合国际学院 A kind of fast run-up drawing method of more vehicle collaborations
CN109579843A (en) * 2018-11-29 2019-04-05 浙江工业大学 Multirobot co-located and fusion under a kind of vacant lot multi-angle of view build drawing method
CN111121753A (en) * 2019-12-30 2020-05-08 炬星科技(深圳)有限公司 Robot joint graph building method and device and computer readable storage medium
CN111609848A (en) * 2020-05-21 2020-09-01 北京洛必德科技有限公司 Intelligent optimization method and system for multi-robot cooperation mapping
CN113570716A (en) * 2021-07-28 2021-10-29 视辰信息科技(上海)有限公司 Cloud three-dimensional map construction method, system and equipment
CN113701741A (en) * 2021-08-03 2021-11-26 哈尔滨工程大学 Heuristic multi-robot SLAM map fusion method
CN115597580A (en) * 2022-09-21 2023-01-13 山东新一代信息产业技术研究院有限公司(Cn) Cloud-based robot joint graph building method and system based on cloud cooperation
CN115729231A (en) * 2021-08-30 2023-03-03 睿普育塔机器人株式会社 Multi-robot route planning

Also Published As

Publication number Publication date
CN116408807A (en) 2023-07-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant