CN116740103A - Monocular camera-based water surface floater collision prediction method, monocular camera-based water surface floater collision prediction equipment and monocular camera-based water surface floater collision prediction medium - Google Patents


Info

Publication number
CN116740103A
CN116740103A
Authority
CN
China
Prior art keywords
water surface
coordinates
floater
moment
centroid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310769394.6A
Other languages
Chinese (zh)
Other versions
CN116740103B (en
Inventor
谢成磊
房爱印
尹曦萌
杨晓瑞
袁康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Intelligent Technology Co Ltd
Original Assignee
Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Intelligent Technology Co Ltd filed Critical Inspur Intelligent Technology Co Ltd
Priority to CN202310769394.6A priority Critical patent/CN116740103B/en
Publication of CN116740103A publication Critical patent/CN116740103A/en
Application granted granted Critical
Publication of CN116740103B publication Critical patent/CN116740103B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a monocular camera-based method, device, and medium for predicting collisions of water surface floating objects, addressing the problem that such objects follow unstable motion paths and vary in specification, shape, and appearance, so that they cannot be accurately positioned and pose a collision risk. The method comprises: acquiring the profile of the floating object and the image coordinates of each profile point, and calculating the relative position of the current profile point and the monocular camera; determining the geographic coordinates and projection coordinates of the floating object profile from that relative position and the geographic coordinates of the camera; calculating, from the projection coordinates, the area of the floating object and its centroid coordinates at the current moment and the next moment, and from these the displacement of the floating object at the next moment; calculating the movement direction and speed from that displacement, and calculating the predicted centroid coordinates at the moment to be predicted from the centroid coordinates at the current moment together with the movement direction and speed; and determining whether a collision risk exists between the water surface floating object and the water surface unmanned device from the predicted centroid coordinates and profiles of both.

Description

Monocular camera-based water surface floater collision prediction method, monocular camera-based water surface floater collision prediction equipment and monocular camera-based water surface floater collision prediction medium
Technical Field
The application relates to the field of computer technology, and in particular to a monocular camera-based method, device, and medium for predicting collisions of water surface floating objects.
Background
In recent years, with the continued spread of the technology, autonomous operation of unmanned devices on the water surface has become increasingly common. However, as environmental problems grow more serious, water surfaces are being polluted and damaged to varying degrees: floating objects of various specifications frequently appear on the water and obstruct the normal operation of water surface unmanned devices.
Although water surface unmanned devices are commonly equipped with various sensors that acquire operating information in real time, a floating object's path through the water is not fixed, and floating objects differ in specification, shape, and appearance. The sensors alone cannot position them accurately, so a certain collision risk exists while a water surface unmanned device is operating.
Disclosure of Invention
The embodiments of the present application provide a monocular camera-based method, device, and medium for predicting collisions of water surface floating objects, solving the technical problem in the prior art that a water surface floating object moves along an unstable path and varies in specification, shape, and appearance, so that it cannot be accurately positioned by the sensors of a water surface unmanned device alone, leaving a collision risk between the two.
In one aspect, an embodiment of the present application provides a method for predicting a collision of a water surface float based on a monocular camera, including:
acquiring a floating object profile of a water surface floating object and the image coordinates corresponding to each profile point, and, on the basis of a pre-constructed water surface digital elevation model, calculating the relative position of the current profile point and a monocular camera from the image coordinates of each profile point and the predetermined camera geographic coordinates of the monocular camera;
determining geographic coordinates and projection coordinates of the profile of the floater according to the relative positions of the current profile point and the monocular camera and the geographic coordinates of the camera;
calculating the floating object area of the water surface floating object, the centroid coordinate at the current moment and the centroid coordinate at the next moment according to the projection coordinates of the profile of the floating object, and calculating the floating object displacement of the water surface floating object at the next moment according to the centroid coordinate at the current moment and the centroid coordinate at the next moment;
calculating the movement direction and speed of the water surface floating object according to the displacement of the floating object at the next moment, and calculating the predicted centroid coordinate at the moment to be predicted according to the centroid coordinate, the movement direction and the speed at the current moment;
and determining whether collision risks exist between the water surface floaters and the water surface unmanned equipment according to the predicted centroid coordinates and the floaters outline of the water surface floaters and the predicted centroid coordinates and the equipment outline of the water surface unmanned equipment.
In one implementation manner of the present application, the calculating, according to the projection coordinates of the float profile, the float area of the water surface float, the centroid coordinates at the current moment and the centroid coordinates at the next moment specifically includes:
determining the number of polygon sides corresponding to the floater outline and the vertex coordinates corresponding to polygon vertexes of the floater outline, and determining the floater area of the water surface floater according to the number of polygon sides and the vertex coordinates corresponding to each polygon vertex;
determining the vertex coordinates of each polygon vertex at the current moment, and determining the centroid coordinates of the current moment according to the vertex coordinates of each polygon vertex at the current moment;
and determining the vertex coordinates of each polygon vertex at the next moment, and determining the centroid coordinates of the next moment according to the vertex coordinates of each polygon vertex at the next moment.
In one implementation manner of the present application, the determining whether the collision risk exists between the surface float and the surface unmanned device according to the predicted centroid coordinates and the float profile of the surface float and the predicted centroid coordinates and the device profile of the surface unmanned device specifically includes:
acquiring an equipment contour of unmanned equipment on a water surface, and determining the number of polygon sides corresponding to the equipment contour and vertex coordinates corresponding to polygon vertexes of the equipment contour;
determining the centroid coordinates of the unmanned equipment on the water surface according to the polygon edge number and the vertex coordinates of the equipment outline;
determining the distance between the water surface unmanned equipment and the water surface floater at the current moment according to the equipment outline of the water surface unmanned equipment, the floater outline of the water surface floater, the centroid coordinates of the water surface unmanned equipment at the current moment and the centroid coordinates of the water surface floater at the current moment, and determining the distance between the water surface unmanned equipment and the water surface floater at the moment to be predicted;
and determining the change of the distance between the water surface unmanned equipment and the water surface floater from the current moment to the moment to be predicted, and determining whether the water surface unmanned equipment is overlapped with the water surface floater according to the change of the distance, wherein if so, the water surface unmanned equipment and the water surface floater have collision risks.
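The distance-change and overlap test described above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the function names are mine, and each outline is conservatively approximated by a bounding circle around its centroid, flagging a risk when the centroid distance both shrinks toward the moment to be predicted and drops below the sum of the two radii.

```python
import math

def bounding_radius(outline, centroid):
    """Largest centroid-to-vertex distance: a conservative bounding circle."""
    return max(math.dist(centroid, v) for v in outline)

def collision_risk(float_outline, float_c_now, float_c_pred,
                   dev_outline, dev_c_now, dev_c_pred):
    """Risk when the centroid distance shrinks from the current moment to the
    moment to be predicted AND the predicted distance is smaller than the sum
    of the two bounding radii (the outlines could then overlap)."""
    d_now = math.dist(float_c_now, dev_c_now)
    d_pred = math.dist(float_c_pred, dev_c_pred)
    reach = (bounding_radius(float_outline, float_c_now) +
             bounding_radius(dev_outline, dev_c_now))
    return d_pred < d_now and d_pred < reach
```

A bounding circle over-reports contact for elongated shapes; an exact polygon-intersection test could replace it without changing the distance-change logic.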
In one implementation manner of the present application, the determining the geographic coordinates and the projection coordinates of the float profile according to the relative position of the current profile point and the monocular camera and the geographic coordinates of the camera specifically includes:
converting the geographic coordinates of the camera of the monocular camera into space rectangular coordinates taking the mass center of the earth as the center, and determining the space rectangular coordinates corresponding to the current contour point according to the space rectangular coordinates of the monocular camera and the relative positions of the current contour point and the monocular camera;
and converting the space rectangular coordinates of the current contour point into corresponding geographic coordinates, and converting the geographic coordinates of the current contour point into corresponding projection coordinates.
In one implementation manner of the present application, the calculating the movement direction and speed of the surface float according to the displacement of the float at the next moment specifically includes:
determining a first time difference between the current moment and the next moment, and determining the float displacement of the water surface float in the X direction and the float displacement in the Y direction at the next moment;
and determining the speed of the water surface floating object in the X direction according to the first time difference value and the floating object displacement in the X direction, and determining the speed of the water surface floating object in the Y direction according to the first time difference value and the floating object displacement in the Y direction.
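The per-axis speed computation above reduces to dividing each displacement component by the first time difference; a one-function sketch (the function name and argument layout are assumptions of mine):

```python
def velocity_xy(c_now, c_next, t_now, t_next):
    """Per-axis speed from the centroid displacement over the first time difference."""
    dt = t_next - t_now                  # first time difference
    dx = c_next[0] - c_now[0]            # float displacement in the X direction
    dy = c_next[1] - c_now[1]            # float displacement in the Y direction
    return dx / dt, dy / dt
```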
In one implementation manner of the present application, the calculating the predicted centroid coordinate of the time to be predicted according to the centroid coordinate, the motion direction and the speed of the current time specifically includes:
determining a second time difference value between the moment to be predicted and the current moment, and determining the centroid abscissa of the water surface floater at the moment to be predicted according to the second time difference value, the speed in the X direction and the centroid abscissa of the current moment;
and determining the centroid ordinate of the water surface floater at the moment to be predicted according to the second time difference value, the speed in the Y direction and the centroid ordinate of the current moment, and determining the centroid coordinate at the moment to be predicted according to the centroid abscissa and the centroid ordinate at the moment to be predicted.
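The prediction step above is a linear extrapolation over the second time difference; a minimal sketch (names are mine, not the patent's):

```python
def predict_centroid(c_now, vx, vy, t_now, t_pred):
    """Centroid at the moment to be predicted, from the current centroid,
    the per-axis speeds, and the second time difference."""
    dt2 = t_pred - t_now                 # second time difference
    return (c_now[0] + vx * dt2, c_now[1] + vy * dt2)
```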
In one implementation manner of the present application, the calculating, based on the pre-constructed water surface digital elevation model, the relative position of the current contour point and the monocular camera according to the image coordinates of each contour point and the pre-determined camera geographic coordinates of the monocular camera specifically includes:
acquiring a predetermined camera geographic coordinate of a monocular camera, and inputting image coordinate information of each contour point into a water surface digital elevation model based on the constructed water surface digital elevation model;
and calculating a difference value between the image coordinates of the current contour point and the geographic coordinates of the camera of the monocular camera, and determining the relative position of the current contour point and the monocular camera according to the difference value.
In one implementation manner of the present application, the acquiring the profile of the floating object on the water surface and the image coordinates corresponding to each profile point specifically includes:
acquiring a real-time video stream of the water surface through monitoring equipment, and determining a video frame image of the water surface in the real-time video stream;
based on an AI image segmentation technology, extracting a floater outline of a water surface floater under an image coordinate system from a video frame image of the water surface, and respectively determining each outline point on the floater outline;
and respectively acquiring image coordinates of each contour point on the profile of the floater under an image coordinate system, and acquiring a water level value of the water surface so as to construct a water surface digital elevation model according to the water level value of the water surface.
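The segmentation model itself is outside the scope of this sketch, but once it returns a binary mask, extracting profile points in the image coordinate system can look like the following (an assumption of mine, not the patent's method: a pixel is a profile point if it is foreground with a 4-connected background or out-of-image neighbour):

```python
def contour_points(mask):
    """Boundary pixels of a binary segmentation mask, as (u, v) image coordinates."""
    h, w = len(mask), len(mask[0])
    pts = []
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < h and 0 <= nc < w) or not mask[nr][nc]:
                    pts.append((c, r))   # column = u, row = v
                    break
    return pts
```

A production pipeline would more likely take the contour directly from the segmentation library (e.g. an OpenCV-style contour extraction) rather than re-derive it from the mask.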
On the other hand, the embodiment of the application also provides a monocular camera-based water surface floater collision prediction device, which comprises:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a monocular camera-based water surface float collision prediction method as described above.
In another aspect, embodiments of the present application also provide a non-volatile computer storage medium storing computer-executable instructions configured to:
the method for predicting the collision of the water surface floaters based on the monocular camera is described above.
The embodiment of the application provides a monocular camera-based method, equipment and medium for predicting collision of water surface floaters, which at least comprise the following beneficial effects:
the relative position of the water surface floaters and the monocular camera can be determined through the image coordinates corresponding to the floater outline of the obtained water surface floaters and the camera geographical coordinates of the monocular camera, so that the geographical coordinates and the projection coordinates of the floater outline are determined according to the relative position and the camera geographical position, the monocular camera is generally deployed in hardware equipment, the requirement on hardware is low, the used water surface digital elevation model is not required to be measured in a traditional mode, and the model can be constructed according to the water level value; according to the projection coordinates, the floating object area of the water surface floating object, the centroid coordinates of the current moment and the next moment can be calculated, the floating object displacement of the water surface floating object at the next moment and the movement direction and speed of the water surface floating object are obtained, and further the predicted centroid coordinates of the moment to be predicted are obtained; the distance between the water surface floater and the water surface unmanned equipment can be determined through the floater outline and the predicted centroid coordinates of the water surface floater and the equipment outline and the predicted centroid coordinates of the water surface unmanned equipment, so that whether collision risk exists or not is determined.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic flow chart of a monocular camera-based method for predicting collision of water surface floaters according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a water surface digital elevation model generated by dividing grids on a water surface according to an embodiment of the present application;
fig. 3 is a schematic view of projection coordinates of each contour point in a float contour of a water surface float according to an embodiment of the present application;
fig. 4 is a schematic diagram of an internal structure of a monocular camera-based water surface float collision prediction apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiments of the present application provide a monocular camera-based water surface float collision prediction method, device, and medium. The relative position of the water surface float and the monocular camera can be determined from the image coordinates corresponding to the acquired float profile and the camera geographic coordinates of the monocular camera, so that the geographic coordinates and projection coordinates of the float profile are determined from that relative position and the camera's geographic position. Since a monocular camera is usually already deployed in the hardware, the hardware requirements are low, and the water surface digital elevation model used need not be surveyed in the traditional way but can be constructed from the water level value. From the projection coordinates, the float area and the centroid coordinates at the current moment and the next moment can be calculated, yielding the float displacement at the next moment and the movement direction and speed of the float, and in turn the predicted centroid coordinates at the moment to be predicted. Finally, the distance between the water surface float and the water surface unmanned device can be determined from the float profile and predicted centroid coordinates of the float together with the device profile and predicted centroid coordinates of the unmanned device, so as to determine whether a collision risk exists.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flow chart of a monocular camera-based method for predicting collision of water surface floaters according to an embodiment of the present application. As shown in fig. 1, the method for predicting the collision of the water surface floating object based on the monocular camera provided by the embodiment of the application comprises the following steps:
101. acquiring a floater profile of the water surface floater and image coordinates corresponding to each profile point, and calculating the relative position of the current profile point and the monocular camera based on the pre-constructed water surface digital elevation model and according to the image coordinates of each profile point and the pre-determined camera geographic coordinates of the monocular camera.
Large water surface floating objects in a river channel pose a serious hazard to water surface unmanned devices operating there; measuring the area of a floating object and predicting its movement path in advance helps protect such devices. The server first needs to acquire the float profile of the water surface float and the image coordinates of all profile points in that profile, and then, on the basis of a pre-constructed water surface digital elevation model, calculate the relative position between the current profile point of the water surface float and the monocular camera from the acquired image coordinates of the profile points and the known camera geographic coordinates of the monocular camera.
Specifically, the server can acquire a real-time video stream of the water surface through the monitoring device, and determines a video frame image of the water surface in the real-time video stream, so that the water surface floaters are segmented in the video frame image of the water surface based on an AI image segmentation technology, and the outline of the floaters under an image coordinate system is extracted, wherein the image coordinate system refers to a coordinate system determined by the video image. The server respectively determines all the contour points on the contour of the floater, and then the server respectively acquires the image coordinates of all the contour points on the contour of the floater under an image coordinate system.
A digital elevation model (Digital Elevation Model, DEM) is a physical ground model that digitally simulates the terrain (i.e., a digital representation of the surface morphology of the terrain) from a limited set of elevation data, representing ground elevation as an ordered array of values. The water surface digital elevation model (Water Digital Elevation Model, WDEM), short for Water + DEM, denotes a digital elevation model of the water surface.
Because the monitoring range of the camera is limited, the water surface within the camera's view can be treated as a single elevation. The server constructs the water surface digital elevation model WDEM from the current water level value, and uses it in place of the digital elevation model DEM used in monocular camera positioning techniques. In addition, the water surface is nearly flat within a limited range, so the WDEM grid size can be set relatively large, which reduces the data volume of the WDEM and greatly shortens computation time.
Fig. 2 is a schematic diagram of a water surface digital elevation model generated by dividing grids on a water surface according to an embodiment of the present application. As shown in fig. 2, the flat part in the figure is the water surface, and the server performs gridding on the water surface, so that the water surface can be divided into a plurality of grids, and further a water surface digital elevation model corresponding to the water level value of the water surface is generated.
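The gridding step can be sketched as follows. The data layout and function names are my own assumptions, not the patent's; the key property shown is that every cell of a flat-water WDEM carries the same elevation (the current water level), which is why a coarse grid suffices.

```python
def build_wdem(x0, y0, width, height, cell_size, water_level):
    """Flat-water WDEM: a coarse grid whose every cell holds the water level."""
    nx = int(width // cell_size)
    ny = int(height // cell_size)
    return {"origin": (x0, y0), "cell": cell_size,
            "elev": [[water_level] * nx for _ in range(ny)]}

def wdem_elevation(wdem, x, y):
    """Elevation lookup; constant over the modelled water surface."""
    i = int((y - wdem["origin"][1]) // wdem["cell"])
    j = int((x - wdem["origin"][0]) // wdem["cell"])
    return wdem["elev"][i][j]
```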
The server obtains the camera geographic coordinates of the monocular camera, and inputs the image coordinate information of each contour point into the water surface digital elevation model based on the constructed water surface digital elevation model, so that the difference between the image coordinates of the current contour point and the camera geographic coordinates of the monocular camera can be calculated, and the relative position of the current contour point and the monocular camera is determined according to the difference between the image coordinates of the current contour point and the camera geographic coordinates of the monocular camera.
102. And determining the geographic coordinates and the projection coordinates of the profile of the floater according to the relative positions of the current profile point and the monocular camera and the geographic coordinates of the camera.
Because the relative positions of the current contour point in the floater contour and the monocular camera and the geographic position of the camera of the monocular camera are determined, the server can calculate the geographic coordinates and the projection coordinates of the floater contour according to the relative positions of the current contour point in the floater contour and the monocular camera and the geographic position of the camera of the monocular camera.
It should be noted that a geographic coordinate system can be thought of as longitude and latitude: positions on the earth's surface are defined on a three-dimensional sphere and referenced by longitude and latitude. A projected coordinate system is obtained by projecting the geographic coordinate system onto a plane; its unit is the metre.
Specifically, the server converts the camera geographic coordinates of the monocular camera into spatial rectangular coordinates (x, y, z) centered on the earth's centroid, determines the spatial rectangular coordinates (x1, y1, z1) corresponding to the current contour point from the spatial rectangular coordinates (x, y, z) of the monocular camera and the relative position between the current contour point and the monocular camera, then converts the spatial rectangular coordinates (x1, y1, z1) of the current contour point into the corresponding geographic coordinates (L, B, H), and converts the geographic coordinates (L, B, H) of the current contour point into the corresponding projection coordinates (proj_x, proj_y).
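The patent treats these conversions as standard. A minimal pure-Python sketch under WGS-84 assumptions follows; the function names are mine, the ellipsoid parameters are the standard WGS-84 values, and the final step to planar projection coordinates (proj_x, proj_y) is omitted since it depends on the chosen map projection.

```python
import math

A_WGS84 = 6378137.0            # WGS-84 semi-major axis (m)
E2 = 0.00669437999014          # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Geographic (deg, deg, m) -> earth-centred spatial rectangular (x, y, z)."""
    lat, lon = math.radians(lat), math.radians(lon)
    n = A_WGS84 / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    return ((n + h) * math.cos(lat) * math.cos(lon),
            (n + h) * math.cos(lat) * math.sin(lon),
            (n * (1 - E2) + h) * math.sin(lat))

def ecef_to_geodetic(x, y, z):
    """ECEF -> geographic (L, B, H) by fixed-point iteration on the latitude."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - E2))        # initial guess
    for _ in range(10):
        n = A_WGS84 / math.sqrt(1 - E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1 - E2 * n / (n + h)))
    return math.degrees(lat), math.degrees(lon), h

def contour_point_ecef(cam_ecef, rel):
    """Contour-point ECEF = camera ECEF plus the relative position vector."""
    return tuple(c + r for c, r in zip(cam_ecef, rel))
```

In practice a geodesy library (e.g. PROJ bindings) would be used instead of hand-rolled conversions; this sketch only makes the chain camera → ECEF → contour point → (L, B, H) concrete.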
It should be noted that, in the embodiment of the present application, the conversion from the space rectangular coordinates to the corresponding geographic coordinates and the conversion from the geographic coordinates to the corresponding projection coordinates are simple transformations based on linear algebra in the prior art, which is not specifically described in the present application.
Fig. 3 is a schematic view of the projection coordinates of each contour point in a float contour of a water surface float according to an embodiment of the present application. As shown in fig. 3, the server obtains the relative position of the monocular camera and the current profile point of the float profile; it then derives the spatial rectangular coordinates of the current profile point from the spatial rectangular coordinates obtained from the camera geographic coordinates, converts those spatial rectangular coordinates to obtain the geographic coordinates of the current profile point, and converts the geographic coordinates to obtain its projection coordinates.
103. According to the projection coordinates of the profile of the floater, calculating the area of the floater of the water surface floater, the centroid coordinates of the current moment and the centroid coordinates of the next moment, and calculating the displacement of the floater of the water surface floater at the next moment according to the centroid coordinates of the current moment and the centroid coordinates of the next moment.
The server can calculate the floating object area of the water surface floating object and the centroid coordinates of the current moment and the next moment according to the projection coordinates of the obtained floating object outline, and further calculate the floating object displacement of the water surface floating object in the period from the current moment to the next moment according to the centroid coordinates of the water surface floating object at the current moment and the next moment.
Specifically, the server determines the number of polygon sides corresponding to the float profile and the vertex coordinates of the polygon vertices of that profile, and computes the float area of the water surface float from the number of sides and the vertex coordinates of each vertex. It determines the vertex coordinates of each polygon vertex at the current moment and, from them, the centroid coordinates at the current moment; it likewise determines the vertex coordinates of each vertex at the next moment and, from them, the centroid coordinates at the next moment.
The floater area in the present application is calculated according to the Shoelace formula, which is shown as follows:

A = (1/2) × |(x_1y_2 − x_2y_1) + (x_2y_3 − x_3y_2) + … + (x_ny_1 − x_1y_n)|

By simplifying the above formula, the following formula is obtained:

A = (1/2) × |Σ_{i=1}^{n} (x_iy_{i+1} − x_{i+1}y_i)|, where x_{n+1} = x_1 and y_{n+1} = y_1.

It should be noted that, in the embodiment of the present application, A represents the floater area of the water surface floater, n represents the number of polygon sides of the polygon corresponding to the floater contour, i represents the order of a polygon vertex in the polygon, i = 1, 2, …, n, and (x_i, y_i) represents the vertex coordinates of the i-th polygon vertex.
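The area computation described above (the shoelace formula) can be sketched directly from the vertex list; this is a generic implementation of the named formula, not code from the patent:

```python
def polygon_area(vertices):
    """Shoelace formula: area of a simple polygon from its vertex
    coordinates (x_i, y_i), with vertex n+1 wrapping back to vertex 1."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap: x_{n+1} = x_1
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

For the unit square [(0, 0), (1, 0), (1, 1), (0, 1)] this yields an area of 1, regardless of whether the contour is traversed clockwise or counter-clockwise.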
The calculation formula of the centroid coordinates in the present application is as follows:

Cx = (1/(6A)) × Σ_{i=1}^{n} (x_i + x_{i+1})(x_iy_{i+1} − x_{i+1}y_i)
Cy = (1/(6A)) × Σ_{i=1}^{n} (y_i + y_{i+1})(x_iy_{i+1} − x_{i+1}y_i)

It should be noted that, in the embodiment of the present application, Cx represents the centroid abscissa, Cy represents the centroid ordinate, (Cx, Cy) represents the centroid coordinates, and A here is the signed area from the Shoelace formula (without the absolute value), so the result is independent of the vertex ordering.
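A sketch of the centroid computation that pairs with the Shoelace area; it uses the standard signed-area polygon centroid formula, which is an assumption about the exact variant the patent intends:

```python
def polygon_centroid(vertices):
    """Centroid (Cx, Cy) of a simple polygon via the signed-area formula.
    Using the signed area makes the result independent of vertex ordering."""
    n = len(vertices)
    a2 = 0.0          # accumulates twice the signed area
    cx = cy = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        cross = x1 * y2 - x2 * y1
        a2 += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    a = a2 / 2.0      # signed area A
    return cx / (6.0 * a), cy / (6.0 * a)
```

For the unit square the centroid comes out at (0.5, 0.5), as expected.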
The server determines a second time difference between the moment to be predicted and the current moment, determines the centroid abscissa of the water surface floater at the moment to be predicted according to the second time difference, the speed of the water surface floater in the X direction and the centroid abscissa at the current moment, determines the centroid ordinate of the water surface floater at the moment to be predicted according to the second time difference, the speed of the water surface floater in the Y direction and the centroid ordinate at the current moment, and then determines the centroid coordinates at the moment to be predicted according to the centroid abscissa and the centroid ordinate at the moment to be predicted.
104. And calculating the movement direction and speed of the water surface floating object according to the displacement of the floating object at the next moment, and calculating the predicted centroid coordinate at the moment to be predicted according to the centroid coordinate, the movement direction and the speed at the current moment.
The server can calculate the movement direction and speed of the water surface floating object in the period from the current moment to the next moment according to the displacement of the floating object at the next moment, and further can calculate the predicted centroid coordinate of the moment to be predicted according to the centroid coordinate, the movement direction and the speed of the water surface floating object at the current moment.
Specifically, the server needs to determine a first time difference between the current moment and the next moment, and determine the floater displacement of the water surface floater in the X direction and in the Y direction at the next moment. The server can then determine the speed of the water surface floater in the X direction according to the first time difference and the floater displacement in the X direction, and the speed in the Y direction according to the first time difference and the floater displacement in the Y direction.
The speed calculation formula of the water surface floater in the present application is as follows:

Vx = Δx / (t1 − t0)
Vy = Δy / (t1 − t0)

It should be noted that, in the embodiment of the present application, Vx represents the speed of the water surface floater in the X direction in the time interval from time t0 to time t1, Vy represents the speed of the water surface floater in the Y direction in the same interval, and Δx and Δy represent the floater displacement in the X direction and in the Y direction over that interval.
The calculation formula of the predicted centroid coordinates of the water surface floaters at the moment to be predicted is as follows:
X = Xc + (t − tc) × Vx
Y = Yc + (t − tc) × Vy

It should be noted that, in the embodiment of the present application, X represents the centroid abscissa of the water surface floater at the moment to be predicted, Y represents the centroid ordinate of the water surface floater at the moment to be predicted, Xc represents the centroid abscissa of the water surface floater at the current moment, Yc represents the centroid ordinate of the water surface floater at the current moment, tc represents the current moment, and t represents the moment to be predicted.
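The two steps above (velocity from two centroid observations, then linear extrapolation to the moment to be predicted) can be sketched as follows; the function names are illustrative, not from the patent:

```python
def velocity(c0, c1, t0, t1):
    """Speed (Vx, Vy) over the interval [t0, t1] from two centroids."""
    dt = t1 - t0
    return (c1[0] - c0[0]) / dt, (c1[1] - c0[1]) / dt

def predict_centroid(c_now, v, t_now, t_pred):
    """Constant-velocity extrapolation: X = Xc + (t - tc) * Vx, likewise Y."""
    dt = t_pred - t_now
    return c_now[0] + dt * v[0], c_now[1] + dt * v[1]
```

For a centroid moving from (0, 0) at t0 = 0 to (2, 1) at t1 = 1, the velocity is (2, 1) and the predicted centroid at t = 3 is (6, 3).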
105. And determining whether collision risks exist between the water surface floaters and the water surface unmanned equipment according to the predicted centroid coordinates and the floaters outline of the water surface floaters and the predicted centroid coordinates and the equipment outline of the water surface unmanned equipment.
The server can obtain the distance between the water surface floater and the water surface unmanned equipment according to the determined predicted centroid coordinates and the determined floater outline of the water surface floater and the determined predicted centroid coordinates and the determined equipment outline of the water surface unmanned equipment, and further determine whether collision risks exist between the water surface floater and the water surface unmanned equipment.
Specifically, the server needs to acquire the equipment contour of the water surface unmanned equipment, determine the number of polygon sides corresponding to the equipment contour and the vertex coordinates corresponding to the polygon vertices of the equipment contour, and then determine the centroid coordinates of the water surface unmanned equipment according to the number of polygon sides and the vertex coordinates of the equipment contour. The server can determine the distance between the water surface unmanned equipment and the water surface floater at the current moment according to the equipment contour of the water surface unmanned equipment, the floater contour of the water surface floater, and the centroid coordinates of each at the current moment, and likewise determine that distance at the moment to be predicted. From these two distances the server can determine how the distance between the water surface unmanned equipment and the water surface floater changes from the current moment to the moment to be predicted, and determine from this change whether the two outlines come to overlap; if so, there is a risk of collision between the water surface unmanned equipment and the water surface floater.
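The patent does not spell out how the contour-to-contour distance is computed; a minimal sketch under the simplifying assumption that each outline is approximated by a bounding circle around its centroid, and that a shrinking, sub-threshold predicted distance signals a risk:

```python
import math

def collision_risk(float_poly, float_c_now, float_c_pred, dev_poly, dev_c):
    """Sketch: flag a collision risk when the centroid distance at the
    predicted moment falls below the sum of the two contours' bounding
    radii (so the outlines could overlap) and the distance is shrinking."""
    def radius(poly, c):
        # bounding-circle radius: farthest vertex from the centroid
        return max(math.dist(p, c) for p in poly)

    d_now = math.dist(float_c_now, dev_c)
    d_pred = math.dist(float_c_pred, dev_c)
    touch = radius(float_poly, float_c_now) + radius(dev_poly, dev_c)
    return d_pred < touch and d_pred < d_now
```

An exact implementation would instead compute the minimum distance between the two polygons (or test them for intersection) at the predicted centroids; the bounding-circle test here is a conservative stand-in.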
The above is a method embodiment of the present application. Based on the same inventive concept, the embodiment of the application also provides a monocular camera-based water surface float collision prediction device, whose structure is shown in fig. 4.
Fig. 4 is a schematic diagram of an internal structure of a monocular camera-based water surface float collision prediction apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to:
acquiring a floater profile of a water surface floater and image coordinates corresponding to each profile point, and calculating the relative position of the current profile point and a monocular camera based on the pre-constructed water surface digital elevation model and according to the image coordinates of each profile point and the pre-determined camera geographic coordinates of the monocular camera;
determining geographic coordinates and projection coordinates of the profile of the floater according to the relative positions of the current profile points and the monocular camera and the geographic coordinates of the camera;
calculating the floating object area of the water surface floating object, the centroid coordinate at the current moment and the centroid coordinate at the next moment according to the projection coordinates of the profile of the floating object, and calculating the floating object displacement of the water surface floating object at the next moment according to the centroid coordinate at the current moment and the centroid coordinate at the next moment;
calculating the movement direction and speed of the water surface floating object according to the displacement of the floating object at the next moment, and calculating the predicted centroid coordinate at the moment to be predicted according to the centroid coordinate, the movement direction and the speed at the current moment;
and determining whether collision risks exist between the water surface floaters and the water surface unmanned equipment according to the predicted centroid coordinates and the floaters outline of the water surface floaters and the predicted centroid coordinates and the equipment outline of the water surface unmanned equipment.
The embodiment of the application also provides a nonvolatile computer storage medium, which stores computer executable instructions, wherein the computer executable instructions are configured to:
acquiring a floater profile of a water surface floater and image coordinates corresponding to each profile point, and calculating the relative position of the current profile point and a monocular camera based on the pre-constructed water surface digital elevation model and according to the image coordinates of each profile point and the pre-determined camera geographic coordinates of the monocular camera;
determining geographic coordinates and projection coordinates of the profile of the floater according to the relative positions of the current profile points and the monocular camera and the geographic coordinates of the camera;
calculating the floating object area of the water surface floating object, the centroid coordinate at the current moment and the centroid coordinate at the next moment according to the projection coordinates of the profile of the floating object, and calculating the floating object displacement of the water surface floating object at the next moment according to the centroid coordinate at the current moment and the centroid coordinate at the next moment;
calculating the movement direction and speed of the water surface floating object according to the displacement of the floating object at the next moment, and calculating the predicted centroid coordinate at the moment to be predicted according to the centroid coordinate, the movement direction and the speed at the current moment;
and determining whether collision risks exist between the water surface floaters and the water surface unmanned equipment according to the predicted centroid coordinates and the floaters outline of the water surface floaters and the predicted centroid coordinates and the equipment outline of the water surface unmanned equipment.
The embodiments of the present application are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus and medium embodiments are described relatively briefly, since they are substantially similar to the method embodiments; for relevant details, reference may be made to the description of the method embodiments.
The foregoing describes certain embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The devices and media provided in the embodiments of the present application are in one-to-one correspondence with the methods, so that the devices and media also have similar beneficial technical effects as the corresponding methods, and since the beneficial technical effects of the methods have been described in detail above, the beneficial technical effects of the devices and media are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (10)

1. The method for predicting the collision of the water surface floating objects based on the monocular camera is characterized by comprising the following steps of:
acquiring a floater profile of a water surface floater and image coordinates corresponding to each profile point, and calculating the relative position of the current profile point and a monocular camera based on a pre-constructed water surface digital elevation model and according to the image coordinates of each profile point and the predetermined camera geographic coordinates of the monocular camera;
determining geographic coordinates and projection coordinates of the profile of the floater according to the relative positions of the current profile point and the monocular camera and the geographic coordinates of the camera;
calculating the floating object area of the water surface floating object, the centroid coordinate at the current moment and the centroid coordinate at the next moment according to the projection coordinates of the profile of the floating object, and calculating the floating object displacement of the water surface floating object at the next moment according to the centroid coordinate at the current moment and the centroid coordinate at the next moment;
calculating the movement direction and speed of the water surface floating object according to the displacement of the floating object at the next moment, and calculating the predicted centroid coordinate at the moment to be predicted according to the centroid coordinate, the movement direction and the speed at the current moment;
and determining whether collision risks exist between the water surface floaters and the water surface unmanned equipment according to the predicted centroid coordinates and the floaters outline of the water surface floaters and the predicted centroid coordinates and the equipment outline of the water surface unmanned equipment.
2. The monocular camera-based water surface float collision prediction method according to claim 1, wherein the calculating the float area of the water surface float, the centroid coordinate at the current time and the centroid coordinate at the next time according to the projection coordinates of the float profile specifically comprises:
determining the number of polygon sides corresponding to the floater outline and the vertex coordinates corresponding to polygon vertexes of the floater outline, and determining the floater area of the water surface floater according to the number of polygon sides and the vertex coordinates corresponding to each polygon vertex;
determining the vertex coordinates of each polygon vertex at the current moment, and determining the centroid coordinates of the current moment according to the vertex coordinates of each polygon vertex at the current moment;
and determining the vertex coordinates of each polygon vertex at the next moment, and determining the centroid coordinates of the next moment according to the vertex coordinates of each polygon vertex at the next moment.
3. The monocular camera-based water surface float collision prediction method according to claim 1, wherein the determining whether there is a risk of collision between the water surface float and the water surface unmanned device according to the predicted centroid coordinates and the float profile of the water surface float and the predicted centroid coordinates and the device profile of the water surface unmanned device specifically includes:
acquiring an equipment contour of unmanned equipment on a water surface, and determining the number of polygon sides corresponding to the equipment contour and vertex coordinates corresponding to polygon vertexes of the equipment contour;
determining the centroid coordinates of the unmanned equipment on the water surface according to the polygon edge number and the vertex coordinates of the equipment outline;
determining the distance between the water surface unmanned equipment and the water surface floater at the current moment according to the equipment outline of the water surface unmanned equipment, the floater outline of the water surface floater, the centroid coordinates of the water surface unmanned equipment at the current moment and the centroid coordinates of the water surface floater at the current moment, and determining the distance between the water surface unmanned equipment and the water surface floater at the moment to be predicted;
and determining the change of the distance between the water surface unmanned equipment and the water surface floater from the current moment to the moment to be predicted, and determining whether the water surface unmanned equipment is overlapped with the water surface floater according to the change of the distance, wherein if so, the water surface unmanned equipment and the water surface floater have collision risks.
4. The monocular camera-based water surface float collision prediction method according to claim 1, wherein the determining the geographic coordinates and the projection coordinates of the float contour according to the relative position of the current contour point and the monocular camera and the geographic coordinates of the camera specifically includes:
converting the geographic coordinates of the camera of the monocular camera into space rectangular coordinates taking the mass center of the earth as the center, and determining the space rectangular coordinates corresponding to the current contour point according to the space rectangular coordinates of the monocular camera and the relative positions of the current contour point and the monocular camera;
and converting the space rectangular coordinates of the current contour point into corresponding geographic coordinates, and converting the geographic coordinates of the current contour point into corresponding projection coordinates.
5. The monocular camera-based water surface float collision prediction method according to claim 1, wherein the calculating the movement direction and speed of the water surface float according to the float displacement at the next moment specifically includes:
determining a first time difference between the current moment and the next moment, and determining the float displacement of the water surface float in the X direction and the float displacement in the Y direction at the next moment;
and determining the speed of the water surface floating object in the X direction according to the first time difference value and the floating object displacement in the X direction, and determining the speed of the water surface floating object in the Y direction according to the first time difference value and the floating object displacement in the Y direction.
6. The monocular camera-based water surface float collision prediction method according to claim 1, wherein the calculating the predicted centroid coordinates of the moment to be predicted according to the centroid coordinates, the movement direction and the speed of the current moment specifically comprises:
determining a second time difference value between the moment to be predicted and the current moment, and determining the centroid abscissa of the water surface floater at the moment to be predicted according to the second time difference value, the speed in the X direction and the centroid abscissa of the current moment;
and determining the centroid ordinate of the water surface floater at the moment to be predicted according to the second time difference value, the speed in the Y direction and the centroid ordinate of the current moment, and determining the centroid coordinate at the moment to be predicted according to the centroid abscissa and the centroid ordinate at the moment to be predicted.
7. The monocular camera-based water surface float collision prediction method according to claim 1, wherein the calculating the relative position of the current contour point and the monocular camera based on the pre-constructed water surface digital elevation model according to the image coordinates of each contour point and the pre-determined camera geographic coordinates of the monocular camera specifically comprises:
acquiring a predetermined camera geographic coordinate of a monocular camera, and inputting image coordinate information of each contour point into a water surface digital elevation model based on the constructed water surface digital elevation model;
and calculating a difference value between the image coordinates of the current contour point and the geographic coordinates of the camera of the monocular camera, and determining the relative position of the current contour point and the monocular camera according to the difference value.
8. The monocular camera-based method for predicting the collision of the water surface float according to claim 1, wherein the acquiring the float profile of the water surface float and the image coordinates corresponding to each profile point specifically comprises:
acquiring a real-time video stream of the water surface through monitoring equipment, and determining a video frame image of the water surface in the real-time video stream;
based on an AI image segmentation technology, extracting a floater outline of a water surface floater under an image coordinate system from a video frame image of the water surface, and respectively determining each outline point on the floater outline;
and respectively acquiring image coordinates of each contour point on the profile of the floater under an image coordinate system, and acquiring a water level value of the water surface so as to construct a water surface digital elevation model according to the water level value of the water surface.
9. Water surface floater collision prediction equipment based on a monocular camera, characterized in that the equipment comprises:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the monocular camera-based water surface float collision prediction method of any one of claims 1 to 8.
10. A non-transitory computer storage medium storing computer-executable instructions, the computer-executable instructions configured to:
a monocular camera-based method of predicting a surface float collision as claimed in any one of claims 1 to 8.
CN202310769394.6A 2023-06-27 2023-06-27 Monocular camera-based water surface floater collision prediction method, monocular camera-based water surface floater collision prediction equipment and monocular camera-based water surface floater collision prediction medium Active CN116740103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310769394.6A CN116740103B (en) 2023-06-27 2023-06-27 Monocular camera-based water surface floater collision prediction method, monocular camera-based water surface floater collision prediction equipment and monocular camera-based water surface floater collision prediction medium


Publications (2)

Publication Number Publication Date
CN116740103A true CN116740103A (en) 2023-09-12
CN116740103B CN116740103B (en) 2024-04-26

Family

ID=87911334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310769394.6A Active CN116740103B (en) 2023-06-27 2023-06-27 Monocular camera-based water surface floater collision prediction method, monocular camera-based water surface floater collision prediction equipment and monocular camera-based water surface floater collision prediction medium

Country Status (1)

Country Link
CN (1) CN116740103B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2318187C1 (en) * 2006-06-15 2008-02-27 Открытое акционерное общество "ЦНИИ "Курс" Device for producing and displaying information for guiding a vessel across narrow sections of fairwaters
KR101911756B1 (en) * 2018-05-04 2018-10-25 (주)에디넷 The system for real-time remote monitoring buoys on the sea
CN113450597A (en) * 2021-06-09 2021-09-28 浙江兆晟科技股份有限公司 Ship auxiliary navigation method and system based on deep learning
CN113658250A (en) * 2021-08-25 2021-11-16 中冶京诚工程技术有限公司 Floater position prediction method and device
CN114358411A (en) * 2021-12-30 2022-04-15 上海上实龙创智能科技股份有限公司 Method and system for positioning floating object
CN115457807A (en) * 2022-10-25 2022-12-09 安徽慧软智能科技有限公司 Ship collision avoidance early warning system based on navigation radar
CN115909816A (en) * 2022-10-24 2023-04-04 天津天元海科技开发有限公司 Buoy collision early warning and recording system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUAN Li; REN Hong; LU Peiyi; WANG Mengjia: "Research on a drift trajectory prediction and analysis system for marine buoys", Navigation, no. 03, 25 May 2020 (2020-05-25) *
ZHENG Yuanzhou; WU Weiguo; ZHANG Wentao; XU Haixiang: "Research on an intelligent ship-bridge collision avoidance system based on image information detection", Journal of Wuhan University of Technology (Transportation Science & Engineering), no. 04, 15 August 2012 (2012-08-15) *

Also Published As

Publication number Publication date
CN116740103B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN108717710B (en) Positioning method, device and system in indoor environment
CN110826357B (en) Method, device, medium and equipment for three-dimensional detection and intelligent driving control of object
US9435911B2 (en) Visual-based obstacle detection method and apparatus for mobile robot
US11300964B2 (en) Method and system for updating occupancy map for a robotic system
CN113506370B (en) Three-dimensional geographic scene model construction method and device based on three-dimensional remote sensing image
CN113819890B (en) Distance measuring method, distance measuring device, electronic equipment and storage medium
CN113516769B (en) Virtual reality three-dimensional scene loading and rendering method and device and terminal equipment
KR20200075727A (en) Method and apparatus for calculating depth map
CN111986472B (en) Vehicle speed determining method and vehicle
CN110111413A A sparse point cloud three-dimensional modeling method based on a land-water coexistence scenario
CN114217665A (en) Camera and laser radar time synchronization method, device and storage medium
CN116012428A Method, device and storage medium for radar-camera fused positioning
CN115439571A Method and device for generating epipolar images from linear-array push-broom satellite imagery
CN113449692A (en) Map lane information updating method and system based on unmanned aerial vehicle
CN115265519A (en) Online point cloud map construction method and device
CN113269147B (en) Three-dimensional detection method and system based on space and shape, and storage and processing device
CN112907745B (en) Method and device for generating digital orthophoto map
CN116740103B (en) Monocular camera-based water surface floater collision prediction method, device and medium
CN111260714B (en) Flood disaster recovery assessment method, device and equipment and computer storage medium
CN112907625A Target following method and system applied to a quadruped bionic robot
CN114631124A (en) Three-dimensional point cloud segmentation method and device and movable platform
CN113436309A (en) Scene reconstruction method, system and device and sweeping robot
CN114494398B (en) Processing method and device of inclined target, storage medium and processor
CN112183378A (en) Road slope estimation method and device based on color and depth image
KR20230029981A (en) Systems and methods for pose determination

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant