CN113191974A - Method and system for obtaining ship panoramic image based on machine vision


Info

Publication number
CN113191974A
Authority
CN
China
Prior art keywords
camera
ship
view
color correction
pixel
Prior art date
Legal status
Granted
Application number
CN202110476679.1A
Other languages
Chinese (zh)
Other versions
CN113191974B (en)
Inventor
Wang Xiaoyuan
He Guowen
Zhang Pengyuan
Wang Quanzheng
Wang Gang
Current Assignee
Qingdao University of Science and Technology
Original Assignee
Qingdao University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Qingdao University of Science and Technology filed Critical Qingdao University of Science and Technology
Priority to CN202110476679.1A priority Critical patent/CN113191974B/en
Publication of CN113191974A publication Critical patent/CN113191974A/en
Application granted granted Critical
Publication of CN113191974B publication Critical patent/CN113191974B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/14
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration by the use of histogram techniques
    • G06T5/80
    • G06T5/90
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10024 Color image
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention relates to a method and a system for acquiring a ship panoramic image based on machine vision. Cameras are arranged at the bow, the stern, and both sides of the hull of the ship. The method comprises: acquiring image data collected by the cameras on the ship; performing distortion correction on the image data collected by each camera, based on that data and the pre-acquired internal reference matrix and distortion coefficients of each camera, to obtain second image data for each camera, the second image data being the distortion-corrected image data; converting the second image data of each camera into a corresponding top view according to a pre-acquired homography matrix mapping the world coordinate system to the pixel coordinate system of that camera; mapping the top views of the cameras onto a pre-acquired registration map to obtain a stitched image; and post-processing the stitched image to obtain the panoramic image of the ship.

Description

Method and system for obtaining ship panoramic image based on machine vision
Technical Field
The invention relates to the technical field of machine vision, in particular to a method and a system for acquiring a ship panoramic image based on machine vision.
Background
With the rapid development of the marine transportation industry, two thirds of international freight volume is carried by sea, and marine safety accidents occur from time to time, causing great losses to human life and to the economy. Ship collision accidents in particular do great harm to people's lives and property and to the marine environment. The causes of collisions fall into two categories: human factors and objective factors. Investigations show that most marine safety accidents are caused by human factors, and most collision accidents occur because crews fail to take correct and reasonable avoidance measures in time before the collision.
At present, panoramic image systems are rarely applied, and remain immature, in the field of intelligent ships. The main reason is that a ship is affected by wind, waves, and swell during navigation, so a camera cannot stably acquire image data at a fixed viewing angle, which introduces large errors into image acquisition; in addition, the weather at sea and on inland waterways is changeable, so cameras are often affected by weather factors when acquiring image data.
The need to improve the navigation safety of ships is therefore particularly urgent. In reducing human error and ensuring collision avoidance and navigation safety, a ship panoramic image system that monitors the safety conditions of the surrounding waters in real time plays a key role. By carrying fisheye cameras, a ship can monitor the sea conditions of the surrounding sea area in real time to assist safe navigation.
Disclosure of Invention
Technical problem to be solved
In view of the above drawbacks and deficiencies of the prior art, the present invention provides a method and system for acquiring a panoramic image of a ship based on machine vision.
(II) technical scheme
In order to achieve the purpose, the invention adopts the main technical scheme that:
in a first aspect, an embodiment of the present invention provides a method for obtaining a panoramic image of a ship based on machine vision, where cameras are arranged at the bow, the stern, and both sides of the hull of the ship, and the method includes:
s1, acquiring image data acquired by a camera on the ship;
s2, carrying out distortion correction processing on the image data collected by each camera based on the image data and the pre-acquired internal reference matrix and distortion coefficient of each camera to acquire second image data corresponding to each camera;
the second image data is image data after distortion correction processing;
s3, converting second image data corresponding to the camera on the ship into a corresponding top view according to a homography matrix mapped from a world coordinate system to a pixel coordinate system of the camera, wherein the homography matrix is acquired in advance;
s4, mapping the top view of each camera to a pre-acquired registration map to acquire a spliced image;
s5, post-processing the spliced images to obtain a panoramic image of the ship;
the post-processing comprises: edge blending processing and brightness homogenization processing.
Preferably,
the camera is a wide-angle 180-degree fisheye network camera.
The wide-angle 180-degree fisheye network camera can be accessed independently via IP; it has an infrared night vision function and can capture pictures at night; its waterproof rating is high, so it can be used in the marine environment; and it supports power over the network cable, so connecting it to a PoE switch provides both power supply and signal transmission, allowing long continuous operation.
Preferably,
the internal reference matrix and the distortion coefficients of each camera are obtained in advance by the Zhang Zhengyou calibration method.
Preferably, step S1 is preceded by:
s0, acquiring a homography matrix mapped from a world coordinate system to a pixel coordinate system of the camera according to coordinate information of four groups of corner points acquired in advance;
the corner points are the outermost boundary points of the floating bridge within the viewing-angle range captured in the camera picture when the ship is close to the floating bridge;
the coordinate information of each group of corner points comprises: when the ship berths close to the floating bridge, the pixel coordinates of the angular points in the picture of the camera, the actual position coordinates of the floating bridge corresponding to the angular points in the camera and the coordinates of a preset calibration plate corresponding to the angular points.
Preferably,
the pre-acquired registration map is a registration map generated in a unified horizontal-plane-based coordinate system with the hull center as the origin of coordinates;
the size of the registration graph is calculated according to the size of the ship and coordinates of a preset calibration plate corresponding to the four groups of corner points respectively.
Preferably,
the edge blending process specifically includes:
establishing a corresponding rectangular coordinate system in the overlapped area of the two top views and acquiring the size of the overlapped area;
the overlapped area is rectangular;
the coordinates of all pixel points in the overlapped area are (x, y);
wherein x is a column where the pixel is located; y is the row in which the pixel is located;
calculating a first linear slope according to the size of the overlapped area;
determining an expression of the first line based on the slope of the first line;
the first straight line is a diagonal line of the overlapped area;
acquiring the weight of any pixel point in the overlapped area in different top views based on the linear expression;
then, acquiring the RGB component value of the fused image at any pixel point in the overlapping area according to formula (1);
the formula (1) is:
P = P1*ω1 + P2*(1-ω1);
P represents the RGB component value of the fused image at any pixel point in the overlapping area;
P1 and P2 represent the RGB component values of that pixel point in the two different top views of the overlapping area;
ω1 and (1-ω1) respectively represent the weights of that pixel point in the two different top views;
traversing the whole overlapping area to realize smooth transition.
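As a concrete illustration, the weighted-average fusion of formula (1) can be sketched in NumPy. For brevity this sketch uses a weight ω1 that falls linearly across the columns of the overlap, a simplification of the diagonal-line weighting described above:

```python
import numpy as np

def blend_overlap(p1, p2):
    """Fuse two (h, w, 3) overlap regions cut from adjacent top views.
    The weight ω1 decreases linearly from 1 to 0 across the columns, so the
    result transitions smoothly from the first image to the second."""
    h, w = p1.shape[:2]
    w1 = np.linspace(1.0, 0.0, w)[None, :, None]   # ω1, broadcast over rows and channels
    return p1 * w1 + p2 * (1.0 - w1)               # P = P1*ω1 + P2*(1-ω1)
```

Traversing the whole overlapping area is handled implicitly by the vectorized arithmetic.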
Preferably,
the brightness uniformization processing specifically includes: image histogram equalization processing or Gamma color correction processing.
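For illustration, the histogram-equalization option can be sketched for a single 8-bit channel in NumPy (gamma correction, by contrast, is essentially the one-liner `255 * (img / 255.0) ** gamma`). This is a generic sketch, not the patent's exact procedure:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization of a 2-D uint8 image: remap gray levels through
    the normalized cumulative histogram so they spread over the full range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first occupied gray level
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0), 0, 255)
    return lut.astype(np.uint8)[img]
```

A completely flat image would need a guard against the zero denominator; it is omitted here for brevity.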
Preferably,
the brightness uniformization processing specifically includes:
acquiring a value of a color correction coefficient based on a preset target function;
the color correction coefficients include: forward view color correction factor fF(ii) a Left view color correction factor fL(ii) a Right view color correction factor fR(ii) a Backward view color correction factor fB
The forward view is a top view corresponding to a camera arranged at the bow;
the backward view is a top view corresponding to a camera arranged at the stern;
the right view is a top view corresponding to a camera arranged on the right side of the ship body;
the left view is a top view corresponding to a camera arranged on the left side of the ship body;
wherein the preset objective function is:
F = (f_F·A_F - f_L·A_L)² + (f_F·B_F - f_R·B_R)² + (f_L·C_L - f_B·C_B)² + (f_R·D_R - f_B·D_B)²
wherein A_F represents the mean value of all pixel points of the forward view in the left-front overlapping area A;
A_L represents the mean value of all pixel points of the left view in the left-front overlapping area A;
B_F represents the mean value of all pixel points of the forward view in the right-front overlapping area B;
B_R represents the mean value of all pixel points of the right view in the right-front overlapping area B;
C_L represents the mean value of all pixel points of the left view in the left-rear overlapping area C;
C_B represents the mean value of all pixel points of the backward view in the left-rear overlapping area C;
D_R represents the mean value of all pixel points of the right view in the right-rear overlapping area D;
D_B represents the mean value of all pixel points of the backward view in the right-rear overlapping area D;
and the colors of the stitched image are corrected according to the values of the color correction coefficients and the corresponding gray values of the RGB channels.
Preferably, the obtaining a value of the color correction coefficient based on a preset objective function specifically includes:
based on the preset objective function, partial derivatives are calculated with respect to the forward-view color correction coefficient f_F, the left-view coefficient f_L, the right-view coefficient f_R, and the backward-view coefficient f_B respectively; setting each partial derivative to zero and writing the result in matrix form gives:

[ A_F²+B_F²   -A_F·A_L    -B_F·B_R    0         ] [f_F]   [0]
[ -A_F·A_L    A_L²+C_L²   0           -C_L·C_B  ] [f_L] = [0]
[ -B_F·B_R    0           B_R²+D_R²   -D_R·D_B  ] [f_R]   [0]
[ 0           -C_L·C_B    -D_R·D_B    C_B²+D_B² ] [f_B]   [0]
The values of the forward-view color correction coefficient f_F, the left-view coefficient f_L, the right-view coefficient f_R, and the backward-view coefficient f_B are then obtained by the singular value decomposition method.
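To make the solve concrete: collecting the four zeroed partial derivatives gives a homogeneous system M·f = 0, whose nontrivial solution is the right singular vector belonging to the smallest singular value. A NumPy sketch follows; the final rescaling so the coefficients average 1 is an assumption of this sketch (any positive multiple of a null vector is equally valid):

```python
import numpy as np

def color_correction_coeffs(AF, AL, BF, BR, CL, CB, DR, DB):
    """Overlap-region means A..D per view; returns (f_F, f_L, f_R, f_B)."""
    M = np.array([
        [AF**2 + BF**2, -AF * AL,       -BF * BR,       0.0           ],
        [-AF * AL,       AL**2 + CL**2,  0.0,           -CL * CB      ],
        [-BF * BR,       0.0,            BR**2 + DR**2, -DR * DB      ],
        [0.0,           -CL * CB,       -DR * DB,        CB**2 + DB**2],
    ])
    _, _, Vt = np.linalg.svd(M)
    f = Vt[-1]            # null-space direction: singular vector of the smallest singular value
    return f / f.mean()   # fix the free scale so the mean correction is 1
```

With consistent overlap means (i.e. a set of coefficients exists that makes F = 0), the routine recovers those coefficients up to the chosen scale.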
In another aspect, this embodiment further provides a system for obtaining a ship panoramic image based on machine vision, the system comprising:
at least one processor;
and at least one memory communicatively coupled to the processor, wherein the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform any of the methods for obtaining a panoramic image of a vessel based on machine vision as described above.
(III) advantageous effects
The invention has the following beneficial effects. The method and system for acquiring a ship panoramic image based on machine vision stitch images of the ship's surroundings without dead angles, so that a panoramic image of the ship can be acquired in real time, and the influence of wind, waves, and currents in sea and inland waters on the images can be reduced to a certain extent. The invention can assist the berthing of the ship in port, can also help prevent and avoid collisions, reduces the degree of collision risk, and greatly improves the safety of the ship during sailing and berthing.
The method and system are significant for collision prevention and avoidance: they can serve the crew as a reference for safe navigation and help the crew quickly grasp the scene over a large area around the hull, effectively reducing the collision risk. They can also assist the ship in berthing in port, letting the crew see the conditions around the water surface in advance and observe the dead-angle positions around the ship, effectively eliminating blind zones to avoid scraping. The invention greatly improves the safety of the ship during navigation and berthing, is at the forefront of the intelligent-ship field, and faces strong market demand and large market space in China.
Drawings
FIG. 1 is a flow chart of a method for obtaining a panoramic image of a ship based on machine vision according to the present invention;
fig. 2 is a schematic diagram of a method for obtaining a ship panoramic image based on machine vision in an embodiment of the present invention.
FIG. 3 is a schematic view of the position of the camera arrangement of the vessel of the present invention;
FIG. 4 is a schematic diagram of a marine camera image acquisition of the present invention;
FIG. 5 is a flow chart of distortion correction based on the Zhang Zhengyou calibration method according to the present invention;
FIG. 6 is a schematic diagram of top view transformation of an image according to the present invention;
FIG. 7 is a flow chart of a calibration method in an embodiment of the present invention;
FIG. 8 is a schematic view of a corner point in an embodiment of the present invention;
FIG. 9 is a plan view of a unified horizontal-based coordinate system of the present invention;
FIG. 10 is a schematic view of a fusion method in an embodiment of the present invention;
fig. 11 is a schematic diagram of a luminance uniformization method based on RGB three-channel coefficient correction according to an embodiment of the present invention;
fig. 12 is a ship panoramic image effect diagram obtained by the method of the present invention.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
In order to better understand the above technical solutions, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Referring to fig. 1 and fig. 2, this embodiment provides a method for obtaining a panoramic image of a ship based on machine vision, where cameras are arranged at the bow, the stern, and both sides of the hull of the ship, and the method includes:
and S1, acquiring image data acquired by the camera on the ship.
In the practical application of this embodiment, owing to factors such as the large size of the ship and the working environment, a wide-angle 180-degree fisheye network camera is selected as the ship camera. Because a fisheye camera has a small focal length and a large viewing angle, it overcomes the small viewing angle and many blind spots of an ordinary camera: the same area can be observed with fewer fisheye cameras, reducing the number of cameras used. The fisheye cameras are arranged at the bow, the stern, and both sides of the hull as shown in fig. 3. Each camera is connected to the ship server through a network cable and a power line; the layout of the ship cameras is shown schematically in fig. 3. For acquisition of the ship camera image data, the cameras are accessed through the RTSP protocol and the video data is acquired through ffmpeg, which achieves a near-real-time acquisition rate with a delay below 0.5 s, meeting the delay requirement for video image data acquisition in this embodiment. The marine camera image acquisition is shown in fig. 4.
And S2, carrying out distortion correction processing on the image data collected by each camera based on the image data and the pre-acquired internal reference matrix and distortion coefficient of each camera, and acquiring second image data corresponding to each camera.
The second image data is image data after distortion correction processing.
And S3, converting the second image data corresponding to the camera on the ship into a corresponding top view according to the homography matrix mapped from the world coordinate system to the pixel coordinate system of the camera, which is acquired in advance.
Referring to fig. 6, to obtain a larger viewing angle in the practical application of this embodiment, the cameras must be mounted at a certain angle to the horizontal plane, and the image is converted into a top view through perspective transformation. A camera shooting an object obliquely forms a distorted image; if that image is mapped back onto the object plane, which is equivalent to the camera being perpendicular to the shooting plane, the real shape of the scene is recovered. Since this mapping re-projects the original image onto another plane, it is called "re-projection". The essence of the top view is to re-project the information in the image plane onto the ground plane.
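The re-projection can be sketched as inverse mapping: for every pixel of the top view, the inverse homography gives the source-image location to sample. The nearest-neighbor sampling below is a simplification (library warpers such as OpenCV's warpPerspective interpolate), and a single gray channel is assumed:

```python
import numpy as np

def warp_top_view(img, H, out_h, out_w):
    """Re-project a 2-D image with homography H (source pixel -> top view).
    Each output pixel is pulled from the source via the inverse mapping."""
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # homogeneous coords
    src = Hinv @ pts
    u = np.rint(src[0] / src[2]).astype(int)   # dehomogenize: divide by Z'
    v = np.rint(src[1] / src[2]).astype(int)
    ok = (u >= 0) & (u < img.shape[1]) & (v >= 0) & (v < img.shape[0])
    out = np.zeros(out_h * out_w, dtype=img.dtype)
    out[ok] = img[v[ok], u[ok]]                # sample only inside the source image
    return out.reshape(out_h, out_w)
```

With H equal to the identity, the routine returns the input unchanged; output pixels whose pre-image falls outside the source stay black.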
And S4, mapping the top view of each camera to a pre-acquired registration map, and acquiring a spliced image.
And S5, performing post-processing on the spliced images to acquire a panoramic image of the ship.
The post-processing comprises: edge blending processing and brightness homogenization processing.
Preferably, in this embodiment, the camera is a wide-angle 180-degree fisheye network camera.
In this embodiment, the wide-angle 180-degree fisheye network camera can be accessed independently via IP, has an infrared night vision function and can capture pictures at night, has a high waterproof rating and can be used in the marine environment, and supports power over the network cable, so connecting it to a PoE switch provides both power supply and signal transmission, allowing long continuous operation.
Preferably, in this embodiment, the internal reference matrix and the distortion coefficients of each camera are obtained in advance by the Zhang Zhengyou calibration method.
In the practical application of this embodiment, the Zhang Zhengyou calibration method is used to perform distortion correction on the four fisheye cameras of the ship. Referring to fig. 5: first, the four cameras collect videos containing the calibration plate, and a number of relevant template pictures are captured and saved to a file. Then the camera parameters are acquired through MATLAB: the camera calibration toolbox is opened from the command window, the four groups of template pictures in the saved file are added, the size of the squares on the calibration plate is entered, suitable calibration parameters are selected, and the relevant functions process and export the parameters, yielding the camera intrinsics and distortion-correction parameters. Finally, the acquired internal reference matrices and distortion coefficients are passed to the corresponding distortion-correction functions in OpenCV, and the best correction function is selected according to the result, so that distortion correction of the four ship cameras is achieved and image data with a smaller degree of distortion is acquired.
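The per-point arithmetic behind such a correction can be illustrated with a simple two-coefficient radial model in NumPy. This is a generic sketch, not the patent's exact fisheye model; the fixed-point loop mirrors how library undistortion routines invert the forward distortion:

```python
import numpy as np

def distort(p, k1, k2):
    """Forward radial distortion of a normalized image point p = (x, y)."""
    r2 = p[0]**2 + p[1]**2
    return p * (1 + k1 * r2 + k2 * r2**2)

def undistort(pd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: p = pd / s(p)."""
    p = pd.copy()
    for _ in range(iters):
        r2 = p[0]**2 + p[1]**2
        p = pd / (1 + k1 * r2 + k2 * r2**2)
    return p
```

For mild distortion the iteration converges quickly, so a distorted point is mapped back to its original normalized coordinates.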
Preferably, step S1 is preceded by:
and S0, acquiring a homography matrix mapped from a world coordinate system to a pixel coordinate system of the camera according to coordinate information of four groups of corner points acquired in advance.
Referring to fig. 8, the corner points are the outermost boundary points of the floating bridge within the viewing-angle range captured by the camera when the ship is docked against the floating bridge.
The coordinate information of each group of corner points comprises: when the ship berths close to the floating bridge, the pixel coordinates of the corner point in the camera picture, the actual position coordinates of the floating bridge corresponding to the corner point, and the coordinates of the preset calibration plate corresponding to the corner point.
In the practical application of this embodiment, the process of obtaining the homography matrix mapped from the world coordinate system to the pixel coordinate system of the camera is described with reference to fig. 7. Before calibration, the corner positions are determined according to the shipborne cameras. Within the calibration range, calibration plates are arranged at the two corner points of the floating bridge at the berth position, four plates per corner point in a rectangular layout. The length and width of the rectangle enclosed by the plates at a single corner point, and the distance between the two corner points, are measured, and the distance from the corner points to the ship's central axis is estimated so that the ship can turn around and return to position. Taking the measured rectangle as a reference, the spacing of the calibration plates on the floating bridge and between the two corner points is adjusted. The calibration-plate coordinates are selected manually in the camera image display interface (i.e. the pixel coordinate system), acquiring two groups of calibration-object coordinate points, which are stored in a file. The ship is then turned around 180 degrees, using the distance from the corner points to the central axis, and stopped at the previous position; another two groups of calibration-object coordinate points are acquired in the same way, so that four groups of calibration-object coordinate information are stored in the file. Using the acquired calibration-object distance information and the manually selected calibration-object coordinates, the homography matrix mapped from the world coordinate system to the pixel coordinate system is acquired with OpenCV's getPerspectiveTransform and warpPerspective functions, where point coordinates on the imaging plane of the camera (i.e. the pixel coordinate system) are expressed as the homogeneous coordinates
(u, v, 1)^T
The coordinates of the corresponding points on the horizontal plane (i.e., world coordinate system) are expressed as homogeneous coordinates
(X, Y, 1)^T
Namely:
[X']   [a_11  a_12  a_13] [u]
[Y'] = [a_21  a_22  a_23] [v]
[Z']   [a_31  a_32  a_33] [1]
where a_ij denotes the element of the homography matrix in row i and column j. The transformation maps the two-dimensional point into three-dimensional homogeneous space; since the image lies in a two-dimensional plane, the plane coordinates are recovered from the homogeneous result (X', Y', Z') by dividing by Z':

X = X'/Z',   Y = Y'/Z'
Setting a_33 = 1 and expanding the above formula leaves eight unknown elements, so the homography matrix can be solved from the four groups of point correspondences.
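A sketch of that solve in NumPy: with a33 = 1, each correspondence (u, v) → (X, Y) contributes two linear equations in the eight remaining unknowns, so four corner pairs determine the matrix (this is, in spirit, what OpenCV's getPerspectiveTransform computes):

```python
import numpy as np

def solve_homography(src, dst):
    """src, dst: four corresponding points each, as (u, v) and (X, Y) pairs.
    Builds the 8x8 linear system implied by X = X'/Z', Y = Y'/Z' with a33 = 1."""
    A, b = [], []
    for (u, v), (X, Y) in zip(src, dst):
        A.append([u, v, 1, 0, 0, 0, -u * X, -v * X]); b.append(X)
        A.append([0, 0, 0, u, v, 1, -u * Y, -v * Y]); b.append(Y)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    """Map (u, v) through H and dehomogenize by dividing by Z'."""
    X, Y, Z = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([X / Z, Y / Z])
```

Feeding four source points and their images under a known homography back into the solver recovers that matrix.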
Once the homography matrix is obtained, the images shot by the four cameras of the ship are converted into top views using it. The corner information stored in the file is read by the ship panoramic image system and the calibration effect is checked; if the effect is good, the calibration process is finished, and if not, the above steps can be repeated.
In the prior art, the calibration plate is fixed on land or on another object convenient to attach to; the water surface around a ship differs greatly from the land environment (one side faces the water and the other the floating bridge, or both sides face the water), so the traditional calibration method is difficult to apply. The calibration method in this embodiment arranges the calibration plates on the floating bridge and turns the ship around 180 degrees to acquire the four groups of corner coordinate information, a simple, practical, safe, and highly reliable scheme. It solves the problems that a calibration plate cannot reasonably be used, or fixed, on the water surface; it overcomes to a certain extent the influence of waves, wind, and similar environmental factors; and it avoids underwater calibration, guaranteeing the safety of the experimenters. Through real-time calibration of the ship panoramic image system, the efficiency and accuracy of calibration and the reliability of the system are improved.
In this embodiment, the pre-acquired registration map is a registration map generated by a horizontal plane-based unified coordinate system with a hull center as an origin of coordinates.
The size of the registration graph is calculated according to the size of the ship and coordinates of a preset calibration plate corresponding to the four groups of corner points respectively.
In the practical application of this embodiment, to implement image stitching and registration, a registration map in a unified horizontal-plane-based coordinate system is established with the hull center as the origin of coordinates; the size of the registration map is calculated from the input ship size and the calibration-plate distances at the two corner points of the floating bridge, as illustrated in fig. 9. The acquired top-view images of the cameras covering different areas around the ship are copied to the corresponding areas of the registration map, achieving image stitching and registration and producing the panoramic stitched image that then needs post-processing.
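The copy step itself is plain region assignment; a minimal NumPy sketch with made-up offsets (in the real system the offsets follow from the ship size and the calibration-plate coordinates):

```python
import numpy as np

def stitch_to_registration_map(reg_shape, views):
    """views: list of (top_view, (row_offset, col_offset)) pairs.
    Each top view is copied into its region of the registration map."""
    pano = np.zeros(reg_shape, dtype=np.uint8)
    for img, (r, c) in views:
        h, w = img.shape[:2]
        pano[r:r + h, c:c + w] = img
    return pano
```

In the patent's pipeline, the overlapping strips between adjacent regions would then be handed to the edge blending and brightness homogenization steps.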
In the practical application of this embodiment, the spliced image still has the problems of splicing seams, uneven brightness, and the like, and needs to be post-processed.
Preferably, the edge blending process specifically includes:
and establishing a corresponding rectangular coordinate system in the overlapped area of the two top views and acquiring the size of the overlapped area.
The overlapped area is rectangular.
And the coordinates of each pixel point in the overlapped area are (x, y).
Wherein x is a column where the pixel is located; y is the row in which the pixel is located.
And calculating a first linear slope according to the overlapped region size.
Determining an expression for the first line based on the first line slope.
The first straight line is a diagonal line of the overlapped region.
And acquiring the weight of any pixel point in the overlapped area in different top views based on the linear expression.
And then, acquiring an RGB component value of the image fused in the overlapping region by any pixel point in the overlapping region according to the formula (1).
The formula (1) is:
P = P1*ω1 + P2*(1-ω1);
P represents the RGB component value of the fused image at any pixel point in the overlapping area.
P1 and P2 represent the RGB component values of that pixel point in the two different top views of the overlapping area.
ω1 and (1-ω1) respectively represent the weights of that pixel point in the two different top views.
Traversing the whole overlapping area to realize smooth transition.
In this embodiment, edge blending is performed because, in the top views obtained after splicing, a splicing seam exists where two adjacent images join; during real-time display of the panoramic image, an obvious jump appears at this seam, so the influence of the splicing seam on the panoramic image must be eliminated.
There are several methods for eliminating the splicing seam, two of which are: 1. eliminating the seam by median filtering; 2. eliminating the seam by weighted-average fusion. Seam elimination has two requirements: first, the transition across the splicing region must be smooth; second, the brightness of the splicing region must not jump noticeably. The edge blending step mainly solves the smooth-transition problem, while the brightness homogenization step solves the brightness jump.
The invention provides a weighted-average method based on a rectangular coordinate system, which uses the overlapping region to perform smooth-transition processing along the transition direction, eliminating the splicing seam by assigning different weight values so that adjacent images blend smoothly. A corresponding rectangular coordinate system is established in the overlapping region of the two images, and the slope of a straight line is obtained from the size of the overlapping region; the coordinates of each pixel point in the overlapping region are (x, y), where x is the column and y is the row of the pixel, and the straight-line expression is then obtained from the slope. As shown in fig. 10, the slope of straight line L1 is obtained from the coordinates of the two points B and C; L4 is parallel to L1, passes through the point D, and intersects L2 and L3 at the two points F and E; |DE|/|FE| is the weight of the pixel point in the left image and |FD|/|FE| is its weight in the front image. Traversing the whole overlapping region in this way realizes the smooth transition.
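The weighted-average blending described above can be sketched as follows; the function name and the linear weight ramp (one concrete realization of weights taken along lines parallel to the diagonal) are illustrative assumptions consistent with, but not taken verbatim from, the description:

```python
import numpy as np

def blend_overlap(left_img, front_img):
    """Diagonal weighted-average blend of a rectangular overlap region.

    Both inputs are H x W x 3 float arrays covering the same overlap.
    The weight of the left image falls off linearly along lines parallel
    to the diagonal, so the left view transitions smoothly into the
    front view (formula (1) applied at every pixel).
    """
    h, w = left_img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    # w1 = 1 on one corner of the diagonal, 0 on the opposite corner.
    w1 = np.clip(1.0 - (x / (w - 1) + y / (h - 1)) / 2.0, 0.0, 1.0)
    w1 = w1[..., None]                             # broadcast over RGB
    return left_img * w1 + front_img * (1.0 - w1)  # P = P1*w1 + P2*(1-w1)

a = np.full((4, 4, 3), 100.0)   # "left" view in the overlap
b = np.full((4, 4, 3), 200.0)   # "front" view in the overlap
out = blend_overlap(a, b)
```

At the corner where w1 = 1 the output equals the left view, at the opposite corner it equals the front view, and it varies linearly in between, which is exactly the smooth transition the traversal is meant to achieve.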
In practical application of this embodiment, when the images of the ship's four fisheye cameras are stitched, differences in the cameras' installation positions and angles, sensor noise, and especially sensor sensitivity, together with the different scene illumination in each direction around the ship, can cause obvious brightness and color differences in the stitched panoramic image. This easily makes visual observation difficult and the surroundings of the hull unclear, so illumination homogenization must be applied to the stitched and fused panoramic image.
The brightness uniformization processing specifically includes: image histogram equalization processing or Gamma color correction processing.
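The Gamma color correction option mentioned here can be sketched as follows; `gamma_correct` and the lookup-table approach are an illustrative assumption, not the patent's disclosed implementation:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Apply Gamma correction to an 8-bit image via a lookup table."""
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[img]

dark = np.full((2, 2), 64, dtype=np.uint8)
brighter = gamma_correct(dark, 0.5)   # gamma < 1 brightens mid-tones
```

A gamma below 1 lifts dark regions of an under-exposed view, while a gamma above 1 darkens an over-exposed one, which is one way to equalize brightness between adjacent camera views before blending.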
Preferably, the brightness uniformization process specifically includes:
and acquiring the value of the color correction coefficient based on a preset target function.
The color correction coefficients include: the forward-view color correction coefficient f_F; the left-view color correction coefficient f_L; the right-view color correction coefficient f_R; and the backward-view color correction coefficient f_B.
The forward view is a top view corresponding to a camera arranged at the bow.
The backward view is a top view corresponding to a camera arranged at the stern.
The right view is a top view corresponding to the camera arranged on the right side of the ship body.
The left view is a top view corresponding to the camera arranged on the left side of the ship body.
Wherein the preset objective function is:
F = (f_F·A_F − f_L·A_L)² + (f_F·B_F − f_R·B_R)² + (f_L·C_L − f_B·C_B)² + (f_R·D_R − f_B·D_B)²
where A_F represents the mean of all pixel points of the forward view in the left-front overlapping region A;
A_L represents the mean of all pixel points of the left view in the left-front overlapping region A;
B_F represents the mean of all pixel points of the forward view in the right-front overlapping region B;
B_R represents the mean of all pixel points of the right view in the right-front overlapping region B;
C_L represents the mean of all pixel points of the left view in the left-rear overlapping region C;
C_B represents the mean of all pixel points of the backward view in the left-rear overlapping region C;
D_R represents the mean of all pixel points of the right view in the right-rear overlapping region D;
D_B represents the mean of all pixel points of the backward view in the right-rear overlapping region D.
And correcting the colors of the stitched image according to the values of the color correction coefficients and the corresponding gray values of the RGB channels.
In this embodiment, the obtaining a value of a color correction coefficient based on a preset objective function specifically includes:
taking the partial derivatives of the preset objective function with respect to the forward-view color correction coefficient f_F, the left-view color correction coefficient f_L, the right-view color correction coefficient f_R, and the backward-view color correction coefficient f_B, setting them to zero, and converting the resulting equations into the matrix form M·(f_F, f_L, f_R, f_B)ᵀ = 0, where

M = | A_F²+B_F²   −A_F·A_L    −B_F·B_R    0           |
    | −A_F·A_L    A_L²+C_L²   0           −C_L·C_B    |
    | −B_F·B_R    0           B_R²+D_R²   −D_R·D_B    |
    | 0           −C_L·C_B    −D_R·D_B    C_B²+D_B²   |

and obtaining the values of the forward-view color correction coefficient f_F, the left-view color correction coefficient f_L, the right-view color correction coefficient f_R, and the backward-view color correction coefficient f_B by singular value decomposition.
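A sketch of this solution step follows. The matrix M below is reconstructed by differentiating the objective function F given above (the original matrix figure is not reproduced in this text), and the normalization of the gains to a mean of 1 is an illustrative assumption:

```python
import numpy as np

def solve_color_gains(AF, AL, BF, BR, CL, CB, DR, DB):
    """Solve dF/df = 0 for the four colour-correction coefficients.

    M is the coefficient matrix obtained by differentiating the
    objective F with respect to f_F, f_L, f_R, f_B. The non-trivial
    solution of M f = 0 is the right-singular vector belonging to the
    smallest singular value; it is rescaled so the gains average to 1.
    """
    M = np.array([
        [AF**2 + BF**2, -AF*AL,         -BF*BR,         0.0          ],
        [-AF*AL,        AL**2 + CL**2,   0.0,           -CL*CB       ],
        [-BF*BR,         0.0,           BR**2 + DR**2,  -DR*DB       ],
        [0.0,           -CL*CB,         -DR*DB,         CB**2 + DB**2],
    ])
    _, _, vt = np.linalg.svd(M)
    f = vt[-1]                  # direction spanning the null space of M
    return f / f.mean()         # normalise so the average gain is 1

# If all overlap means are equal, every view needs the same gain.
gains = solve_color_gains(100, 100, 100, 100, 100, 100, 100, 100)
```

With equal overlap means the solver returns a unit gain for every view, as expected; unequal means yield relative gains that pull the four views' overlap brightness together.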
This embodiment further provides a system for obtaining a ship panoramic image based on machine vision, the system including:
at least one processor and at least one memory communicatively connected to the processor, wherein the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute any of the above methods for acquiring a ship panoramic image based on machine vision.
In the method and system of this embodiment for obtaining a ship panoramic image based on machine vision, the images around the ship are spliced without blind spots, so a panoramic image of the ship can be obtained in real time, and the influence of wind, waves, and currents in sea and inland waters on the images is reduced to a certain extent. The invention can assist a ship in berthing in port and also contributes to collision prevention and avoidance, reducing the degree of collision risk and greatly improving the safety of the ship during sailing and berthing. The invention is advanced within the field of intelligent ships, where market demand is substantial.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium; either as communication within the two elements or as an interactive relationship of the two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, a first feature may be "on" or "under" a second feature, and the first and second features may be in direct contact, or the first and second features may be in indirect contact via an intermediate. Also, a first feature "on," "above," and "over" a second feature may be directly or obliquely above the second feature, or simply mean that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature may be directly under or obliquely under the second feature, or may simply mean that the first feature is at a lower level than the second feature.
In the description herein, the description of the terms "one embodiment," "some embodiments," "an embodiment," "an example," "a specific example" or "some examples" or the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are illustrative and not restrictive, and that those skilled in the art may make changes, modifications, substitutions and alterations to the above embodiments without departing from the scope of the present invention.

Claims (10)

1. A method for obtaining a panoramic image of a ship based on machine vision, characterized in that cameras are arranged at the bow, at the stern, and on both sides of the hull of the ship, the method comprising the following steps:
s1, acquiring image data acquired by a camera on the ship;
s2, carrying out distortion correction processing on the image data collected by each camera based on the image data and the pre-acquired internal reference matrix and distortion coefficient of each camera to acquire second image data corresponding to each camera;
the second image data is image data after distortion correction processing;
s3, converting second image data corresponding to the camera on the ship into a corresponding top view according to a homography matrix mapped from a world coordinate system to a pixel coordinate system of the camera, wherein the homography matrix is acquired in advance;
s4, mapping the top view of each camera to a pre-acquired registration map to acquire a spliced image;
s5, post-processing the spliced images to obtain a panoramic image of the ship;
the post-processing comprises: edge blending processing and brightness homogenization processing.
2. The method of claim 1,
the camera is a wide-angle 180-degree fisheye network camera.
3. The method of claim 2,
the internal reference matrix and the distortion coefficient of each camera are obtained by adopting a Zhang-Yongyou calibration method in advance.
4. The method according to claim 3, wherein the step S1 is preceded by:
s0, acquiring a homography matrix mapped from a world coordinate system to a pixel coordinate system of the camera according to coordinate information of four groups of corner points acquired in advance;
the corner points are the outermost boundary points of the floating bridge within the field of view captured in the camera picture when the ship berths close to the floating bridge;
the coordinate information of each group of corner points comprises: when the ship berths close to the floating bridge, the pixel coordinates of the angular point in the camera, the actual position coordinates of the floating bridge corresponding to the angular point in the picture of the camera and the coordinates of a preset calibration plate corresponding to the angular point.
5. The method of claim 4,
the pre-acquired registration map is generated by a horizontal plane-based unified coordinate system with a hull center as a coordinate origin;
the size of the registration graph is calculated according to the size of the ship and coordinates of a preset calibration plate corresponding to the four groups of corner points respectively.
6. The method of claim 1,
the edge blending process specifically includes:
establishing a corresponding rectangular coordinate system in the overlapped area of the two top views and acquiring the size of the overlapped area;
the overlapped area is rectangular;
the coordinates of all pixel points in the overlapped area are (x, y);
wherein x is a column where the pixel is located; y is the row in which the pixel is located;
calculating a first linear slope according to the size of the overlapped area;
determining an expression of the first line based on the slope of the first line;
the first straight line is a diagonal line of the overlapped area;
acquiring the weight of any pixel point in the overlapped area in different top views based on the linear expression;
then, acquiring an RGB component value of the image after any pixel point in the overlapped area is fused in the overlapped area according to a formula (1);
the formula (1) is:
P = P1·ω1 + P2·(1 − ω1);
P represents the RGB component value of the fused image at the pixel point in the overlapped area;
P1 and P2 represent the RGB component values of the pixel point in the two different top views, respectively;
ω1 and (1 − ω1) represent the weights of the pixel point in the two different top views, respectively;
traversing the whole overlapping area to realize smooth transition.
7. The method of claim 1,
the brightness uniformization processing specifically includes: image histogram equalization processing or Gamma color correction processing.
8. The method of claim 1,
the brightness uniformization processing specifically includes:
acquiring a value of a color correction coefficient based on a preset target function;
the color correction coefficients include: a forward-view color correction coefficient f_F; a left-view color correction coefficient f_L; a right-view color correction coefficient f_R; and a backward-view color correction coefficient f_B;
The forward view is a top view corresponding to a camera arranged at the bow;
the backward view is a top view corresponding to a camera arranged at the stern;
the right view is a top view corresponding to a camera arranged on the right side of the ship body;
the left view is a top view corresponding to a camera arranged on the left side of the ship body;
wherein the preset objective function is:
F = (f_F·A_F − f_L·A_L)² + (f_F·B_F − f_R·B_R)² + (f_L·C_L − f_B·C_B)² + (f_R·D_R − f_B·D_B)²
wherein A_F represents the mean value of all pixel points of the forward view in the left-front overlapping region A;
A_L represents the mean value of all pixel points of the left view in the left-front overlapping region A;
B_F represents the mean value of all pixel points of the forward view in the right-front overlapping region B;
B_R represents the mean value of all pixel points of the right view in the right-front overlapping region B;
C_L represents the mean value of all pixel points of the left view in the left-rear overlapping region C;
C_B represents the mean value of all pixel points of the backward view in the left-rear overlapping region C;
D_R represents the mean value of all pixel points of the right view in the right-rear overlapping region D;
D_B represents the mean value of all pixel points of the backward view in the right-rear overlapping region D;
and correcting the colors of the stitched image according to the values of the color correction coefficients and the corresponding gray values of the RGB channels.
9. The method according to claim 8, wherein the obtaining the value of the color correction coefficient based on the preset objective function specifically comprises:
taking the partial derivatives of the preset objective function with respect to the forward-view color correction coefficient f_F, the left-view color correction coefficient f_L, the right-view color correction coefficient f_R, and the backward-view color correction coefficient f_B, setting them to zero, and converting the resulting equations into the matrix form M·(f_F, f_L, f_R, f_B)ᵀ = 0, where

M = | A_F²+B_F²   −A_F·A_L    −B_F·B_R    0           |
    | −A_F·A_L    A_L²+C_L²   0           −C_L·C_B    |
    | −B_F·B_R    0           B_R²+D_R²   −D_R·D_B    |
    | 0           −C_L·C_B    −D_R·D_B    C_B²+D_B²   |

and obtaining the values of the forward-view color correction coefficient f_F, the left-view color correction coefficient f_L, the right-view color correction coefficient f_R, and the backward-view color correction coefficient f_B by singular value decomposition.
10. A system for obtaining a panoramic image of a ship based on machine vision, the system comprising:
at least one processor;
and at least one memory communicatively coupled to the processor, wherein the memory stores program instructions executable by the processor, and the processor invokes the program instructions to implement the method for acquiring a panoramic image of a ship based on machine vision according to any one of claims 1 to 9.
CN202110476679.1A 2021-04-29 2021-04-29 Method and system for obtaining ship panoramic image based on machine vision Active CN113191974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110476679.1A CN113191974B (en) 2021-04-29 2021-04-29 Method and system for obtaining ship panoramic image based on machine vision


Publications (2)

Publication Number Publication Date
CN113191974A (en) 2021-07-30
CN113191974B (en) 2023-02-03

Family

ID=76980701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110476679.1A Active CN113191974B (en) 2021-04-29 2021-04-29 Method and system for obtaining ship panoramic image based on machine vision

Country Status (1)

Country Link
CN (1) CN113191974B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113916234A (en) * 2021-10-25 2022-01-11 中国人民解放军海军大连舰艇学院 Automatic planning method for ship collision avoidance route under complex dynamic condition

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413141A (en) * 2013-07-29 2013-11-27 西北工业大学 Ring illuminator and fusion recognition method utilizing ring illuminator illumination based on shape, grain and weight of tool
CN104574339A (en) * 2015-02-09 2015-04-29 上海安威士科技股份有限公司 Multi-scale cylindrical projection panorama image generating method for video monitoring
CN107274336A (en) * 2017-06-14 2017-10-20 电子科技大学 A kind of Panorama Mosaic method for vehicle environment
CN107424118A (en) * 2017-03-28 2017-12-01 天津大学 Based on the spherical panorama mosaic method for improving Lens Distortion Correction
CN109087251A (en) * 2018-08-30 2018-12-25 上海大学 A kind of vehicle-mounted panoramic image display method and system
CN109435852A (en) * 2018-11-08 2019-03-08 湖北工业大学 A kind of panorama type DAS (Driver Assistant System) and method for large truck
CN109883433A (en) * 2019-03-21 2019-06-14 中国科学技术大学 Vehicle positioning method in structured environment based on 360 degree of panoramic views
CN110287884A (en) * 2019-06-26 2019-09-27 长安大学 A kind of auxiliary drive in crimping detection method
CN110689506A (en) * 2019-08-23 2020-01-14 深圳市智顺捷科技有限公司 Panoramic stitching method, automotive panoramic stitching method and panoramic system thereof




Similar Documents

Publication Publication Date Title
CN107844750B (en) Water surface panoramic image target detection and identification method
CN109283538B (en) Marine target size detection method based on vision and laser sensor data fusion
CN112224132B (en) Vehicle panoramic all-around obstacle early warning method
CN109087251B (en) Vehicle-mounted panoramic image display method and system
WO2016112714A1 (en) Assistant docking method and system for vessel
CN109146947B (en) Marine fish three-dimensional image acquisition and processing method, device, equipment and medium
CN109741455A (en) A kind of vehicle-mounted stereoscopic full views display methods, computer readable storage medium and system
CN113301274B (en) Ship real-time video panoramic stitching method and system
CN112233188B (en) Calibration method of data fusion system of laser radar and panoramic camera
CN104778695A (en) Water sky line detection method based on gradient saliency
CN113223075A (en) Ship height measuring system and method based on binocular camera
CN113191974B (en) Method and system for obtaining ship panoramic image based on machine vision
CN110264395A (en) A kind of the camera lens scaling method and relevant apparatus of vehicle-mounted monocular panorama system
CN106403901A (en) Measuring apparatus and method for attitude of buoy
Hurtós et al. A novel blending technique for two-dimensional forward-looking sonar mosaicing
CN115239820A (en) Split type flying vehicle aerial view real-time splicing and parking space detection method
CN114926739B (en) Unmanned collaborative acquisition processing method for geographical space information on water and under water of inland waterway
CN112927233A (en) Marine laser radar and video combined target capturing method
CN115936995A (en) Panoramic splicing method for four-way fisheye cameras of vehicle
CN113525631A (en) Underwater terminal docking system and method based on optical visual guidance
CN106846243A (en) The method and device of three dimensional top panorama sketch is obtained in equipment moving process
CN115131720A (en) Ship berthing assisting method based on artificial intelligence
CN111860632B (en) Multipath image consistency fusion method
CN112802109A (en) Method for generating automobile aerial view panoramic image
CN108765292A (en) Image split-joint method based on the fitting of space triangular dough sheet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant