CN116245730A - Image stitching method, device, equipment and storage medium


Info

Publication number
CN116245730A
Authority
CN
China
Prior art keywords
determining
image acquisition
pixel
acquisition equipment
attribute
Prior art date
Legal status
Pending
Application number
CN202310180002.2A
Other languages
Chinese (zh)
Inventor
张扬
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310180002.2A
Publication of CN116245730A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/32: Indexing scheme for image data processing or generation, in general, involving image mosaicing
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides an image stitching method, device, equipment and storage medium, relates to the technical field of image processing, and in particular relates to the fields of artificial intelligence, intelligent transportation, automatic driving and autonomous parking. A specific implementation scheme is as follows: determining a stitching seam between images acquired by adjacent image acquisition devices; determining a pixel deviation of a specified feature point by using the stitching seam, wherein the pixel deviation of the specified feature point includes the pixel deviation of the specified feature point between the images acquired by the adjacent image acquisition devices; determining an external parameter change of the adjacent image acquisition devices by using the pixel deviation of the specified feature point and a pixel deviation of a specified pixel point in the stitching seam; and optimizing image stitching parameters by using the external parameter change, wherein the image stitching parameters are used for stitching the images acquired by the adjacent image acquisition devices. Because the external parameters are corrected according to the determined height change between the adjacent image acquisition devices, adverse effects such as ghosting in the stitched image can be eliminated.

Description

Image stitching method, device, equipment and storage medium
This application is a divisional application of the Chinese patent application entitled "Image stitching method, device, equipment and storage medium", with application number 202210271945.1 and a filing date of March 18, 2022.
Technical Field
The present disclosure relates to the technical field of image processing, in particular to the fields of artificial intelligence, intelligent transportation, automatic driving and autonomous parking, and more particularly to an image stitching method, device, equipment and storage medium.
Background
Surround-view rendering is a basic function of automatic driving that assists the driver in driving safely. Images are stitched by using the internal parameters and external parameters of the image acquisition devices, so that the environment around the vehicle is displayed. In the related art, when an external force causes the external parameters of an image acquisition device to change, obvious seams appear in the stitched image, which seriously degrades the user experience.
Disclosure of Invention
The disclosure provides a method, a device, equipment and a storage medium for image stitching.
According to an aspect of the present disclosure, there is provided a method of image stitching, the method may include the steps of:
determining a stitching seam between images acquired by adjacent image acquisition devices;
determining a pixel deviation of a specified feature point by using the stitching seam, wherein the pixel deviation of the specified feature point includes the pixel deviation of the specified feature point between the images acquired by the adjacent image acquisition devices;
determining an external parameter change of the adjacent image acquisition devices by using the pixel deviation of the specified feature point and a pixel deviation of a specified pixel point in the stitching seam; and
optimizing image stitching parameters by using the external parameter change, wherein the image stitching parameters are used for stitching the images acquired by the adjacent image acquisition devices.
According to another aspect of the present disclosure, there is provided an apparatus for image stitching, the apparatus may include:
a stitching seam determining module configured to determine a stitching seam between images acquired by adjacent image acquisition devices;
a specified feature point pixel deviation determining module configured to determine a pixel deviation of a specified feature point by using the stitching seam, wherein the pixel deviation of the specified feature point includes the pixel deviation of the specified feature point between the images acquired by the adjacent image acquisition devices;
an external parameter change determining module configured to determine an external parameter change of the adjacent image acquisition devices by using the pixel deviation of the specified feature point and a pixel deviation of a specified pixel point in the stitching seam; and
a parameter optimization module configured to optimize image stitching parameters by using the external parameter change, wherein the image stitching parameters are used for stitching the images acquired by the adjacent image acquisition devices.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program/instruction which, when executed by a processor, implements the method in any of the embodiments of the present disclosure.
The technology according to the present disclosure uses the principle that the misalignment at the stitching seam is proportional to the height change between adjacent image acquisition devices, and expresses this proportional relationship through the pixel deviation of the specified feature point and the pixel deviation of the specified pixel point in the stitching seam. Because the external parameters are corrected according to the determined height change between the adjacent image acquisition devices, adverse effects such as ghosting in the stitched image can be eliminated.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a first flowchart of an image stitching method according to the present disclosure;
FIG. 2 is a schematic diagram of image stitching according to the present disclosure;
FIG. 3 is a schematic diagram of a three-dimensional concave structural model for image stitching according to the present disclosure;
FIG. 4 is a schematic diagram of an image stitching effect according to the present disclosure;
FIG. 5 is a schematic diagram of a stitching seam according to the present disclosure;
FIG. 6 is a flowchart of determining the pixel deviation of a specified feature point according to the present disclosure;
FIG. 7 is a flowchart of determining the external parameter change of adjacent image acquisition devices according to the present disclosure;
FIG. 8 is a flowchart of determining a specified pixel point according to the present disclosure;
FIG. 9 is a flowchart of optimizing image stitching parameters according to the present disclosure;
FIG. 10 is a second flowchart of an image stitching method according to the present disclosure;
FIG. 11 is a schematic diagram of candidate stitching models according to the present disclosure;
FIG. 12 is a schematic diagram of an image stitching effect obtained using a three-dimensional concave structural model whose bottom area is greater than a corresponding threshold according to the present disclosure;
FIG. 13 is a schematic diagram of an image stitching effect obtained using a three-dimensional concave structural model whose bottom area is not greater than a corresponding threshold according to the present disclosure;
FIG. 14 is a schematic diagram of an image stitching apparatus according to the present disclosure;
FIG. 15 is a block diagram of an electronic device for implementing an image stitching method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As shown in fig. 1, the present disclosure relates to a method of image stitching, which may include the steps of:
S101: determining a stitching seam between images acquired by adjacent image acquisition devices;
S102: determining a pixel deviation of a specified feature point by using the stitching seam, wherein the pixel deviation of the specified feature point includes the pixel deviation of the specified feature point between the images acquired by the adjacent image acquisition devices;
S103: determining an external parameter change of the adjacent image acquisition devices by using the pixel deviation of the specified feature point and a pixel deviation of a specified pixel point in the stitching seam;
S104: optimizing image stitching parameters by using the external parameter change, wherein the image stitching parameters are used for stitching the images acquired by the adjacent image acquisition devices.
The execution subject of the present disclosure may be a local device, a cloud device, or the like that is communicatively connected to the image acquisition devices. The images acquired by the adjacent image acquisition devices may be the images to be stitched. Application scenarios of the present disclosure include image stitching for a vehicle with an automatic driving function, image stitching in photography scenarios, image stitching in surveillance scenarios, and the like.
Taking a vehicle with an automatic driving function as an example, the image acquisition devices may be cameras mounted around the vehicle body, such as infrared cameras or fisheye cameras. As shown in fig. 2, images acquired by the multiple image acquisition devices are first obtained. Next, the parameters of each image acquisition device are obtained, where the parameters may include the internal parameters and external parameters of the image acquisition device. Finally, the images are stitched according to the parameters of the image acquisition devices.
A system with four image acquisition devices is described as an example with reference to fig. 3. The image stitching principle is to map the images acquired by the four image acquisition devices onto the inner wall of a pre-constructed three-dimensional concave structural model. The mapping process uses the internal parameters and external parameters of the image acquisition devices. Finally, an effect diagram as shown in fig. 4 can be obtained.
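As an illustration of this mapping, the following is a minimal Python sketch (not taken from the patent) of how one point on the inner wall of the concave model could be projected into one camera's image using that camera's external parameters (rotation R, translation t) and a simple pinhole internal parameter matrix K. A real surround-view rig would typically use a fisheye camera model, so all names and values here are illustrative assumptions.

```python
import numpy as np

def project_bowl_point(p_world, R, t, K):
    """Map one 3D point on the concave (bowl) model inner wall into a camera image.

    p_world : (3,) point in the vehicle/world frame
    R, t    : camera external parameters (world -> camera rotation and translation)
    K       : 3x3 pinhole internal parameter matrix (assumed here for brevity;
              the real rig likely uses a fisheye model)
    Returns pixel coordinates (u, v).
    """
    p_cam = R @ p_world + t          # world frame -> camera frame (external parameters)
    u, v, w = K @ p_cam              # camera frame -> image plane (internal parameters)
    return np.array([u / w, v / w])  # perspective division

# Example: texture one bowl vertex from the "front" camera (all values illustrative)
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
print(project_bowl_point(np.array([1.0, 0.5, 10.0]), R, t, K))
```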
During the mapping process, the stitching seams between the images acquired by the adjacent image acquisition devices can be determined with an existing algorithm. For example, based on feature point recognition, the line segment on which the most identical feature points appear may be used as the stitching seam. A schematic diagram of the stitching seam is shown in fig. 5, where the left side is a top view and the right side is a three-dimensional view. As can be seen from fig. 5, the height of an image acquisition device on the vehicle body may change due to a load change or a tire pressure change of the vehicle. In this case, a noticeable ghosting effect may appear in the stitched image.
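The patent does not spell out a concrete seam-selection algorithm, so the sketch below is only one plausible reading of "the line segment on which the most identical feature points appear": candidate seam rays starting at the overlap-region corner are scored by how many matched feature points lie within a narrow band around them, and the best-scoring ray is kept. All names and thresholds are assumptions.

```python
import numpy as np

def choose_seam_angle(matched_pts, corner, angles_deg, band_px=10.0):
    """Pick the seam as the ray (from the overlap-region corner) that passes
    closest to the largest number of matched feature points.

    matched_pts : (N, 2) feature-point locations in the bird's-eye view overlap
    corner      : (2,) origin of the candidate seam rays (e.g. a vehicle corner)
    angles_deg  : candidate ray directions to test
    band_px     : a point "lies on" a ray if its distance to the ray is below this
    """
    best_angle, best_count = None, -1
    rel = matched_pts - corner
    for ang in angles_deg:
        d = np.array([np.cos(np.radians(ang)), np.sin(np.radians(ang))])
        along = rel @ d                                # signed distance along the ray
        perp = np.abs(rel @ np.array([-d[1], d[0]]))   # distance from the ray
        count = int(np.sum((along > 0) & (perp < band_px)))
        if count > best_count:
            best_angle, best_count = ang, count
    return best_angle, best_count

# Illustrative call: matched points clustered around the 45-degree diagonal
pts = np.array([[10.0, 11.0], [20.0, 19.5], [30.0, 31.0], [5.0, 40.0]])
print(choose_seam_angle(pts, corner=np.array([0.0, 0.0]),
                        angles_deg=np.arange(0, 91, 5)))
```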
Based on this, the present disclosure uses the stitching seam as a reference to determine the pixel deviation of a specified feature point between the images acquired by the adjacent image acquisition devices. For illustration, a system with four image acquisition devices is taken as an example, where the images acquired by the four image acquisition devices correspond to a front view, a rear view, a left view and a right view, respectively. For example, the front view and the left view, and the front view and the right view, are images acquired by adjacent image acquisition devices. Image stitching is described by taking the front view and the right view as an example. Each pixel point or pixel area on the stitching seam may first be determined. Next, the entity corresponding to each pixel point (area) is determined; taking the i-th pixel point (area) as an example, its corresponding entity is entity i, where i is a positive integer, the number of pixel points (areas) on the stitching seam may be n, n is a positive integer, and 1 ≤ i ≤ n. The coordinates of entity i in the front view and its coordinates in the right view are then determined, respectively. Finally, the pixel deviation is determined from the calculated coordinate difference. Ideally, the pixel deviation is zero, that is, no ghost appears in the stitched image. Otherwise, the pixel deviation may be recorded as (dxi, dyi), where dxi may be a differential representation of the abscissa component of the pixel deviation and dyi may be a differential representation of the ordinate component. Correspondingly, the pixel deviations corresponding to the pixel points (areas) in the stitching seam can be expressed as L1: (dx1, dy1), ..., Ln: (dxn, dyn). It will be appreciated that the closer a pixel point is to the vehicle, the smaller its pixel deviation; the farther a pixel point is from the vehicle, the larger its pixel deviation.
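A minimal sketch of this bookkeeping, under the assumption that the coordinates of each seam entity in both views are already available (for example from the bird's-eye projections of the two cameras); the deviation list L1..Ln is then just the per-point coordinate difference.

```python
import numpy as np

def seam_pixel_deviations(coords_front, coords_right):
    """Per-seam-pixel deviation list L1..Ln.

    coords_front, coords_right : (n, 2) arrays; row i holds the coordinates of
                                 entity i in the front view and the right view.
    Returns an (n, 2) array where each row is (dx_i, dy_i); rows are zero when
    no ghosting occurs at that seam pixel.
    """
    return np.asarray(coords_right, float) - np.asarray(coords_front, float)

# Illustrative values: the deviation grows with distance from the vehicle
front = np.array([[100.0, 200.0], [150.0, 260.0], [200.0, 320.0]])
right = np.array([[100.2, 200.1], [151.0, 261.0], [203.0, 323.0]])
print(seam_pixel_deviations(front, right))  # each row is (dx_i, dy_i)
```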
Feature recognition and feature matching are performed on the front view and the right view in the vicinity of the stitching seam to find the specified feature point. For example, the specified feature point may be a treetop, a leaf tip, an end point of a lane line, or the like. The pixel deviation of the specified feature point between the front view and the right view may be recorded as (Dx, Dy), where Dx may be a differential representation of the abscissa component of the pixel deviation and Dy may be a differential representation of the ordinate component.
The midpoint between the two occurrences of the feature point in the front view and the right view is projected onto the stitching seam, and the projection point can be used as the specified pixel point in the stitching seam. The pixel deviation of the specified pixel point is then obtained from the pixel deviations recorded for the pixel points in the stitching seam.
The external parameter change of the adjacent image acquisition devices is determined by using the proportional relationship between the pixel deviation of the specified feature point and the pixel deviation of the specified pixel point. The logic of this calculation is based on the linear relationship between the stitching seam misalignment and the vehicle height variation. The determined external parameter change can be expressed as the current height deviation between the first image acquisition device that acquires the front view and the second image acquisition device that acquires the right view. The current height deviation may differ from the initial height deviation; that is, the height deviation can be used to reflect a height change caused by a load change or a tire pressure change.
The determined external parameter change is used to correct the external parameters during image stitching, so that the stitching of the images acquired by the adjacent image acquisition devices can be optimized.
Through the above process, the principle that the misalignment of the stitching seam is proportional to the height change between adjacent image acquisition devices is used, and this proportional relationship is expressed through the pixel deviation of the specified feature point and the pixel deviation of the specified pixel point in the stitching seam. The external parameters are corrected according to the determined height change between the adjacent image acquisition devices, so that ghosting and similar defects in the stitched image can be eliminated.
As shown in fig. 6, in one embodiment, step S102 may include the following process:
S601: setting a range threshold;
S602: in each of the images, taking the pixel points within the range threshold of the stitching seam as candidate pixel points;
S603: determining, among the candidate pixel points, a pixel point meeting a predetermined condition as the specified feature point;
S604: calculating the pixel deviation of the specified feature point between the images acquired by the adjacent image acquisition devices.
The range threshold may be empirically set, for example, 5 pixel units, 10 pixel units, 15 pixel units, and the like.
Taking each pixel point (area) on the stitching seam as a center, every pixel point within the range threshold may be determined as a candidate pixel point.
The real objects among the candidate pixel points are then identified. Based on the identification result, a pixel point corresponding to a representative object, or a pixel point corresponding to an object with a higher recognizability, is selected as the specified feature point. Illustratively, the specified feature point may be a treetop, a leaf tip, an end point of a lane line, or the like. That is, the predetermined condition may be that the pixel point corresponds to a representative object or to an object with a higher recognizability.
Still taking the front view and the right view as an example, the pixel deviation of the specified feature point between the images acquired by the adjacent image acquisition devices can finally be determined by calculating the pixel deviation of the specified feature point between the front view and the right view.
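For steps S601 to S604, the candidate-pixel restriction can be implemented, for example, as a binary mask covering only the pixels within the range threshold of the stitching seam; feature detection in both views is then limited to that mask. The sketch below is an assumption about one possible implementation with OpenCV, not code from the patent.

```python
import numpy as np
import cv2

def seam_roi_mask(image_shape, seam_pts, range_threshold=10):
    """Binary mask of candidate pixels near the stitching seam.

    image_shape     : (h, w) of the view being searched
    seam_pts        : (n, 2) integer pixel coordinates along the seam polyline
    range_threshold : radius in pixels (e.g. 5, 10 or 15 as suggested above)
    """
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    # Draw the 1-pixel-wide seam polyline
    cv2.polylines(mask, [np.asarray(seam_pts, np.int32)], False, 255, 1)
    # Dilating the seam by the range threshold marks every pixel within that
    # distance of the seam as a candidate pixel
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (2 * range_threshold + 1, 2 * range_threshold + 1))
    return cv2.dilate(mask, kernel)
```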
Through the above process, the specified feature point is found in the vicinity of the stitching seam. Only a limited area near the stitching seam needs to be searched for feature points, so the amount of computation can be reduced and the real-time performance of image stitching is improved.
As shown in fig. 7, in one embodiment, step S103 may include the following process:
S701: determining the specified pixel point in the stitching seam according to the coordinates of the specified feature point in the images acquired by the adjacent image acquisition devices;
S702: acquiring the pixel deviation of the specified pixel point;
S703: determining the external parameter change of the adjacent image acquisition devices by using the proportional relationship between the pixel deviation of the specified feature point and the pixel deviation of the specified pixel point.
Still taking the front view and the right view as an example, when ghosting or a similar defect occurs, the specified feature point appears in both the front view and the right view with a certain pixel deviation between the two. Based on this, the specified pixel point can be determined in the stitching seam by using the line connecting the two occurrences of the specified feature point in the front view and the right view.
For example, the intersection point of this connecting line and the stitching seam may be taken as the specified pixel point. As another example, the projection of the midpoint of the connecting line onto the stitching seam may be taken as the specified pixel point.
After the specified pixel point is determined, its pixel deviation can be selected from the previously recorded pixel deviations of the pixel points (areas) on the stitching seam. The differential representation of the pixel deviation of the specified pixel point may be (dxi, dyi).
The external parameter change of the adjacent image acquisition devices is determined by using the proportional relationship between the pixel deviation of the specified feature point and the pixel deviation of the specified pixel point. For example, the pixel deviation of the specified feature point may be expressed as (Dx, Dy). The proportional relationship may then be expressed as c = Dx/dxi = Dy/dyi, where c represents the obtained height change coefficient, that is, the corresponding external parameter change.
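A minimal sketch of this ratio. In the ideal case the two ratios Dx/dxi and Dy/dyi agree; averaging them, and skipping near-zero denominators, are assumptions added here for numerical robustness.

```python
def height_change_coefficient(feature_dev, seam_dev, eps=1e-6):
    """c = Dx/dxi = Dy/dyi, the height change coefficient.

    feature_dev : (Dx, Dy) pixel deviation of the specified feature point
    seam_dev    : (dx_i, dy_i) recorded deviation of the specified seam pixel
    """
    Dx, Dy = feature_dev
    dxi, dyi = seam_dev
    ratios = [D / d for D, d in ((Dx, dxi), (Dy, dyi)) if abs(d) > eps]
    return sum(ratios) / len(ratios) if ratios else 0.0

# Illustrative values
print(height_change_coefficient((4.0, 6.0), (2.0, 3.0)))  # 2.0
```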
Through the above process, since the quantities involved in the calculation are differential representations, the height change coefficient can be determined through a derivative-style calculation, which helps determine the actual height change more accurately.
As shown in fig. 8, in one embodiment, step S701 may include the following process:
S801: acquiring a first coordinate of the specified feature point in a first image and a second coordinate of the specified feature point in a second image, wherein the first image and the second image are images acquired by the adjacent image acquisition devices;
S802: taking the intersection point of a coordinate connecting line and the stitching seam as the specified pixel point, wherein the coordinate connecting line is the line connecting the first coordinate and the second coordinate; or
S803: taking the projection of a coordinate midpoint onto the stitching seam as the specified pixel point, wherein the coordinate midpoint is the midpoint of the line connecting the first coordinate and the second coordinate.
The first coordinate of the specified feature point in the first image and the second coordinate of the specified feature point in the second image are acquired respectively. The first image may correspond to the front view in the previous example, and the second image may correspond to the right view.
The coordinates of the specified feature point may be obtained with conventional computer vision (CV) algorithms such as the Scale-Invariant Feature Transform (SIFT) algorithm, the Oriented FAST and Rotated BRIEF (ORB) algorithm, or the Features from Accelerated Segment Test (FAST) corner detection algorithm, or with a deep-learning-based feature extraction network such as SuperPoint.
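As one hedged example (assuming an OpenCV build that includes SIFT), the sketch below detects and matches keypoints only inside the seam-neighbourhood masks of the two views and returns the (Dx, Dy) deviation of the most confident match; ORB or SuperPoint could be swapped in just as well.

```python
import cv2
import numpy as np

def feature_deviation_near_seam(img_front, img_right, mask_front, mask_right):
    """Return the pixel deviation (Dx, Dy) of the best-matched feature point
    found inside the seam-neighbourhood masks of the two views, or None."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_front, mask_front)
    kp2, des2 = sift.detectAndCompute(img_right, mask_right)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if not good:
        return None
    best = min(good, key=lambda m: m.distance)
    p_front = np.array(kp1[best.queryIdx].pt)
    p_right = np.array(kp2[best.trainIdx].pt)
    return p_right - p_front   # the specified feature point's deviation (Dx, Dy)
```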
Once the first coordinate of the specified feature point in the first image and its second coordinate in the second image are obtained, the line connecting the first coordinate and the second coordinate can be determined.
In the first approach, the intersection point of this connecting line and the stitching seam is used as the specified pixel point.
In the second approach, the midpoint of the line connecting the first coordinate and the second coordinate is first determined and referred to as the coordinate midpoint. The coordinate midpoint is then projected onto the stitching seam to obtain a projection point, and the projection point is used as the specified pixel point.
Through the above process, the specified pixel point can be determined in a variety of ways.
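A minimal sketch of the second (projection) approach, under the assumption that the seam is represented as a list of pixel coordinates: the coordinate midpoint is "projected" onto the seam by picking the nearest seam pixel, whose previously recorded deviation (dxi, dyi) then serves as the specified pixel point's deviation.

```python
import numpy as np

def designated_pixel_by_projection(p_first, p_second, seam_pts):
    """Index of the specified pixel point on the seam.

    p_first, p_second : (2,) coordinates of the specified feature point in the
                        first image and the second image
    seam_pts          : (n, 2) pixel coordinates along the stitching seam
    """
    midpoint = (np.asarray(p_first, float) + np.asarray(p_second, float)) / 2.0
    dists = np.linalg.norm(np.asarray(seam_pts, float) - midpoint, axis=1)
    return int(np.argmin(dists))

# Illustrative use together with the seam deviation list recorded earlier
seam = np.array([[100, 200], [150, 260], [200, 320]])
i = designated_pixel_by_projection([148, 255], [154, 263], seam)
print(i)  # 1 -> the recorded deviation (dx_1, dy_1) is used as the seam deviation
```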
As shown in fig. 9, in one embodiment, step S104 may include the following process:
S901: determining the current height difference of the adjacent image acquisition devices according to the external parameter change;
S902: comparing the current height difference of the adjacent image acquisition devices with a pre-acquired initial height difference to obtain a comparison result;
S903: when the comparison result exceeds a corresponding threshold, correcting the current height difference according to the initial height difference to obtain a correction result;
S904: taking the correction result as the optimized image stitching parameter.
A height variation dh (e.g., 0.1 mm) may be preset. The current height difference of the adjacent image acquisition devices is then calculated by using the determined external parameter change (the coefficient c) and the preset height variation.
As in the previous example, dh may be a differential representation of the preset height variation. The current height difference of the adjacent image acquisition devices can be calculated as the product of the external parameter change and the preset height variation, that is, the current height difference dH = c × dh.
The initial height difference may be known data set when each image acquisition device is first used or shipped. By comparing the current height difference of the adjacent image acquisition devices with the pre-acquired initial height difference, the height change of each image acquisition device can be determined.
The corresponding threshold may be an empirically determined height difference beyond which image stitching is affected. When the comparison result shows that the current height difference of the adjacent image acquisition devices exceeds the corresponding threshold, correction is needed. Accordingly, the current height difference is corrected according to the initial height difference to obtain a correction result.
The correction result can be used as the optimized image stitching parameter.
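Steps S901 to S904 can be summarized in the following sketch. The text above does not spell out the exact correction formula, so returning the excess height change as the external parameter height correction is an assumption, and the units and threshold are purely illustrative.

```python
def optimize_height_extrinsic(c, initial_diff, dh=0.1, threshold=1.0):
    """Sketch of S901-S904 with illustrative units (millimetres).

    c            : external parameter change coefficient from the deviation ratio
    initial_diff : initial (factory) height difference of the adjacent devices
    dh           : preset differential height variation, e.g. 0.1 mm
    threshold    : height change beyond which ghosting becomes visible
    Returns the height correction to apply to the stitching external parameters.
    """
    current_diff = c * dh                      # S901: current height difference dH
    change = current_diff - initial_diff       # S902: compare with the initial value
    if abs(change) > threshold:                # S903: correction is needed
        return change                          # S904: corrected value feeds the stitcher
    return 0.0

print(optimize_height_extrinsic(c=25.0, initial_diff=0.0))  # 2.5 mm correction
```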
Through the above process, image stitching is performed with the optimized image stitching parameters, so the influence of the height change can be eliminated.
As shown in fig. 10, in one embodiment, before step S101, the following procedure may be further included:
S1001: detecting a position attribute of the image acquisition devices, wherein the position attribute includes an open attribute or a narrow attribute;
S1002: selecting a target stitching model from preset candidate stitching models according to the position attribute, wherein the candidate stitching models are models for stitching the images and include three-dimensional concave structural models with different bottom areas.
The position attribute of the image acquisition devices may be determined using an on-board radar, an on-board Global Positioning System (GPS) receiver, or the like.
For example, a vehicle on which the image acquisition devices are mounted is generally also equipped with a radar device that detects obstacles around the vehicle and their distances from the vehicle. When there is an obstacle on either side of the vehicle whose distance is below a corresponding threshold, the position attribute of the image acquisition devices may be determined to be the narrow attribute; otherwise, the position attribute may be determined to be the open attribute.
In addition, the position of the vehicle (and thus of the image acquisition devices) can be determined using the on-board GPS. For example, in a relatively enclosed area such as a residential community or a parking lot, the position attribute of the image acquisition devices may be determined to be the narrow attribute; in a relatively open area such as a road, the position attribute may be determined to be the open attribute.
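A hedged sketch combining the radar rule and the GPS rule described above; the area-type labels and the distance threshold are illustrative assumptions rather than values from the patent.

```python
def position_attribute(radar_distances_m, area_type=None, near_threshold_m=2.0):
    """Classify the capture position as "narrow" or "open".

    radar_distances_m : distances from the vehicle to obstacles on its sides
    area_type         : optional GPS-derived hint, e.g. "parking_lot",
                        "community" or "road" (labels are illustrative)
    near_threshold_m  : an obstacle closer than this on either side -> narrow
    """
    if area_type in ("parking_lot", "community"):
        return "narrow"
    if area_type == "road":
        return "open"
    if any(d < near_threshold_m for d in radar_distances_m):
        return "narrow"
    return "open"

print(position_attribute([5.2, 1.4]))                    # narrow (close obstacle)
print(position_attribute([8.0, 9.5], area_type="road"))  # open
```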
As shown in fig. 11, the candidate stitching models may include at least two types, which may be three-dimensional concave structural models with different bottom areas. The model on the left side of fig. 11 is a three-dimensional concave structural model whose bottom area is larger than the corresponding threshold, and the model on the right side of fig. 11 is a three-dimensional concave structural model whose bottom area is smaller than the corresponding threshold.
With the three-dimensional concave structural model whose bottom area is larger than the corresponding threshold, as shown in fig. 12, the larger bottom area makes the distant ground render well after stitching (the lane lines in the left view of fig. 12 look better), but the distortion of nearby three-dimensional objects (the obstacle vehicle in the right view of fig. 12) is more severe.
With the three-dimensional concave structural model whose bottom area is smaller than the corresponding threshold, as shown in fig. 13, the distant ground is distorted after stitching (the lane lines in the left view of fig. 13 are severely curved), but nearby three-dimensional objects (the obstacle vehicle in the right view of fig. 13) render better.
In the present embodiment, two types of candidate stitching models are taken as examples. In practice, however, the method is not limited to this; for example, three types of candidate stitching models with large, medium and small bottom areas may be designed, or an even finer-grained set of candidate stitching models may be provided.
Through the above process, the position attribute of the image acquisition devices can be judged from sensor or positioning information, and the three-dimensional concave structural model used for image stitching can be selected based on this position attribute, so that the surrounding environment can be displayed clearly in different scenes.
In one embodiment, in the case that the position attribute is an open attribute, a three-dimensional concave structural model with a bottom area smaller than a corresponding threshold value is taken as the target stitching model.
When the position attribute is the open attribute, the clarity of the distant view has higher priority than that of the near view. In this case, a three-dimensional concave structural model whose bottom area is smaller than the corresponding threshold may be selected as the target stitching model.
In one embodiment, in the case where the position attribute is a narrow attribute, a three-dimensional concave structural model having a bottom area not smaller than a corresponding threshold value is taken as the target stitching model.
When the position attribute is the narrow attribute, the clarity of the near view has higher priority than that of the distant view. In this case, a three-dimensional concave structural model whose bottom area is not smaller than the corresponding threshold may be selected as the target stitching model.
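The selection itself then reduces to a simple rule; the model objects below are placeholders, since the actual three-dimensional concave structural models (the bowl meshes with different bottom areas) are built elsewhere.

```python
def select_target_model(position_attribute, small_bottom_model, large_bottom_model):
    """Open scene  -> prioritize the distant ground: small bottom area.
    Narrow scene -> prioritize nearby solid objects: bottom area not smaller
    than the threshold (represented here by the large-bottom placeholder)."""
    if position_attribute == "open":
        return small_bottom_model
    return large_bottom_model

print(select_target_model("open", "bowl_small_bottom", "bowl_large_bottom"))
```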
Compared with the prior art, in which a single three-dimensional concave structural model with a compromise bottom area has to be tuned repeatedly, the present embodiment presets at least two types of three-dimensional concave structural models with different bottom areas and selects among them according to the position attribute, thereby meeting the clarity requirements of both near and distant views.
In one embodiment, the image acquisition devices include image acquisition devices disposed at different positions on the vehicle body.
With this arrangement, adverse effects such as stitching ghosts caused by tire pressure or load changes during image stitching can be eliminated.
As shown in fig. 14, the present disclosure relates to an apparatus for image stitching, which may include:
a stitching seam determining module 1401, configured to determine a stitching seam between images acquired by adjacent image acquisition devices;
a specified feature point pixel deviation determining module 1402, configured to determine a pixel deviation of a specified feature point by using the stitching seam, wherein the pixel deviation of the specified feature point includes the pixel deviation of the specified feature point between the images acquired by the adjacent image acquisition devices;
an external parameter change determining module 1403, configured to determine an external parameter change of the adjacent image acquisition devices by using the pixel deviation of the specified feature point and a pixel deviation of a specified pixel point in the stitching seam;
a parameter optimization module 1404, configured to optimize image stitching parameters by using the external parameter change, wherein the image stitching parameters are used for stitching the images acquired by the adjacent image acquisition devices.
In one embodiment, the specified feature point pixel deviation determining module 1402 may include:
a range threshold setting submodule configured to set a range threshold;
a candidate pixel point determining submodule configured to take, in each of the images, the pixel points within the range threshold of the stitching seam as candidate pixel points;
a specified feature point determining submodule configured to determine, among the candidate pixel points, a pixel point meeting a predetermined condition as the specified feature point;
a pixel deviation calculating submodule configured to calculate the pixel deviation of the specified feature point between the images acquired by the adjacent image acquisition devices.
In one embodiment, the external parameter change determining module 1403 may include:
a specified pixel point determining submodule configured to determine the specified pixel point in the stitching seam according to the coordinates of the specified feature point in the images acquired by the adjacent image acquisition devices;
a specified pixel point pixel deviation determining submodule configured to acquire the pixel deviation of the specified pixel point;
an external parameter change determining execution submodule configured to determine the external parameter change of the adjacent image acquisition devices by using the proportional relationship between the pixel deviation of the specified feature point and the pixel deviation of the specified pixel point.
In one embodiment, the specified pixel point determining submodule may include:
a coordinate acquiring submodule configured to acquire a first coordinate of the specified feature point in a first image and a second coordinate of the specified feature point in a second image, wherein the first image and the second image are images acquired by the adjacent image acquisition devices;
the specified pixel point determining submodule being specifically configured to take the intersection point of a coordinate connecting line and the stitching seam as the specified pixel point, wherein the coordinate connecting line is the line connecting the first coordinate and the second coordinate; or
specifically configured to take the projection of a coordinate midpoint onto the stitching seam as the specified pixel point, wherein the coordinate midpoint is the midpoint of the line connecting the first coordinate and the second coordinate.
In one embodiment, the parameter optimization module 1404 may include:
a current height difference determining submodule configured to determine the current height difference of the adjacent image acquisition devices according to the external parameter change;
a comparison result determining submodule configured to compare the current height difference of the adjacent image acquisition devices with a pre-acquired initial height difference to obtain a comparison result;
a correction result determining submodule configured to, when the comparison result exceeds a corresponding threshold, correct the current height difference according to the initial height difference to obtain a correction result; and
to take the correction result as the optimized image stitching parameter.
In one embodiment, the apparatus further includes a target stitching model determining module, which includes:
a position attribute detecting submodule configured to detect a position attribute of the image acquisition devices, wherein the position attribute includes an open attribute or a narrow attribute;
a target stitching model determining execution submodule configured to select a target stitching model from preset candidate stitching models according to the position attribute, wherein the candidate stitching models are models for stitching the images and include three-dimensional concave structural models with different bottom areas.
In one embodiment, when the position attribute is the open attribute, the target stitching model determining execution submodule is specifically configured to:
take a three-dimensional concave structural model whose bottom area is smaller than the corresponding threshold as the target stitching model.
In one embodiment, when the position attribute is the narrow attribute, the target stitching model determining execution submodule is specifically configured to:
take a three-dimensional concave structural model whose bottom area is not smaller than the corresponding threshold as the target stitching model.
In one embodiment, the image acquisition devices include image acquisition devices disposed at different positions on the vehicle body.
In the technical solution of the present disclosure, the acquisition, storage, and application of the user personal information involved all comply with the relevant laws and regulations, and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 15 illustrates a schematic block diagram of an example electronic device 1500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 15, the device 1500 includes a computing unit 1510 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1520 or a computer program loaded from a storage unit 1580 into a random access memory (RAM) 1530. The RAM 1530 may also store various programs and data required for the operation of the device 1500. The computing unit 1510, the ROM 1520, and the RAM 1530 are connected to each other by a bus 1540. An input/output (I/O) interface 1550 is also connected to the bus 1540.
Various components in device 1500 are connected to I/O interface 1550, including: an input unit 1560 such as a keyboard, mouse, or the like; an output unit 1570 such as various types of displays, speakers, and the like; a storage unit 1580 such as a magnetic disk, an optical disk, or the like; and a communication unit 1590 such as a network card, modem, wireless communication transceiver, etc. The communication unit 1590 allows the device 1500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1510 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1510 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1510 performs the respective methods and processes described above, for example, the method of image stitching. For example, in some embodiments, the method of image stitching may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1580. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1500 via the ROM 1520 and/or the communication unit 1590. When the computer program is loaded into the RAM 1530 and executed by the computing unit 1510, one or more steps of the method of image stitching described above may be performed. Alternatively, in other embodiments, the computing unit 1510 may be configured to perform the method of image stitching by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various forms of flows shown above may be used, and steps may be reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (27)

1. A method of image stitching, comprising:
determining a stitching seam between images acquired by adjacent image acquisition devices;
determining a pixel deviation of a specified feature point by using the stitching seam, wherein the pixel deviation of the specified feature point comprises the pixel deviation of the specified feature point between the images acquired by the adjacent image acquisition devices;
taking the projection of a coordinate midpoint onto the stitching seam as a specified pixel point, wherein the coordinate midpoint is the midpoint of the line connecting a first coordinate of the specified feature point in a first image and a second coordinate of the specified feature point in a second image, and the first image and the second image are images acquired by the adjacent image acquisition devices;
determining an external parameter change of the adjacent image acquisition devices by using the pixel deviation of the specified feature point and a pixel deviation of the specified pixel point; and
optimizing image stitching parameters by using the external parameter change, wherein the image stitching parameters are used for stitching the images acquired by the adjacent image acquisition devices.
2. The method of claim 1, wherein the determining a stitching seam between images acquired by adjacent image acquisition devices comprises:
performing feature recognition on the images acquired by the adjacent image acquisition devices; and
according to a result of the feature recognition, taking the line segment on which the most identical feature points appear in the images acquired by the adjacent image acquisition devices as the stitching seam between the images acquired by the adjacent image acquisition devices.
3. The method of claim 1, wherein the determining a pixel deviation of a specified feature point by using the stitching seam comprises:
setting a range threshold;
in each of the images, taking the pixel points within the range threshold of the stitching seam as candidate pixel points;
determining, among the candidate pixel points, a pixel point meeting a predetermined condition as the specified feature point; and
calculating the pixel deviation of the specified feature point between the images acquired by the adjacent image acquisition devices.
4. The method of claim 1, wherein the determining a pixel deviation of a specified feature point by using the stitching seam comprises:
determining, in each of the images, the entity corresponding to a pixel point on the stitching seam;
determining the coordinates of the same entity in each of the images; and
determining the pixel deviation of the specified feature point according to the difference between the coordinates of the same entity in each of the images.
5. The method of claim 1, wherein the determining an external parameter change of the adjacent image acquisition devices by using the pixel deviation of the specified feature point and the pixel deviation of the specified pixel point in the stitching seam comprises:
determining the specified pixel point in the stitching seam according to the coordinates of the specified feature point in the images acquired by the adjacent image acquisition devices;
acquiring the pixel deviation of the specified pixel point; and
determining the external parameter change of the adjacent image acquisition devices by using the proportional relationship between the pixel deviation of the specified feature point and the pixel deviation of the specified pixel point.
6. The method of claim 1, wherein the optimizing image stitching parameters by using the external parameter change comprises:
determining a current height difference of the adjacent image acquisition devices according to the external parameter change;
comparing the current height difference of the adjacent image acquisition devices with a pre-acquired initial height difference to obtain a comparison result;
when the comparison result exceeds a corresponding threshold, correcting the current height difference according to the initial height difference to obtain a correction result; and
taking the correction result as the optimized image stitching parameter.
7. The method of claim 1, further comprising, before the determining a stitching seam between images acquired by adjacent image acquisition devices:
detecting a position attribute of the image acquisition devices, wherein the position attribute comprises an open attribute or a narrow attribute; and
selecting a target stitching model from preset candidate stitching models according to the position attribute, wherein the candidate stitching models are models for stitching the images and comprise three-dimensional concave structural models with different bottom areas.
8. The method of claim 7, wherein, when the position attribute is the open attribute, the selecting a target stitching model from preset candidate stitching models comprises:
taking a three-dimensional concave structural model whose bottom area is smaller than a corresponding threshold as the target stitching model.
9. The method of claim 8, wherein determining that the position attribute is the open attribute comprises:
determining that the position attribute is the open attribute when it is detected that the obstacles on both sides of a vehicle carrying the image acquisition devices are at distances above a corresponding threshold; or
determining that the position attribute is the open attribute when it is detected that the vehicle carrying the image acquisition devices is located in a relatively open area.
10. The method of claim 7, wherein, when the position attribute is the narrow attribute, the selecting a target stitching model from preset candidate stitching models comprises:
taking a three-dimensional concave structural model whose bottom area is not smaller than a corresponding threshold as the target stitching model.
11. The method of claim 10, wherein determining that the position attribute is the narrow attribute comprises:
determining that the position attribute is the narrow attribute when it is detected that an obstacle on either side of a vehicle carrying the image acquisition devices is at a distance below a corresponding threshold; or
determining that the position attribute is the narrow attribute when it is detected that the vehicle carrying the image acquisition devices is located in a relatively enclosed area.
12. The method of any one of claims 1 to 11, wherein the image acquisition devices comprise image acquisition devices disposed at different positions on a vehicle body.
13. An apparatus for image stitching, comprising:
a stitching seam determining module, configured to determine a stitching seam of images acquired by adjacent image acquisition equipment;
a pixel deviation determining module, configured to determine a pixel deviation of a specified feature point by utilizing the stitching seam, wherein the pixel deviation of the specified feature point comprises a pixel deviation of the specified feature point in the images acquired by the adjacent image acquisition equipment;
an external parameter change condition determining module, configured to determine an external parameter change condition of the adjacent image acquisition equipment by utilizing the pixel deviation of the specified feature point and a pixel deviation of a specified pixel point;
and a parameter optimization module, configured to optimize image stitching parameters by utilizing the external parameter change condition, wherein the image stitching parameters are used for stitching the images acquired by the adjacent image acquisition equipment;
wherein the external parameter change condition determining module comprises a specified pixel point determining submodule, specifically configured to take, as the specified pixel point, a projection point obtained by projecting a coordinate midpoint onto the stitching seam, wherein the coordinate midpoint is a midpoint of a connection line between a first coordinate of the specified feature point in a first image and a second coordinate of the specified feature point in a second image, and the first image and the second image are the images acquired by the adjacent image acquisition equipment.
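The specified pixel point described above (the projection of the coordinate midpoint onto the stitching seam) can be computed as in the following sketch, which assumes the seam is locally modelled as a straight segment; that modelling choice and the function name are assumptions, not part of the claim.

```python
import numpy as np

def project_midpoint_onto_seam(p_first, p_second, seam_start, seam_end):
    """Take the midpoint of the line connecting the feature point's coordinates in
    the first and second images, then project it onto the stitching seam segment
    to obtain the specified pixel point."""
    p_first, p_second = np.asarray(p_first, float), np.asarray(p_second, float)
    a, b = np.asarray(seam_start, float), np.asarray(seam_end, float)
    midpoint = 0.5 * (p_first + p_second)
    seam_dir = b - a
    t = np.dot(midpoint - a, seam_dir) / np.dot(seam_dir, seam_dir)
    t = float(np.clip(t, 0.0, 1.0))  # keep the projection on the seam segment
    return a + t * seam_dir          # the specified pixel point
```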
14. The apparatus of claim 13, wherein the stitching seam determining module is configured to:
perform feature recognition on the images acquired by the adjacent image acquisition equipment;
and, according to a result of the feature recognition, take, as the stitching seam of the images acquired by the adjacent image acquisition equipment, a line segment containing the most feature points that are identical in those images.
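One possible reading of claim 14 is sketched below with OpenCV feature matching; restricting candidate seams to image columns, the ORB detector, and the band width around each candidate column are all assumptions for illustration, not the claimed implementation.

```python
import cv2
import numpy as np

def choose_seam_column(img_a, img_b, band_px=5):
    """Match features between two adjacent views and pick, among candidate
    vertical seam lines, the column with the most matched feature points
    within a small band around it."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return img_a.shape[1] // 2  # no features found: fall back to the middle column
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    xs = np.array([kp_a[m.queryIdx].pt[0] for m in matches])
    counts = [int(np.sum(np.abs(xs - col) <= band_px)) for col in range(img_a.shape[1])]
    return int(np.argmax(counts))   # column used as the stitching seam
```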
15. The apparatus of claim 13, wherein the pixel deviation determining module comprises:
a range threshold setting submodule, configured to set a range threshold;
a candidate pixel point determining submodule, configured to take, in each of the images, pixel points within the range threshold of the stitching seam as candidate pixel points;
a specified feature point determining submodule, configured to determine, as the specified feature point, a pixel point that appears in the candidate pixel points and satisfies a predetermined condition;
and a pixel deviation calculating submodule, configured to calculate the pixel deviation of the specified feature point in the images acquired by the adjacent image acquisition equipment.
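The candidate-point filtering of claim 15 can be sketched as follows; the keypoint tuple layout and the minimum-response test (standing in for the claim's unspecified "predetermined condition") are assumptions for illustration.

```python
def select_specified_feature_points(keypoints, seam_x, range_threshold=10.0, min_response=0.01):
    """keypoints: iterable of (x, y, response) tuples.
    Keep points within the range threshold of the seam as candidates, then
    treat candidates meeting the assumed condition as specified feature points."""
    candidates = [kp for kp in keypoints if abs(kp[0] - seam_x) <= range_threshold]
    return [kp for kp in candidates if kp[2] >= min_response]
```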
16. The apparatus of claim 13, wherein the pixel deviation determining module is configured to:
determine an entity corresponding to a pixel point on the stitching seam in each of the images;
determine coordinates of the same entity in each of the images;
and determine the pixel deviation of the specified feature point according to a difference between the coordinates of the same entity in each of the images.
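A sketch of the coordinate-difference computation in claim 16 is given below; averaging over several entities is an added assumption, since the claim only requires the difference of the same entity's coordinates in each image.

```python
import numpy as np

def pixel_deviation_from_entities(coords_in_first, coords_in_second):
    """For entities visible near the seam in both views, compute the pixel
    deviation from the difference of the same entity's coordinates in each
    image, averaged over all provided entities."""
    deviations = [np.linalg.norm(np.subtract(p, q))
                  for p, q in zip(coords_in_first, coords_in_second)]
    return float(np.mean(deviations)) if deviations else 0.0
```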
17. The apparatus of claim 13, wherein the external parameter change condition determining module comprises:
a specified pixel point determining submodule, configured to determine the specified pixel point in the stitching seam according to coordinates of the specified feature point in the images acquired by the adjacent image acquisition equipment;
a pixel deviation acquiring submodule, configured to acquire the pixel deviation of the specified pixel point;
and an external parameter change condition determining and executing submodule, configured to determine the external parameter change condition of the adjacent image acquisition equipment by utilizing a proportional relation between the pixel deviation of the specified feature point and the pixel deviation of the specified pixel point.
18. The apparatus of claim 13, wherein the parameter optimization module comprises:
a current height difference determining submodule, configured to determine a current height difference of the adjacent image acquisition equipment according to the external parameter change condition;
a comparison result determining submodule, configured to compare the current height difference of the adjacent image acquisition equipment with an initial height difference acquired in advance to obtain a comparison result;
and a correction result determining submodule, configured to correct the current height difference according to the initial height difference to obtain a correction result in a case where the comparison result exceeds a corresponding threshold value, and to take the correction result as an optimized image stitching parameter.
19. The apparatus of claim 13, further comprising a target stitching model determining module, the target stitching model determining module comprising:
a position attribute detecting submodule, configured to detect a position attribute of the image acquisition equipment, wherein the position attribute comprises an open attribute or a narrow attribute;
and a target stitching model determining and executing submodule, configured to select a target stitching model from preset candidate stitching models according to the position attribute, wherein the candidate stitching models are models for stitching the images and comprise models of three-dimensional concave structures with different bottom areas.
20. The apparatus of claim 19, wherein, in a case where the position attribute is the open attribute, the target stitching model determining and executing submodule is specifically configured to:
take a three-dimensional concave structural model with a bottom area smaller than a corresponding threshold value as the target stitching model.
21. The apparatus of claim 20, wherein determining that the position attribute is the open attribute comprises:
determining that the position attribute is the open attribute in a case where it is detected that an obstacle whose distance from the vehicle is greater than a corresponding threshold value exists on either side of the vehicle carrying the image acquisition equipment; or
determining that the position attribute is the open attribute in a case where it is detected that the vehicle carrying the image acquisition equipment is located in a relatively open area.
22. The apparatus of claim 19, wherein, in a case where the position attribute is the narrow attribute, the target stitching model determining and executing submodule is specifically configured to:
take a three-dimensional concave structural model with a bottom area not smaller than a corresponding threshold value as the target stitching model.
23. The apparatus of claim 22, wherein determining that the position attribute is the narrow attribute comprises:
determining that the position attribute is the narrow attribute in a case where it is detected that an obstacle whose distance from the vehicle is lower than a corresponding threshold value exists on either side of the vehicle carrying the image acquisition equipment; or
determining that the position attribute is the narrow attribute in a case where it is detected that the vehicle carrying the image acquisition equipment is located in a relatively closed area.
24. The apparatus of any one of claims 13 to 23, wherein the image acquisition device comprises image acquisition devices disposed at different locations of a vehicle body.
25. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 12.
26. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1 to 12.
27. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the method of any of claims 1 to 12.
CN202310180002.2A 2022-03-18 2022-03-18 Image stitching method, device, equipment and storage medium Pending CN116245730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310180002.2A CN116245730A (en) 2022-03-18 2022-03-18 Image stitching method, device, equipment and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210271945.1A CN114757824B (en) 2022-03-18 2022-03-18 Image splicing method, device, equipment and storage medium
CN202310180002.2A CN116245730A (en) 2022-03-18 2022-03-18 Image stitching method, device, equipment and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202210271945.1A Division CN114757824B (en) 2022-03-18 2022-03-18 Image splicing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116245730A true CN116245730A (en) 2023-06-09

Family

ID=82327165

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310180002.2A Pending CN116245730A (en) 2022-03-18 2022-03-18 Image stitching method, device, equipment and storage medium
CN202210271945.1A Active CN114757824B (en) 2022-03-18 2022-03-18 Image splicing method, device, equipment and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210271945.1A Active CN114757824B (en) 2022-03-18 2022-03-18 Image splicing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (2) CN116245730A (en)


Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100446037C (en) * 2007-08-31 2008-12-24 北京工业大学 Large cultural heritage picture pattern split-joint method based on characteristic
CN101646022B (en) * 2009-09-04 2011-11-16 华为终端有限公司 Image splicing method and system thereof
CN104331872B (en) * 2014-11-26 2017-06-30 中测新图(北京)遥感技术有限责任公司 Image split-joint method
US10136055B2 (en) * 2016-07-29 2018-11-20 Multimedia Image Solution Limited Method for stitching together images taken through fisheye lens in order to produce 360-degree spherical panorama
US9990753B1 (en) * 2017-01-11 2018-06-05 Macau University Of Science And Technology Image stitching
CN106952219B (en) * 2017-03-14 2020-11-06 成都通甲优博科技有限责任公司 Image generation method for correcting fisheye camera based on external parameters
CN108171759A (en) * 2018-01-26 2018-06-15 上海小蚁科技有限公司 The scaling method of double fish eye lens panorama cameras and device, storage medium, terminal
CN112132902B (en) * 2019-06-24 2024-01-16 上海安亭地平线智能交通技术有限公司 Vehicle-mounted camera external parameter adjusting method and device, electronic equipment and medium
CN113066158B (en) * 2019-12-16 2023-03-10 杭州海康威视数字技术股份有限公司 Vehicle-mounted all-round looking method and device
CN111462172B (en) * 2020-02-24 2023-03-24 西安电子科技大学 Three-dimensional panoramic image self-adaptive generation method based on driving scene estimation
CN112365406B (en) * 2021-01-13 2021-06-25 芯视界(北京)科技有限公司 Image processing method, device and readable storage medium
CN113688935A (en) * 2021-09-03 2021-11-23 阿波罗智能技术(北京)有限公司 High-precision map detection method, device, equipment and storage medium
CN114187366A (en) * 2021-12-10 2022-03-15 北京有竹居网络技术有限公司 Camera installation correction method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173071A (en) * 2023-11-02 2023-12-05 青岛天仁微纳科技有限责任公司 Image stitching method of nano-imprinting mold
CN117173071B (en) * 2023-11-02 2024-01-30 青岛天仁微纳科技有限责任公司 Image stitching method of nano-imprinting mold

Also Published As

Publication number Publication date
CN114757824B (en) 2023-03-21
CN114757824A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN109461211B (en) Semantic vector map construction method and device based on visual point cloud and electronic equipment
CN113989450B (en) Image processing method, device, electronic equipment and medium
EP4027299A2 (en) Method and apparatus for generating depth map, and storage medium
US20220277478A1 (en) Positioning Method and Apparatus
CN113936198B (en) Low-beam laser radar and camera fusion method, storage medium and device
JP7422105B2 (en) Obtaining method, device, electronic device, computer-readable storage medium, and computer program for obtaining three-dimensional position of an obstacle for use in roadside computing device
EP3876142A1 (en) Map building method, apparatus and system, and storage medium
CN113706704B (en) Method and equipment for planning route based on high-precision map and automatic driving vehicle
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN112487979A (en) Target detection method, model training method, device, electronic device and medium
CN112836698A (en) Positioning method, positioning device, storage medium and electronic equipment
CN114662587A (en) Three-dimensional target sensing method, device and system based on laser radar
CN116245730A (en) Image stitching method, device, equipment and storage medium
CN112529011A (en) Target detection method and related device
CN115147809B (en) Obstacle detection method, device, equipment and storage medium
CN115239899B (en) Pose map generation method, high-precision map generation method and device
CN113112551B (en) Camera parameter determining method and device, road side equipment and cloud control platform
CN114429631B (en) Three-dimensional object detection method, device, equipment and storage medium
CN116188587A (en) Positioning method and device and vehicle
CN112598736A (en) Map construction based visual positioning method and device
CN111784659A (en) Image detection method and device, electronic equipment and storage medium
CN118072280A (en) Method and device for detecting change of traffic light, electronic equipment and automatic driving vehicle
CN114612544B (en) Image processing method, device, equipment and storage medium
CN115345919B (en) Depth determination method and device, electronic equipment and storage medium
CN115164924A (en) Fusion positioning method, system, equipment and storage medium based on visual AI

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination