CN117333528A - Image processing method, device, processing equipment and storage medium
- Publication number: CN117333528A
- Application number: CN202311393696.4A
- Authority: CN (China)
- Prior art keywords: images, camera, coordinate system, model, image
- Prior art date: 2023-10-25
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
All classifications fall under G (Physics), G06 (Computing; calculating or counting), G06T (Image data processing or generation, in general):
- G06T7/60: Image analysis; analysis of geometric attributes
- G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/70: Image analysis; determining position or orientation of objects or cameras
- G06T2200/08: Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
- G06T2207/30244: Indexing scheme for image analysis or image enhancement; subject of image; camera pose
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an image processing method, an image processing device, processing equipment and a storage medium, and relates to the technical field of image processing. The image processing method comprises the following steps: acquiring a plurality of images of a target object under a plurality of view angles and a first camera coordinate corresponding to each image, wherein the first camera coordinate is a camera coordinate under a model coordinate system; determining a second camera coordinate corresponding to each image according to shooting position information of the plurality of images, wherein the second camera coordinate is a camera coordinate under a geodetic coordinate system; determining a proportional relation between a model coordinate system and a geodetic coordinate system according to a first camera coordinate and a second camera coordinate corresponding to the plurality of images; the proportional relation is used for converting the model size of the target object under the model coordinate system into the physical size of the target object. Based on the proportional relationship, the model size of the target object can be converted into the physical size of the target object, so that the real size of the target object can be determined.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, a processing apparatus, and a storage medium.
Background
Three-dimensional modeling is widely used in various industries and fields and is generally performed from captured images; image-based three-dimensional modeling is therefore also a research hotspot.
In the related art, a plurality of images of an object captured by a camera are acquired, three-dimensional modeling is performed according to the plurality of images, a three-dimensional model of the object is obtained, and a size calculated based on the three-dimensional model of the object is in a model coordinate system. In the related art, the true size of an object cannot be determined.
Disclosure of Invention
The present invention has been made in view of the above-mentioned drawbacks of the related art, and an object of the present invention is to provide an image processing method, apparatus, processing device, and storage medium, which solve the above-mentioned problems occurring in the related art.
In order to achieve the above purpose, the technical scheme adopted by the embodiment of the invention is as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a plurality of images of a target object under a plurality of view angles and first camera coordinates corresponding to each image, wherein the first camera coordinates are camera coordinates under a model coordinate system;
determining a second camera coordinate corresponding to each image according to the shooting position information of the plurality of images, wherein the second camera coordinate is a camera coordinate under a geodetic coordinate system;
determining a proportional relation between the model coordinate system and the geodetic coordinate system according to the first camera coordinates and the second camera coordinates corresponding to the plurality of images; the proportional relation is used for converting the model size of the target object under the model coordinate system into the physical size of the target object.
Optionally, the determining, according to the shooting position information of the plurality of images, the second camera coordinate corresponding to each image includes:
determining shooting position information of the plurality of images from attribute information of the plurality of images, wherein the shooting position information of the plurality of images comprises: longitude, latitude and altitude at which the camera was located when each of the images was taken;
and determining a second camera coordinate corresponding to each image according to the longitude, latitude and altitude of the camera when each image is shot.
Optionally, the acquiring a plurality of images of the target object under a plurality of view angles and the first camera coordinates corresponding to each image includes:
acquiring the plurality of images for the target object at a plurality of viewing angles;
determining the model coordinate system and camera external parameters corresponding to each image under the model coordinate system according to the plurality of images;
taking shooting position parameters in the camera external parameters corresponding to each image as first camera coordinates corresponding to each image.
Optionally, the determining the proportional relationship between the model coordinate system and the geodetic coordinate system according to the first camera coordinates and the second camera coordinates corresponding to the plurality of images includes:
calculating camera distances between every two images in the plurality of images under the model coordinate system according to first camera coordinates corresponding to the plurality of images to obtain a plurality of first distances;
calculating camera distances between every two images in the plurality of images under the geodetic coordinate system according to second camera coordinates corresponding to the plurality of images to obtain a plurality of second distances;
and calculating the proportional relation according to the first distances and the second distances.
Optionally, the calculating the proportional relation according to the first distance and the second distance includes:
calculating the ratio of the first distances to the second distances to obtain a plurality of initial ratios;
screening the initial ratios to obtain target ratios;
and taking the average value of the target ratios as the proportional relation.
Optionally, the screening the plurality of initial ratios to obtain a plurality of target ratios includes:
sequencing the plurality of initial ratios to obtain a sequencing result;
and deleting the initial ratio which does not meet the preset condition in the sequencing result to obtain the target ratios.
Optionally, the method further comprises:
constructing a three-dimensional model of the target object under the model coordinate system according to the plurality of images;
calculating the model size of the target object under the model coordinate system;
and obtaining the physical size of the target object under the geodetic coordinate system according to the model size of the target object under the model coordinate system and the proportional relation.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
the acquisition module is used for acquiring a plurality of images of the target object under a plurality of view angles and first camera coordinates corresponding to each image, wherein the first camera coordinates are camera coordinates under a model coordinate system;
the determining module is used for determining second camera coordinates corresponding to each image according to the shooting position information of the plurality of images, wherein the second camera coordinates are camera coordinates under a geodetic coordinate system; determining a proportional relation between the model coordinate system and the geodetic coordinate system according to the first camera coordinates and the second camera coordinates corresponding to the plurality of images; the proportional relation is used for converting the model size of the target object under the model coordinate system into the physical size of the target object.
Optionally, the determining module is specifically configured to determine, from attribute information of the plurality of images, shooting position information of the plurality of images, where the shooting position information of the plurality of images includes: longitude, latitude and altitude at which the camera was located when each of the images was taken; and determining a second camera coordinate corresponding to each image according to the longitude, latitude and altitude of the camera when each image is shot.
Optionally, the acquiring module is specifically configured to acquire the plurality of images for the target object under a plurality of viewing angles; determining the model coordinate system and camera external parameters corresponding to each image under the model coordinate system according to the plurality of images; taking shooting position parameters in the camera external parameters corresponding to each image as first camera coordinates corresponding to each image.
Optionally, the determining module is specifically configured to calculate, according to first camera coordinates corresponding to the plurality of images, a camera distance between each two images in the plurality of images in the model coordinate system, so as to obtain a plurality of first distances; calculating camera distances between every two images in the plurality of images under the geodetic coordinate system according to second camera coordinates corresponding to the plurality of images to obtain a plurality of second distances; and calculating the proportional relation according to the first distances and the second distances.
Optionally, the determining module is specifically configured to calculate ratios between the plurality of first distances and the plurality of second distances to obtain a plurality of initial ratios; screening the initial ratios to obtain target ratios; and taking the average value of the target ratios as the proportional relation.
Optionally, the determining module is specifically configured to sort the plurality of initial ratios to obtain a sorting result; and deleting the initial ratio which does not meet the preset condition in the sequencing result to obtain the target ratios.
Optionally, the apparatus further includes:
the construction module is used for constructing a three-dimensional model of the target object under the model coordinate system according to the plurality of images;
the calculation module is used for calculating the model size of the target object under the model coordinate system; and obtaining the physical size of the target object under the geodetic coordinate system according to the model size of the target object under the model coordinate system and the proportional relation.
In a third aspect, an embodiment of the present invention further provides a processing apparatus, including: a memory storing a computer program executable by the processor, and a processor implementing the image processing method according to any one of the above first aspects when the processor executes the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, on which a computer program is stored, which when read and executed, implements the image processing method according to any one of the first aspects.
The beneficial effects of the invention are as follows: the embodiment of the invention provides an image processing method, which comprises the following steps: acquiring a plurality of images of a target object under a plurality of view angles and a first camera coordinate corresponding to each image, wherein the first camera coordinate is a camera coordinate under a model coordinate system; determining a second camera coordinate corresponding to each image according to shooting position information of the plurality of images, wherein the second camera coordinate is a camera coordinate under a geodetic coordinate system; determining a proportional relation between a model coordinate system and a geodetic coordinate system according to a first camera coordinate and a second camera coordinate corresponding to the plurality of images; the proportional relation is used for converting the model size of the target object under the model coordinate system into the physical size of the target object. The first camera coordinate is a camera coordinate in a model coordinate system, the second camera coordinate is a camera coordinate in a geodetic coordinate system, and the model size of the target object can be converted into the physical size of the target object based on the proportional relation obtained by the first camera coordinate and the second camera coordinate corresponding to the plurality of images, so that the real size of the target object can be determined.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 2 is a second schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 3 is a third schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 4 is a fourth schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 5 is a fifth schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 6 is a sixth schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 7 is a seventh schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a processing apparatus according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the description of the present application, it should be noted that terms such as "upper" and "lower", where used, indicate orientations or positional relationships based on those shown in the drawings or those in which the product of the application is normally used. They are used only for convenience and simplicity of description and do not indicate or imply that the referenced apparatus or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present application.
Furthermore, the terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, without conflict, features in embodiments of the present application may be combined with each other.
The embodiment of the application provides an image processing method, which is applied to processing equipment, wherein the processing equipment is terminal equipment or a server, and the terminal equipment can be any one of the following: desktop computers, notebook computers, tablet computers, smart phones.
An image processing method provided in the embodiment of the present application is explained below.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention, as shown in fig. 1, the method may include:
s101, acquiring a plurality of images of a target object under a plurality of view angles and first camera coordinates corresponding to each image.
The first camera coordinates are camera coordinates in a model coordinate system.
In some embodiments, a camera is used to capture a target object at multiple view angles, so as to obtain multiple images captured by the camera, each two adjacent images in the multiple images have an overlapping area, a model coordinate system is determined according to the multiple images, and then camera coordinates corresponding to each image in the model coordinate system, namely, first camera coordinates, are determined.
In addition, the first camera coordinates corresponding to each image are used for representing the camera position when each image is shot under the model coordinate system.
It should be noted that the model coordinate system is the coordinate system established during the modeling process: three-dimensional modeling may be performed according to the plurality of images to obtain a three-dimensional model of the target object, and the model coordinate system established in that modeling process is a three-dimensional coordinate system which may be used to locate the position and orientation of the three-dimensional model of the target object.
S102, determining second camera coordinates corresponding to each image according to shooting position information of the plurality of images.
Wherein the second camera coordinates are camera coordinates in the geodetic coordinate system.
In the embodiment of the application, each image in the plurality of images has corresponding shooting position information, and the shooting position information of each image is used for indicating the real position of the camera when the image is shot; and processing according to the shooting position information of the plurality of images to obtain camera coordinates corresponding to each image in the geodetic coordinate system, namely a second camera coordinate.
In addition, the geodetic coordinate system is a coordinate system established in geodetic surveying by taking a reference ellipsoid as the datum surface, and the position of a point is expressed by geodetic longitude, geodetic latitude and geodetic height; an ellipsoid whose shape, size and orientation have been determined is called the reference ellipsoid. The second camera coordinates corresponding to each image are used for representing the camera position when each image is shot under the geodetic coordinate system.
S103, determining a proportional relation between the model coordinate system and the geodetic coordinate system according to the first camera coordinates and the second camera coordinates corresponding to the plurality of images.
The proportional relation is used for converting the model size of the target object under the model coordinate system into the physical size of the target object.
In some embodiments, a preset algorithm is adopted, calculation is performed according to first camera coordinates and second camera coordinates corresponding to the plurality of images, and a proportional relation between the model coordinate system and the geodetic coordinate system is determined.
It should be noted that, since the first camera coordinate corresponding to each image refers to the camera position when each image is captured under the model coordinate system, and the second camera coordinate corresponding to each image refers to the camera position when each image is captured under the geodetic coordinate system, the ratio between the model coordinate system and the geodetic coordinate system can be represented according to the ratio relationship determined by the first camera coordinate and the second camera coordinate.
In addition, the proportional relationship may also be referred to as a dimension proportional relationship, and the model size of the target object under the model coordinate system may be converted into the physical size of the target object by adopting the proportional relationship, where the model size of the target object under the model coordinate system refers to the size of the target object in three-dimensional modeling, and the physical size of the target object refers to the real size of the target object under the geodetic coordinate system.
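As a compact way to write the relationship just described (the symbols d_ij, D_ij, S and K are introduced here only for this summary; the ratio is written with the geodetic distance in the numerator so that multiplying a model size by K yields a physical size, consistent with how the proportional relation is used in the rest of this description):

```latex
k_{ij} = \frac{D_{ij}}{d_{ij}}, \qquad
K = \frac{1}{|S|} \sum_{(i,j) \in S} k_{ij}, \qquad
\text{physical size} = K \times \text{model size}
```

where d_ij and D_ij are the camera distances between images i and j under the model coordinate system and the geodetic coordinate system respectively, and S is the set of image pairs whose ratios are retained after screening.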
In summary, an embodiment of the present invention provides an image processing method, including: acquiring a plurality of images of a target object under a plurality of view angles and a first camera coordinate corresponding to each image, wherein the first camera coordinate is a camera coordinate under a model coordinate system; determining a second camera coordinate corresponding to each image according to shooting position information of the plurality of images, wherein the second camera coordinate is a camera coordinate under a geodetic coordinate system; determining a proportional relation between a model coordinate system and a geodetic coordinate system according to a first camera coordinate and a second camera coordinate corresponding to the plurality of images; the proportional relation is used for converting the model size of the target object under the model coordinate system into the physical size of the target object. The first camera coordinate is a camera coordinate in a model coordinate system, the second camera coordinate is a camera coordinate in a geodetic coordinate system, and the model size of the target object can be converted into the physical size of the target object based on the proportional relation obtained by the first camera coordinate and the second camera coordinate corresponding to the plurality of images, so that the real size of the target object can be determined.
Optionally, fig. 2 is a second flowchart of an image processing method according to the embodiment of the present invention, as shown in fig. 2, a process of determining, in S102, a second camera coordinate corresponding to each image according to shooting position information of the plurality of images may include:
s201, determining shooting position information of a plurality of images from attribute information of the plurality of images.
Wherein the shooting position information of the plurality of images includes: the longitude, latitude, and altitude at which the camera was located when each image was taken.
In some embodiments, Exif (Exchangeable image file format) information of each image is acquired. Exif is a file format designed specifically for photographs taken by digital cameras; the Exif package of each image contains the attribute information and the shooting data of the image. The Exif package of each image also contains GPS (Global Positioning System) information, which can represent the longitude, latitude and altitude at which the camera was located when each image was taken.
In the embodiment of the application, the attribute information of the plurality of images is acquired from the Exif package of the plurality of images, and then the shooting position information of the plurality of images is determined from the attribute information of the plurality of images.
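As an illustrative sketch of S201 (not taken from the patent; the helper names and the exact GPS tag handling are assumptions, and real Exif data may encode the altitude reference differently), the shooting position information can be read from an image's Exif package with Pillow as follows:

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS


def dms_to_degrees(dms):
    """Convert an Exif (degrees, minutes, seconds) tuple to decimal degrees."""
    d, m, s = (float(v) for v in dms)
    return d + m / 60.0 + s / 3600.0


def read_shooting_position(image_path):
    """Return (longitude, latitude, altitude) from the image's Exif GPS info, or None."""
    exif = Image.open(image_path)._getexif() or {}
    raw_gps = exif.get(34853)  # 34853 (0x8825) is the GPSInfo IFD tag
    if not raw_gps:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in raw_gps.items()}

    lat = dms_to_degrees(gps["GPSLatitude"])
    if gps.get("GPSLatitudeRef", "N") == "S":
        lat = -lat
    lon = dms_to_degrees(gps["GPSLongitude"])
    if gps.get("GPSLongitudeRef", "E") == "W":
        lon = -lon
    alt = float(gps.get("GPSAltitude", 0.0))
    if gps.get("GPSAltitudeRef", 0) == 1:  # 1 conventionally means below sea level
        alt = -alt
    return lon, lat, alt
```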
S202, determining second camera coordinates corresponding to each image according to the longitude, latitude and altitude of the camera when each image is shot.
The longitude, latitude and altitude of the camera when each image is shot are converted to obtain the camera coordinates of the camera under the geodetic coordinate system when each image is shot, namely the second camera coordinates corresponding to each image.
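The conversion itself is not fixed by the patent; one common, assumed choice is to map geodetic longitude, latitude and altitude to Earth-centred, Earth-fixed (ECEF) Cartesian coordinates on the WGS84 reference ellipsoid, so that camera distances under the geodetic coordinate system can later be measured as ordinary Euclidean distances:

```python
import math

# WGS84 ellipsoid constants (the reference ellipsoid mentioned above).
WGS84_A = 6378137.0                   # semi-major axis in metres
WGS84_F = 1.0 / 298.257223563         # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)  # first eccentricity squared


def geodetic_to_ecef(lon_deg, lat_deg, alt_m):
    """Convert geodetic longitude/latitude (degrees) and altitude (metres)
    to Earth-centred, Earth-fixed Cartesian coordinates (metres)."""
    lon = math.radians(lon_deg)
    lat = math.radians(lat_deg)
    # Prime-vertical radius of curvature at this latitude.
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z  # used as the "second camera coordinates" in this sketch
```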
In summary, according to the GPS information in the Exif packet of each image, the second camera coordinates corresponding to each image can be flexibly and accurately determined.
Optionally, fig. 3 is a third schematic flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 3, the process of acquiring, in S101, the plurality of images of the target object at the plurality of viewing angles and the first camera coordinates corresponding to each image may include:
s301, acquiring a plurality of images of a target object under a plurality of view angles.
S302, determining the model coordinate system and the camera external parameters corresponding to each image under the model coordinate system according to the plurality of images.
The camera shoots the target object at a plurality of view angles to obtain the plurality of images, every two adjacent images in the plurality of images have an overlapping area, and the model coordinate system is determined according to the plurality of images.
It should be noted that the camera external parameters corresponding to each image under the model coordinate system include: the shooting angle parameter and the shooting position parameter corresponding to each image under the model coordinate system.
S303, taking shooting position parameters in camera external parameters corresponding to each image as first camera coordinates corresponding to each image.
In the embodiment of the application, the shooting position parameter corresponding to each image is extracted from the camera external parameter corresponding to each image, the shooting position parameter corresponding to each image refers to a parameter under a model coordinate system, and the shooting position parameter corresponding to each image under the model coordinate system refers to a camera coordinate under the model coordinate system, so that the first camera coordinate corresponding to each image is obtained.
In summary, the model coordinate system and the camera external parameters corresponding to each image under the model coordinate system are determined according to the plurality of images, and the shooting position parameters in the camera external parameters corresponding to each image are taken as the first camera coordinates corresponding to each image, so that the first camera coordinates corresponding to each image can be determined flexibly and efficiently.
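For completeness, assuming the camera external parameters are expressed in the usual world-to-camera form (a 3x3 rotation R and a translation t in the model coordinate system), the shooting position used as the first camera coordinate can be recovered as C = -Rᵀt; this is only a sketch of one common convention, not a requirement of the patent:

```python
import numpy as np


def camera_center_from_extrinsics(rotation, translation):
    """Given world-to-camera extrinsics (3x3 rotation R, 3-vector t) expressed in
    the model coordinate system, return the camera centre C = -R^T t, i.e. the
    shooting position used here as the first camera coordinate."""
    r = np.asarray(rotation, dtype=float).reshape(3, 3)
    t = np.asarray(translation, dtype=float).reshape(3)
    return -r.T @ t
```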
Optionally, fig. 4 is a fourth schematic flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 4, the process of determining, in S103, the proportional relation between the model coordinate system and the geodetic coordinate system according to the first camera coordinates and the second camera coordinates corresponding to the plurality of images may include:
s401, calculating camera distances between every two images in the images according to first camera coordinates corresponding to the images in the model coordinate system to obtain a plurality of first distances.
The number of the plurality of images may be n, and the first camera coordinates corresponding to the plurality of images may be expressed as (x_i, y_i, z_i), where i = 1, …, n.
In some embodiments, the plurality of first distances may be calculated using the following formula:
d_ij = √((x_i - x_j)² + (y_i - y_j)² + (z_i - z_j)²)
where i, j = 1, …, n and i ≠ j; when the number of the plurality of images is n, n(n-1)/2 first distances can be obtained.
S402, calculating camera distances between every two images in the plurality of images under the geodetic coordinate system according to second camera coordinates corresponding to the plurality of images, so as to obtain a plurality of second distances.
Wherein the number of the plurality of first distances and the number of the plurality of second distances are the same, that is, n (n-1)/2 second distances can be obtained when the number of the plurality of images is n.
It should be noted that the number of the plurality of images may be n, and the second camera coordinates corresponding to the plurality of images may be expressed as (X_i, Y_i, Z_i), where i = 1, …, n.
In some embodiments, the plurality of second distances may be calculated using the following formula:
D_ij = √((X_i - X_j)² + (Y_i - Y_j)² + (Z_i - Z_j)²)
where i, j = 1, …, n and i ≠ j.
S403, calculating a proportional relation according to the first distances and the second distances.
In this embodiment of the present application, the first distances and the second distances are in one-to-one correspondence, and the first distances and the second distances calculated for the same two adjacent images are corresponding. The proportional relationship may be calculated from the plurality of first distances, the plurality of second distances, and the correspondence relationship therebetween.
By way of example, the plurality of images includes: a, b, c; the first camera coordinates corresponding to the plurality of images include: a1, b1, c1, and a first distance between a1 and b1, a first distance between a1 and c1, and a first distance between b1 and c1 may be calculated. The second camera coordinates corresponding to the plurality of images include: a2, b2, c2, and a second distance between a2 and b2, a second distance between a2 and c2, and a second distance between b2 and c2 may be calculated. The first distance between a1 and b1 corresponds to the second distance between a2 and b2, and similarly, the first distance between a1 and c1 corresponds to the second distance between a2 and c2.
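One possible way to implement S401 and S402 together (an assumed sketch; the function name is illustrative) is to stack the first and second camera coordinates into two (n, 3) arrays and let scipy enumerate all n(n-1)/2 unordered pairs in the same order, which makes the first and second distances correspond one-to-one as described above:

```python
import numpy as np
from scipy.spatial.distance import pdist


def pairwise_camera_distances(first_coords, second_coords):
    """first_coords / second_coords: (n, 3) arrays of camera positions in the
    model and geodetic coordinate systems. Returns the n(n-1)/2 first and
    second distances in matching pair order."""
    first_coords = np.asarray(first_coords, dtype=float)
    second_coords = np.asarray(second_coords, dtype=float)
    d = pdist(first_coords)       # first distances d_ij (model coordinate system)
    big_d = pdist(second_coords)  # second distances D_ij (geodetic coordinate system)
    return d, big_d
```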
Optionally, fig. 5 is a fifth schematic flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 5, the process of calculating the proportional relation according to the plurality of first distances and the plurality of second distances in S403 may include:
s501, calculating the ratio of the first distances to the second distances to obtain a plurality of initial ratios.
The first distance and the second distance obtained by calculation have a corresponding relationship.
In some embodiments, a second distance corresponding to each of the plurality of first distances is determined, and a ratio between each first distance and the corresponding second distance is calculated, resulting in a plurality of initial ratios.
It is noted that if the number of images is n, the number of initial ratios is n (n-1)/2.
It should be noted that the following formula may be used to calculate the plurality of initial ratios:
k_ij = D_ij / d_ij
wherein d_ij represents the plurality of first distances, D_ij represents the plurality of second distances, k_ij represents the plurality of initial ratios, and i, j = 1, …, n with i ≠ j, where n is the number of images; since i ≠ j, the camera positions of the two images differ and division by zero does not normally occur.
S502, screening a plurality of initial ratios to obtain a plurality of target ratios;
s503, taking an average value of a plurality of target ratios as a proportional relation.
Wherein the proportional relationship may be denoted as K.
In the embodiment of the application, screening is performed on a plurality of initial ratios, part of the initial ratios in the plurality of initial ratios is deleted, and the rest initial ratios are used as a plurality of target ratios; and calculating the sum value of the target ratios and the number of the target ratios, and dividing the sum value of the target ratios by the number of the target ratios to obtain a proportional relationship.
Optionally, fig. 6 is a sixth schematic flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 6, the process of screening the plurality of initial ratios to obtain the plurality of target ratios in S502 may include:
s601, sorting the initial ratios to obtain a sorting result.
Sequencing a plurality of initial ratios from large to small to obtain a sequencing result; or sorting the initial ratios from small to large to obtain a sorting result.
S602, deleting initial ratios which do not meet preset conditions in the sorting result to obtain a plurality of target ratios.
In some embodiments, the initial ratios in a preset leading portion and a preset trailing portion of the sorting result are deleted, and the initial ratios in the middle portion of the sorting result are taken as the plurality of target ratios; when the number of the plurality of images is n, the number of the plurality of target ratios may be n(n-1)/4.
The preset portion may be a quarter portion, a fifth portion, or may be set according to an actual requirement or an empirical value, which is not specifically limited in the embodiment of the present application.
In the embodiment of the application, since the Exif packet of each image and the external parameters of the camera have errors, the calculated initial ratios also have errors, so that the initial ratios which do not meet the preset conditions in the sequencing result need to be deleted, so that the target ratios are obtained more accurately, and the finally determined proportional relationship is also more accurate.
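A minimal sketch of S501 to S503 under the quarter-trimming example given above (the function name and the trim fraction are assumptions; the ratio is taken as second distance over first distance so that the resulting proportional relation K converts model sizes into physical sizes by multiplication, consistent with the use of K described below):

```python
import numpy as np


def proportional_relation(first_distances, second_distances, trim_fraction=0.25):
    """Initial ratios k_ij = D_ij / d_ij, sorted, trimmed at both ends,
    then averaged into the proportional relation K."""
    d = np.asarray(first_distances, dtype=float)
    big_d = np.asarray(second_distances, dtype=float)
    ratios = np.sort(big_d / d)                    # initial ratios, sorted ascending
    cut = int(len(ratios) * trim_fraction)         # how many to drop at each end
    target = ratios[cut:len(ratios) - cut] if cut else ratios  # target ratios
    return float(target.mean())                    # proportional relation K
```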
Optionally, fig. 7 is a seventh schematic flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 7, the method may further include:
s701, constructing a three-dimensional model of the target object under a model coordinate system according to the plurality of images.
The three-dimensional model of the target object is constructed according to a plurality of images of the target object at a plurality of view angles, and the model coordinate system is a coordinate system established by taking a vertex in the three-dimensional model of the target object as an origin.
It should be noted that the three-dimensional model of the target object may be a NeRF (Neural Radiance Field, neural radiation field) model of the target object, or may be other types of three-dimensional models, which are not limited in particular in the embodiment of the present application.
S702, calculating the model size of the target object under the model coordinate system.
S703, obtaining the physical size of the target object in the geodetic coordinate system according to the model size of the target object in the model coordinate system and the proportional relation.
In some embodiments, according to the three-dimensional model of the target object in the model coordinate system, calculating the model size of the three-dimensional model of the target object in the model coordinate system, and multiplying the model size by the proportional relationship to obtain the physical size of the target object in the geodetic coordinate system, namely the real size of the target object.
In practical application, a plurality of pictures aiming at the hill are acquired, a three-dimensional model of the hill is constructed, the model size of the three-dimensional model of the hill under a model coordinate system is calculated, and the model size of the three-dimensional model of the hill is multiplied by a proportional relation to obtain the physical size of the hill under a geodetic coordinate system, namely the real size of the hill.
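Finally, S703 reduces to a single multiplication; the numbers in the example below are purely hypothetical and only illustrate the unit conversion from model units to metres:

```python
def to_physical_size(model_size, k):
    """Convert a size measured in the model coordinate system into a physical
    size under the geodetic coordinate system (e.g. metres), as in S703."""
    return model_size * k


# Hypothetical example: a hill whose three-dimensional model is 0.8 model units
# tall, with a proportional relation K = 1250.0, is estimated at 1000 metres.
hill_height_m = to_physical_size(0.8, 1250.0)
```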
In summary, the embodiment of the present application provides an image processing method, which obtains a plurality of images of a target object under a plurality of view angles, and a first camera coordinate corresponding to each image, where the first camera coordinate is a camera coordinate under a model coordinate system; determining a second camera coordinate corresponding to each image according to shooting position information of the plurality of images, wherein the second camera coordinate is a camera coordinate under a geodetic coordinate system; determining a proportional relation between a model coordinate system and a geodetic coordinate system according to a first camera coordinate and a second camera coordinate corresponding to the plurality of images; the proportional relation is used for converting the model size of the target object under the model coordinate system into the physical size of the target object. The method comprises the steps of obtaining a first camera coordinate and a second camera coordinate corresponding to a plurality of images, wherein the first camera coordinate is a camera coordinate under a model coordinate system, the second camera coordinate is a camera coordinate under a geodetic coordinate system, and based on a proportional relationship obtained by the first camera coordinate and the second camera coordinate corresponding to the plurality of images, the model size of a target object can be converted into the physical size of the target object, so that the real size of the target object can be determined, and the whole implementation process is convenient and efficient.
The following describes an image processing apparatus, a processing device, a storage medium, etc. for executing the image processing method provided in the present application, and specific implementation processes and technical effects thereof refer to relevant contents of the foregoing image processing method, which are not described in detail below.
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention, as shown in fig. 8, the apparatus includes:
an obtaining module 801, configured to obtain a plurality of images of a target object under a plurality of view angles, and a first camera coordinate corresponding to each image, where the first camera coordinate is a camera coordinate under a model coordinate system;
a determining module 802, configured to determine, according to the capturing position information of the plurality of images, a second camera coordinate corresponding to each of the images, where the second camera coordinate is a camera coordinate in a geodetic coordinate system; determining a proportional relation between the model coordinate system and the geodetic coordinate system according to the first camera coordinates and the second camera coordinates corresponding to the plurality of images; the proportional relation is used for converting the model size of the target object under the model coordinate system into the physical size of the target object.
Optionally, the determining module 802 is specifically configured to determine, from attribute information of the plurality of images, shooting location information of the plurality of images, where the shooting location information of the plurality of images includes: longitude, latitude and altitude at which the camera was located when each of the images was taken; and determining a second camera coordinate corresponding to each image according to the longitude, latitude and altitude of the camera when each image is shot.
Optionally, the acquiring module 801 is specifically configured to acquire the plurality of images of the target object under a plurality of viewing angles; determining the model coordinate system and camera external parameters corresponding to each image under the model coordinate system according to the plurality of images; taking shooting position parameters in the camera external parameters corresponding to each image as first camera coordinates corresponding to each image.
Optionally, the determining module 802 is specifically configured to calculate, according to first camera coordinates corresponding to the plurality of images, a camera distance between each two images in the plurality of images in the model coordinate system, so as to obtain a plurality of first distances; calculating camera distances between every two images in the plurality of images under the geodetic coordinate system according to second camera coordinates corresponding to the plurality of images to obtain a plurality of second distances; and calculating the proportional relation according to the first distances and the second distances.
Optionally, the determining module 802 is specifically configured to calculate ratios between the first distances and the second distances to obtain a plurality of initial ratios; screening the initial ratios to obtain target ratios; and taking the average value of the target ratios as the proportional relation.
Optionally, the determining module 802 is specifically configured to sort the plurality of initial ratios to obtain a sorting result; and deleting the initial ratio which does not meet the preset condition in the sequencing result to obtain the target ratios.
Optionally, the apparatus further includes:
the construction module is used for constructing a three-dimensional model of the target object under the model coordinate system according to the plurality of images;
the calculation module is used for calculating the model size of the target object under the model coordinate system; and obtaining the physical size of the target object under the geodetic coordinate system according to the model size of the target object under the model coordinate system and the proportional relation.
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), one or more microprocessors (digital signal processor, abbreviated as DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), or the like. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 9 is a schematic structural diagram of a processing apparatus according to an embodiment of the present invention, as shown in fig. 9, where the processing apparatus includes: processor 901, memory 902.
The memory 902 is used for storing a program, and the processor 901 calls the program stored in the memory 902 to execute the above method embodiment. The specific implementation manner and the technical effect are similar, and are not repeated here.
Optionally, the present invention also provides a program product, such as a computer readable storage medium, comprising a program for performing the above-described method embodiments when being executed by a processor.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (english: processor) to perform some of the steps of the methods according to the embodiments of the invention. And the aforementioned storage medium includes: u disk, mobile hard disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. An image processing method, comprising:
acquiring a plurality of images of a target object under a plurality of view angles and first camera coordinates corresponding to each image, wherein the first camera coordinates are camera coordinates under a model coordinate system;
determining a second camera coordinate corresponding to each image according to the shooting position information of the plurality of images, wherein the second camera coordinate is a camera coordinate under a geodetic coordinate system;
determining a proportional relation between the model coordinate system and the geodetic coordinate system according to the first camera coordinates and the second camera coordinates corresponding to the plurality of images; the proportional relation is used for converting the model size of the target object under the model coordinate system into the physical size of the target object.
2. The method according to claim 1, wherein determining the second camera coordinates corresponding to each image according to the capturing position information of the plurality of images includes:
determining shooting position information of the plurality of images from attribute information of the plurality of images, wherein the shooting position information of the plurality of images comprises: longitude, latitude and altitude at which the camera was located when each of the images was taken;
and determining a second camera coordinate corresponding to each image according to the longitude, latitude and altitude of the camera when each image is shot.
3. The method of claim 1, wherein the acquiring a plurality of images for the target object at a plurality of viewing angles, and a first camera coordinate corresponding to each image, comprises:
acquiring the plurality of images for the target object at a plurality of viewing angles;
determining the model coordinate system and camera external parameters corresponding to each image under the model coordinate system according to the plurality of images;
taking shooting position parameters in the camera external parameters corresponding to each image as first camera coordinates corresponding to each image.
4. The method of claim 1, wherein determining the proportional relationship between the model coordinate system and the geodetic coordinate system from the first camera coordinates and the second camera coordinates corresponding to the plurality of images comprises:
calculating camera distances between every two images in the plurality of images under the model coordinate system according to first camera coordinates corresponding to the plurality of images to obtain a plurality of first distances;
calculating camera distances between every two images in the plurality of images under the geodetic coordinate system according to second camera coordinates corresponding to the plurality of images to obtain a plurality of second distances;
and calculating the proportional relation according to the first distances and the second distances.
5. The method of claim 4, wherein said calculating said proportional relationship from said first distance and said second distance comprises:
calculating the ratio of the first distances to the second distances to obtain a plurality of initial ratios;
screening the initial ratios to obtain target ratios;
and taking the average value of the target ratios as the proportional relation.
6. The method of claim 5, wherein said screening the plurality of initial ratios to obtain a plurality of target ratios comprises:
sequencing the plurality of initial ratios to obtain a sequencing result;
and deleting the initial ratio which does not meet the preset condition in the sequencing result to obtain the target ratios.
7. The method according to any one of claims 1-6, further comprising:
constructing a three-dimensional model of the target object under the model coordinate system according to the plurality of images;
calculating the model size of the target object under the model coordinate system;
and obtaining the physical size of the target object under the geodetic coordinate system according to the model size of the target object under the model coordinate system and the proportional relation.
8. An image processing apparatus, comprising:
the acquisition module is used for acquiring a plurality of images of the target object under a plurality of view angles and first camera coordinates corresponding to each image, wherein the first camera coordinates are camera coordinates under a model coordinate system;
the determining module is used for determining second camera coordinates corresponding to each image according to the shooting position information of the plurality of images, wherein the second camera coordinates are camera coordinates under a geodetic coordinate system; determining a proportional relation between the model coordinate system and the geodetic coordinate system according to the first camera coordinates and the second camera coordinates corresponding to the plurality of images; the proportional relation is used for converting the model size of the target object under the model coordinate system into the physical size of the target object.
9. A processing apparatus, comprising: a memory storing a computer program executable by the processor, and a processor implementing the image processing method according to any one of the preceding claims 1-7 when the processor executes the computer program.
10. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when read and executed, implements the image processing method according to any of the preceding claims 1-7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202311393696.4A | 2023-10-25 | 2023-10-25 | Image processing method, device, processing equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202311393696.4A | 2023-10-25 | 2023-10-25 | Image processing method, device, processing equipment and storage medium

Publications (1)

Publication Number | Publication Date
---|---
CN117333528A | 2024-01-02

Family

- Family ID: 89290236

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202311393696.4A | Image processing method, device, processing equipment and storage medium | 2023-10-25 | 2023-10-25

Country Status (1)

Country | Status
---|---
CN | Application CN202311393696.4A filed 2023-10-25, published as CN117333528A, pending
Similar Documents

Publication | Title
---|---
KR101900873B1 | Method, device and system for acquiring antenna engineering parameters
CN110717861B | Image splicing method and device, electronic equipment and computer readable storage medium
CN110648283A | Image splicing method and device, electronic equipment and computer readable storage medium
CN113048980B | Pose optimization method and device, electronic equipment and storage medium
CN110703805B | Method, device and equipment for planning three-dimensional object surveying and mapping route, unmanned aerial vehicle and medium
CN111862180A | Camera group pose acquisition method and device, storage medium and electronic equipment
CN114494388A | Three-dimensional image reconstruction method, device, equipment and medium in large-view-field environment
US8509522B2 | Camera translation using rotation from device
CN115797256B | Method and device for processing tunnel rock mass structural plane information based on unmanned aerial vehicle
CN107534202B | A kind of method and apparatus measuring antenna attitude
CN112991429B | Box volume measuring method, device, computer equipment and storage medium
CN111445513A | Plant canopy volume obtaining method and device based on depth image, computer equipment and storage medium
US20240338922A1 | Fusion positioning method based on multi-type map and electronic device
CN111598930B | Color point cloud generation method and device and terminal equipment
CN113808269A | Map generation method, positioning method, system and computer readable storage medium
CN117333528A | Image processing method, device, processing equipment and storage medium
CN117235299A | Quick indexing method, system, equipment and medium for oblique photographic pictures
CN115457202B | Method, device and storage medium for updating three-dimensional model
CN113379826A | Method and device for measuring volume of logistics piece
CN113256811B | Building modeling method, building modeling apparatus, and computer-readable storage medium
CN109919998B | Satellite attitude determination method and device and terminal equipment
CN117392317B | Live three-dimensional modeling method, device, computer equipment and storage medium
CN115100535B | Satellite remote sensing image rapid reconstruction method and device based on affine camera model
CN110579169A | Stereoscopic vision high-precision measurement method based on cloud computing and storage medium
US20220222909A1 | Systems and Methods for Adjusting Model Locations and Scales Using Point Clouds
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- CB02: Change of applicant information
  - Country or region after: China
  - Address after: 8002, Floor 8, No. 36, Haidian West Street, Haidian District, Beijing, 100089
  - Applicant after: Beijing Tiantian Zhixin Semiconductor Technology Co.,Ltd.
  - Address before: 8002, Floor 8, No. 36, Haidian West Street, Haidian District, Beijing, 100089
  - Applicant before: Beijing Tiantian Microchip Semiconductor Technology Co.,Ltd.
  - Country or region before: China