CN116468779A - Image generation method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN116468779A
CN116468779A (application CN202210028353.7A)
Authority
CN
China
Prior art keywords: image, images, shooting, information, determining
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210028353.7A
Other languages: Chinese (zh)
Inventor: 张超 (Zhang Chao)
Current assignee: Beijing Xiaomi Mobile Software Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Beijing Xiaomi Mobile Software Co Ltd
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority application: CN202210028353.7A
Publication: CN116468779A


Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T7/00 - Image analysis
                    • G06T7/70 - Determining position or orientation of objects or cameras
                • G06T3/00 - Geometric image transformation in the plane of the image
                    • G06T3/40 - Scaling the whole image or part thereof
                        • G06T3/4038 - Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
                • G06T5/00 - Image enhancement or restoration
                    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
                • G06T2200/00 - Indexing scheme for image data processing or generation, in general
                    • G06T2200/32 - Indexing scheme involving image mosaicing
                • G06T2207/00 - Indexing scheme for image analysis or image enhancement
                    • G06T2207/20 - Special algorithmic details
                        • G06T2207/20212 - Image combination
                        • G06T2207/20221 - Image fusion; Image merging

Abstract

The present disclosure provides an image generation method, apparatus, computer device, and storage medium. The method includes: acquiring a plurality of images, where the images respectively correspond to a plurality of shooting positions, each image is captured by an image pickup device at its corresponding shooting position, and the image pickup device captures different images at different shooting positions; determining motion amount information of the image pickup device between the different shooting positions; extracting a corresponding plurality of images to be processed from the plurality of images according to the motion amount information; and generating a target image from the plurality of images to be processed. By referring to the motion amount information of the image pickup device between different shooting positions, the method extracts an image to be processed from each image and performs subsequent image feature recognition only on the extracted images to generate the target image. This avoids global feature recognition on every full image, effectively reduces the amount of data to be processed during image generation, shortens the time consumed by image generation, and effectively improves target image generation efficiency.

Description

Image generation method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image generating method, an image generating device, a computer device, and a storage medium.
Background
Panoramic image stitching is an important branch in the technical field of image processing, and has wide application in various industries.
In the related art, full-frame feature detection is generally performed on each of a plurality of images, feature matching is then performed on the detected features, and fusion is performed during stitching based on the feature matching result.
In this approach, full-frame feature detection over multiple images involves a large amount of data processing, which consumes considerable time and reduces the efficiency of panoramic image stitching.
Disclosure of Invention
The present disclosure aims to solve, at least to some extent, one of the technical problems in the related art.
Therefore, an object of the present disclosure is to provide an image generation method, apparatus, computer device, and storage medium that extract images to be processed from each image with reference to the motion amount information of the image pickup device between different shooting positions, and perform subsequent image feature recognition based only on the extracted images to generate a target image. Global feature recognition on each full image is thereby avoided, which effectively reduces the amount of data to be processed during image generation, shortens the time consumed by image generation, and effectively improves target image generation efficiency.
An image generation method provided by an embodiment of a first aspect of the present disclosure includes: acquiring a plurality of images, wherein the images respectively correspond to a plurality of shooting positions, the images are acquired at the corresponding shooting positions by an image pickup device, and the image pickup device acquires different images at different shooting positions; determining motion amount information of the image pickup device between different shooting positions; respectively extracting a plurality of corresponding images to be processed from the plurality of images according to the motion quantity information; and generating a target image according to the plurality of images to be processed.
According to the image generation method of the embodiment of the first aspect of the present disclosure, a plurality of images corresponding to a plurality of shooting positions are acquired, where each image is captured by the image pickup device at its corresponding shooting position and the images captured at different shooting positions are different; motion amount information of the image pickup device between the different shooting positions is determined; a corresponding plurality of images to be processed is extracted from the plurality of images according to the motion amount information; and a target image is generated from the images to be processed. Because the images to be processed are extracted with reference to the motion amount information and subsequent feature recognition is performed only on them, global feature recognition on each full image is avoided, the amount of data to be processed is effectively reduced, the time consumed by image generation is shortened, and target image generation efficiency is effectively improved.
An image generating apparatus according to an embodiment of a second aspect of the present disclosure includes: the image acquisition module is used for acquiring a plurality of images, wherein the images respectively correspond to a plurality of shooting positions, the images are acquired at the corresponding shooting positions by the image pickup device, and the image pickup device acquires different images at different shooting positions; a determining module for determining motion amount information of the image pickup device between different shooting positions; the extraction module is used for respectively extracting a plurality of corresponding images to be processed from the plurality of images according to the motion quantity information; and the generating module is used for generating a target image according to the plurality of images to be processed.
According to the image generation apparatus of the embodiment of the second aspect of the present disclosure, a plurality of images corresponding to a plurality of shooting positions are acquired, where each image is captured by the image pickup device at its corresponding shooting position and the images captured at different shooting positions are different; motion amount information of the image pickup device between the different shooting positions is determined; a corresponding plurality of images to be processed is extracted from the plurality of images according to the motion amount information; and a target image is generated from the images to be processed. The apparatus thus extracts the images to be processed with reference to the motion amount information and performs subsequent feature recognition only on them, which avoids global feature recognition on each full image, effectively reduces the amount of data to be processed, shortens the time consumed by image generation, and effectively improves target image generation efficiency.
An embodiment of a third aspect of the present disclosure proposes a computer device, including a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing an image generation method as proposed by an embodiment of the first aspect of the present disclosure when executing the program.
An embodiment of a fourth aspect of the present disclosure proposes a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements an image generation method as proposed by an embodiment of the first aspect of the present disclosure.
An embodiment of a fifth aspect of the present disclosure proposes a computer program product; when instructions in the computer program product are executed by a processor, the image generation method proposed by the embodiment of the first aspect of the present disclosure is performed.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of an image generation method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of target image generation in an embodiment of the present disclosure;
FIG. 3 is a flow chart of an image generation method according to another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of motion vector acquisition in an embodiment of the present disclosure;
FIG. 5 is a schematic view of translational scene angle overlap rate estimation in an embodiment of the present disclosure;
FIG. 6 is a flow chart of an image generation method according to another embodiment of the present disclosure;
FIG. 7 is a schematic view of rotational scene angle overlap rate estimation in an embodiment of the disclosure;
FIG. 8 is a rotational scene overlapping ROI extraction schematic in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a bias compensation flow in an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present disclosure;
fig. 11 is a schematic structural view of an image generating apparatus according to another embodiment of the present disclosure;
FIG. 12 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present disclosure and are not to be construed as limiting the present disclosure. On the contrary, the embodiments of the disclosure include all alternatives, modifications, and equivalents as may be included within the spirit and scope of the appended claims.
Fig. 1 is a flowchart of an image generating method according to an embodiment of the present disclosure.
It should be noted that, the execution body of the image generating method in this embodiment is an image generating apparatus, and the apparatus may be implemented by software and/or hardware, and the apparatus may be configured in a computer device, where the computer device may include, but is not limited to, a terminal, a server, and the like.
it should be noted that in the description of the present disclosure, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present disclosure, unless otherwise indicated, the meaning of "a plurality" is two or more.
As shown in fig. 1, the image generation method includes:
s101: and acquiring a plurality of images, wherein the images respectively correspond to a plurality of shooting positions, the images are acquired at the corresponding shooting positions by the image pickup device, and the image pickup device acquires different images at different shooting positions.
The plurality of images may be images of the subject captured at different angles of view. Images of different angles of view may be obtained by shooting the subject at different distances from a horizontal position, or at different shooting angles from the same horizontal position; this is not limited here.
The shooting position refers to the placement of the image pickup device when an image is captured. The plurality of shooting positions may be generated by moving the image pickup device in a plane, or by rotating it to different angles. Each of the plurality of images is captured by the image pickup device at its corresponding shooting position, and the images captured at different shooting positions are different.
In the embodiment of the present disclosure, when acquiring the plurality of images, the image pickup device may shoot the subject at the corresponding shooting positions. The plurality of different shooting positions may be obtained by moving the placement of the image pickup device or by adjusting its shooting angle, and the images of the subject shot at the different positions serve as the acquired plurality of images.
S102: the movement amount information of the image pickup device between different photographing positions is determined.
The motion amount information refers to the change of the shooting position of the image pickup device; it describes the positional movement and angular change of the device between different shooting positions and can be represented by motion vectors of the device between those positions, where a motion vector may include a translation vector and a rotation vector.
In the embodiment of the present disclosure, when determining the motion amount information of the image pickup device between different shooting positions, a motion sensor in a fixed relative position with the image pickup device may be provided. The motion sensor detects the positional movement and angular change of the device between the shooting positions; the detected movement and angle signals are converted into corresponding motion vectors, and the generated motion vectors serve as the motion amount information.
In other embodiments, the images captured at the shooting positions may be processed to extract the pose information of the image pickup device at each position, motion vectors between the positions may be determined from that pose information, and the generated motion vectors may serve as the motion amount information. Any other feasible manner of determining the motion amount information may also be used; this is not limited here.
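As a minimal sketch of the sensor-based approach described above (the function name, single-axis model, and fixed sample interval are illustrative assumptions, not from the patent), angular-velocity samples from a gyroscope can be integrated into a rotation angle between two shooting positions:

```python
def integrate_gyro(samples, dt):
    """Integrate single-axis angular-velocity samples (rad/s) taken at a
    fixed interval dt (s) to estimate the rotation angle between the
    shooting positions at the start and end of the sample window."""
    return sum(w * dt for w in samples)

# 100 samples of 0.5 rad/s at 10 ms intervals -> about 0.5 rad of rotation
angle = integrate_gyro([0.5] * 100, 0.01)
```

A real device would integrate all three axes and also double-integrate accelerometer readings for the translation vector; this sketch only shows the conversion of raw sensor signals into a motion quantity.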
S103: and respectively extracting a plurality of corresponding images to be processed from the plurality of images according to the motion quantity information.
The image to be processed refers to an image region within an acquired image that is used for feature detection and feature matching to generate the target image; it may be an overlapping image region among the plurality of acquired images.
In the embodiment of the present disclosure, when extracting the corresponding plurality of images to be processed from the plurality of images according to the motion amount information, the view angle overlapping rate of the images may be calculated from the translation vector and the rotation vector in the motion amount information; the overlapping regions of interest (Region of Interest, ROI) of the images may then be calculated from the overlapping rate using an overlapping-ROI calculation algorithm, and the extracted overlapping ROI regions serve as the plurality of images to be processed.
In other embodiments, the translation vector and the rotation vector in the motion amount information may be handled separately: the view angle overlapping rate corresponding to the translation vector is calculated and used, via the overlapping-ROI calculation algorithm, to extract translation-overlap ROI regions from the images, and the overlapping rate corresponding to the rotation vector is calculated and used to extract rotation-overlap ROI regions; the extracted translation-overlap and rotation-overlap ROI regions then serve as the images to be processed. The images to be processed may also be extracted according to the motion amount information in any other feasible manner; this is not limited here.
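A minimal sketch of the overlapping-ROI extraction described above for a horizontal left-to-right translation, assuming the overlapping rate has already been computed (function and variable names are illustrative, not from the patent): the right fraction of the earlier image and the left fraction of the later image are kept as the images to be processed.

```python
import numpy as np

def extract_overlap_rois(img_a, img_b, overlap_rate):
    """For a horizontal pan from img_a to img_b, keep only the columns
    expected to overlap: the right band of img_a and the left band of
    img_b, each overlap_rate * width columns wide."""
    w = img_a.shape[1]
    k = int(round(w * overlap_rate))   # width of the overlap band in pixels
    roi_a = img_a[:, w - k:]           # right band of the first image
    roi_b = img_b[:, :k]               # left band of the second image
    return roi_a, roi_b

a = np.zeros((4, 10))
b = np.ones((4, 10))
roi_a, roi_b = extract_overlap_rois(a, b, 0.3)   # 3-column bands
```

Feature detection then runs only on these bands rather than the full frames, which is the data reduction the patent claims.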
S104: and generating a target image according to the plurality of images to be processed.
The target image refers to the image obtained by stitching and fusing the plurality of acquired images based on the images to be processed; it may be a complete panoramic image stitched and fused from the acquired images.
In the embodiment of the disclosure, after the corresponding plurality of images to be processed are respectively extracted from the plurality of images according to the motion amount information, the target image may be generated according to the plurality of images to be processed.
In the embodiment of the present disclosure, when generating the target image from the plurality of images to be processed, feature detection may first be performed on the images to be processed using a feature extraction algorithm to obtain their feature points; feature matching is then performed on the feature points of the plurality of images to obtain a set of matched image feature points; and the plurality of acquired images are stitched and fused based on the matched feature points in the set to obtain the target image generated from the images to be processed.
For example, as shown in fig. 2, which is a schematic diagram of target image generation in the embodiment of the present disclosure, image 1, image 2, and image 3 may be captured by the image pickup device at different corresponding shooting positions. A motion sensor performs motion detection on the acquired images to determine the motion amount information of the device between the shooting positions; the Field of View (FOV) overlapping rate between images 1 and 2 and between images 2 and 3 is calculated from the determined motion amount information; overlapping ROI regions in images 1, 2, and 3 are extracted according to the FOV overlapping rate and used as the images to be processed; feature detection and feature matching are then performed on the images to be processed to obtain their matched feature points; and the acquired images are stitched and fused based on the matched feature points, thereby generating the target image from the plurality of images to be processed.
In other embodiments, the motion sensor may be used to estimate the motion amount of the matched feature points in the set, so as to compensate for deviations possibly introduced by the sensor; the acquired images are then fused and stitched based on the deviation-compensated matched feature points to obtain the target image. The target image may also be generated from the images to be processed in any other feasible manner; this is not limited here.
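The feature-matching step described above can be sketched as a brute-force nearest-neighbour match over descriptor vectors with a ratio test (a toy stand-in under illustrative names; the patent does not fix a particular detector, descriptor, or matcher):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Brute-force match: for each descriptor in desc_a, find its nearest
    and second-nearest neighbours in desc_b, keeping the pair only if it
    passes the ratio test. Returns a list of (index_a, index_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:      # unambiguous match only
            matches.append((i, int(best)))
    return matches

desc_a = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [5.0, 4.9], [9.0, 9.0]])
pairs = match_descriptors(desc_a, desc_b)   # -> [(0, 0), (1, 1)]
```

Running this only on the extracted overlap ROIs, rather than full frames, keeps the candidate descriptor sets small.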
In this embodiment, a plurality of images corresponding to a plurality of shooting positions are acquired, where each image is captured by the image pickup device at its corresponding shooting position and the images captured at different shooting positions are different; the motion amount information of the device between the shooting positions is determined; a corresponding plurality of images to be processed is extracted from the images according to the motion amount information; and the target image is generated from the images to be processed. By referring to the motion amount information to extract the images to be processed and performing subsequent feature recognition only on them, global feature recognition on each full image is avoided, the amount of data to be processed is effectively reduced, the time consumed by image generation is shortened, and target image generation efficiency is effectively improved.
Fig. 3 is a flowchart of an image generating method according to another embodiment of the present disclosure.
As shown in fig. 3, the image generation method includes:
s301: and acquiring a plurality of images, wherein the images respectively correspond to a plurality of shooting positions, the images are shot at the corresponding shooting positions by the image pickup device, and different shooting positions are different.
The description of S301 may be exemplified by the above embodiments, and will not be repeated here.
S302: pose information of the image pickup device at the shooting position is determined.
The pose information refers to the degrees of freedom of the shooting position of the image pickup device: translation along the axes of a rectangular coordinate system and rotation about those axes. The pose information of the device at a shooting position may include its translation information and rotation information at that position.
In the embodiment of the present disclosure, when determining the pose information of the image pickup device at a shooting position, motion detection and estimation processing may be performed on the device at the corresponding position to obtain its translation information and rotation information, which together serve as the pose information of the device at that shooting position.
S303: and determining the motion vector of the image pickup device between different shooting positions according to the plurality of pose information.
After the pose information of the image capturing device at the capturing position is determined, the embodiment of the disclosure can determine the motion vector of the image capturing device between different capturing positions according to the plurality of pose information.
In the embodiment of the disclosure, when determining the motion vector of the image capturing device between different capturing positions according to the plurality of pose information, pose estimation processing may be performed according to the translation information and the rotation information in the plurality of pose information, so as to calculate and obtain the translation vector and the rotation vector of the image capturing device between different capturing positions, and the translation vector and the rotation vector are used as the motion vector of the image capturing device between different capturing positions.
S304: the motion vector is taken as the motion amount information.
In the embodiment of the disclosure, after the pose information of the image capturing device at the capturing position is determined, and the motion vectors of the image capturing device between different capturing positions are determined according to the plurality of pose information, the obtained motion vectors can be used as the motion amount information.
For example, as shown in fig. 4, which is a schematic diagram of motion vector acquisition in the embodiment of the present disclosure, pose information a, pose information b, and pose information c of the image pickup device at positions P1, P2, and P3 may be determined. From the pose information at the different positions, the rotation vector r12 and translation vector t12 between P1 and P2 are determined as the motion vector of the device between P1 and P2, and the rotation vector r23 and translation vector t23 between P2 and P3 are determined as the motion vector between P2 and P3; the acquired motion vectors then serve as the motion amount information.
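Given absolute poses at two positions, the relative motion of fig. 4 can be computed with a standard rigid-transform composition (a sketch under illustrative names and a world-to-camera convention; the patent does not specify the representation):

```python
import numpy as np

def relative_motion(R1, t1, R2, t2):
    """Given camera poses (rotation matrix R, translation vector t, with
    x_cam = R @ x_world + t) at two shooting positions, return the rotation
    and translation taking the first camera frame to the second."""
    R12 = R2 @ R1.T            # relative rotation between the two positions
    t12 = t2 - R12 @ t1        # relative translation between the two positions
    return R12, t12

# Identity rotation at both positions: the motion reduces to a pure translation.
I = np.eye(3)
R12, t12 = relative_motion(I, np.array([0.0, 0.0, 0.0]),
                           I, np.array([1.0, 0.0, 0.0]))
```

Chaining the same computation over P1, P2, P3 yields the pairs (r12, t12) and (r23, t23) of the figure.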
S305: based on the motion amount information, a plurality of view angle overlapping ratios corresponding to the plurality of images are determined.
The view angle overlapping rate characterizes the distribution of overlapping image regions between images and can be calculated with a view angle overlapping rate calculation formula.
After the motion vectors of the image pickup device between the different shooting positions are determined from the plurality of pose information and taken as the motion amount information, the embodiment of the present disclosure may determine, from the motion amount information, the plurality of view angle overlapping rates respectively corresponding to the plurality of images.
In the embodiment of the present disclosure, the view angle overlapping rates at the shooting positions corresponding to the plurality of images may be calculated from the translation vectors and rotation vectors in the motion amount information using the view angle overlapping rate calculation formula.
Optionally, in some embodiments, when determining the multiple overlapping angles corresponding to the multiple images respectively according to the motion amount information, multiple image capturing parameters corresponding to the multiple images respectively may be determined, where the images are captured by the image capturing device at the capturing positions by using the corresponding image capturing parameters, and the overlapping angles of the images corresponding to the image capturing parameters are determined according to the image capturing parameters and the motion amount information, so that the overlapping angles of the images corresponding to the image capturing parameters may be determined according to the image capturing parameters and the motion amount information, and since the overlapping angles of the image capturing parameters and the motion amount information may be used to extract overlapping image areas in the images, so as to reduce the amount of image area data that needs to be subjected to processing such as feature detection, and effectively reduce the time consumption for feature detection on the images.
The imaging parameters include the width data information and height data information of the image, and the distance data information between the imaging object and the imaging device.
In the embodiment of the disclosure, when determining the plurality of view angle overlapping rates respectively corresponding to the plurality of images according to the motion amount information, the width data information and height data information of the plurality of images, together with the distance data information between the photographed object and the photographing device, may be acquired and taken as the plurality of photographing parameters respectively corresponding to the plurality of images, each image having been photographed by the photographing device at the corresponding photographing position using the corresponding photographing parameters. The corresponding view angle overlapping rate may then be calculated from the photographing parameters and the motion amount information by using the view angle overlapping rate calculation formula, and the calculation result is taken as the view angle overlapping rate of the image corresponding to the photographing parameters.
Optionally, in some embodiments, the motion vector is a translation vector. When determining, according to the image capturing parameters and the motion amount information, the view angle overlapping rate of the image corresponding to the image capturing parameters, the image capturing parameters may be parsed to obtain the distance between the image capturing device and the target object, where the image is obtained by the image capturing device capturing the target object based on the image capturing parameters. The image size of the image is then determined, and the view angle overlapping rate of the image corresponding to the image capturing parameters is determined according to the distance and the image size in combination with the motion amount information. In this way, when the motion vector is a translation vector and the plurality of images acquired by the image capturing device are related by a translation, the overlapping area of the plurality of images can be determined as the images to be processed, and feature detection and feature matching are performed only on the images to be processed, thereby reducing the amount of image data to be processed and the time consumed by feature detection and feature matching on the images.
The translation vector is used for representing the moving position information of the image pickup device on the plane.
In the embodiment of the disclosure, when the motion vector is acquired as the motion amount information, the motion detection may be performed on the image capturing device, and if the distance movement of the image capturing device on the plane is detected, the translation vector of the image capturing device is acquired as the motion vector.
The target object is the object imaged by the imaging device when capturing an image at the corresponding shooting position, the image being obtained by the imaging device based on the imaging parameters.
In the embodiment of the disclosure, when the motion vector is a translation vector and the view angle overlapping rate of the image corresponding to the image capturing parameters is determined according to the image capturing parameters and the motion amount information, the image capturing parameters may be parsed to obtain the distance between the image capturing device and the target object, where the unit of the distance is meters. The width data information and height data information of the image, in units of pixels, may then be determined as the image size. The view angle overlapping rate of the image corresponding to the image capturing parameters may then be determined based on the distance and the image size, using the view angle overlapping rate calculation formula in combination with the motion amount information.
For example, as shown in fig. 5, fig. 5 is a schematic diagram of view angle overlapping rate estimation for a translation scene in the embodiment of the disclosure, where pos1 and pos2 are two different shooting positions of the imaging device before and after the translation. The width data information Width and height data information Height of the image may be determined as the image size, the distance Z between the imaging device and the target object is obtained by parsing the imaging parameters, and T is the translation vector generated as the imaging device translates from pos1 to pos2. The view angle overlapping rate calculation formulas are then used to calculate the image view angle overlapping rates at pos2 and pos1 respectively, and the calculation results are taken as the view angle overlapping rates of the images corresponding to the imaging parameters.
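The overlap rate formulas themselves appear only as figures in the published text and are not reproduced here. As an illustrative sketch only, under a standard pinhole camera assumption (a translation t at distance Z from a roughly planar scene shifts the image by about f·|t|/Z pixels, with f the focal length in pixels), a translation scene overlap rate could be estimated as follows; the function name and parameters are hypothetical, not taken from the disclosure:

```python
def translation_overlap(t_x, t_y, z, width, height, focal_px):
    """Estimate the view angle overlap rate between two shots related by a
    pure translation (t_x, t_y), assuming a pinhole camera at distance z
    from a roughly planar scene. focal_px is the focal length in pixels.
    NOTE: this is a standard geometric approximation, not the patent's
    exact (unreproduced) formula."""
    # Image-plane displacement, in pixels, caused by the translation.
    dx = focal_px * abs(t_x) / z
    dy = focal_px * abs(t_y) / z
    # Fraction of the frame still shared by both shots, clamped to [0, 1].
    overlap_x = max(0.0, 1.0 - dx / width)
    overlap_y = max(0.0, 1.0 - dy / height)
    return overlap_x * overlap_y

# e.g. a 0.5 m sideways move, 10 m from the subject, 1000 px focal length:
rate = translation_overlap(0.5, 0.0, 10.0, 4000, 3000, 1000.0)
```

Under these assumptions, larger translations or shorter subject distances shrink the shared region, which is exactly why the overlap rate can bound the area needing feature detection.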
S306: and respectively extracting a plurality of corresponding images to be processed from the corresponding plurality of images according to the plurality of view angle overlapping rates.
In the embodiment of the disclosure, after determining the multiple view angle overlapping rates corresponding to the multiple images according to the motion amount information, the corresponding multiple images to be processed may be extracted from the corresponding multiple images according to the multiple view angle overlapping rates.
In the embodiment of the disclosure, when the plurality of images to be processed are extracted from the corresponding plurality of images according to the plurality of view angle overlapping rates, the overlapping regions of interest (Region Of Interest, ROI) of the plurality of images may be calculated and extracted based on the view angle overlapping rates by using an overlapping ROI calculation algorithm, and the extracted overlapping ROI areas are taken as the plurality of images to be processed corresponding to the plurality of images.
S307: and generating a target image according to the plurality of images to be processed.
The description of S307 may be exemplified with reference to the above embodiments, and will not be repeated here.
In this embodiment, a plurality of images are acquired, where the plurality of images respectively correspond to a plurality of shooting positions, each image is shot by the image pickup device at its corresponding shooting position, and the shooting positions of different images are different. Motion amount information of the image pickup device between the different shooting positions is determined, the corresponding plurality of images to be processed are respectively extracted from the plurality of images according to the motion amount information, and the target image is generated according to the plurality of images to be processed. The image to be processed in each image can thus be extracted with reference to the motion amount information of the image pickup device between the different shooting positions, and the subsequent image feature recognition processing is performed based on the extracted images to be processed to generate the target image. Global feature recognition processing on each image is avoided, thereby effectively reducing the amount of data to be processed in the image generation process, reducing the time consumed by image generation, and effectively improving the generation efficiency of the target image. In addition, the view angle overlapping rate corresponding to each acquired image can be determined according to the shooting parameters and the motion amount information.
Fig. 6 is a flowchart of an image generating method according to another embodiment of the present disclosure.
As shown in fig. 6, the image generation method includes:
S601: and acquiring a plurality of images, wherein the plurality of images respectively correspond to a plurality of shooting positions, the images are shot by the image pickup device at the corresponding shooting positions, and the shooting positions of different images are different.
S602: pose information of the image pickup device at the shooting position is determined.
S603: and determining the motion vector of the image pickup device between different shooting positions according to the plurality of pose information.
S604: the motion vector is taken as the motion amount information.
The description of S601-S604 may be exemplified by the above embodiments, and will not be repeated here.
The motion vector is a rotation vector, and the rotation vector is used for representing rotation angle information of the image pickup device.
In the embodiment of the disclosure, when the motion vector is acquired as the motion amount information, the motion detection may be performed on the image capturing apparatus, and if the rotation of the image capturing apparatus by a certain angle is detected, the rotation vector of the image capturing apparatus is acquired as the motion vector.
In the embodiment of the disclosure, after the image capturing device rotates by a certain angle, the motion sensor may be used to detect the motion of the image capturing device, and calculate the rotation vector of the image capturing device, and use the rotation vector as the motion vector.
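The disclosure does not specify how the motion sensor output is turned into the rotation vector. As a hypothetical sketch, gyroscope angular-rate samples could be accumulated into a rotation vector like this (sample format, rate, and function name are assumptions for illustration):

```python
def integrate_gyro(samples, dt):
    """Accumulate gyroscope angular-rate samples (rad/s per axis), taken at
    a fixed interval dt seconds, into a total rotation vector.
    Simple rectangular integration; a real device would use per-sample
    timestamps and drift correction."""
    rx = ry = rz = 0.0
    for wx, wy, wz in samples:
        rx += wx * dt
        ry += wy * dt
        rz += wz * dt
    return (rx, ry, rz)

# 100 samples of 0.1 rad/s about the y axis at 100 Hz -> ~0.1 rad total
rot = integrate_gyro([(0.0, 0.1, 0.0)] * 100, 0.01)
```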
S605: and determining a plurality of shooting parameters corresponding to the plurality of images respectively, wherein the images are shot at shooting positions by the shooting device by adopting the corresponding shooting parameters.
The description of S605 may be exemplified with reference to the above embodiments, and will not be repeated here.
S606: and analyzing the shooting parameters to obtain target field angle information of the shooting device corresponding to the rotation vector.
The target angle of view information is angle information used for characterizing the photographing angle of view of the imaging device.
In the embodiment of the disclosure, after determining the plurality of imaging parameters corresponding to the plurality of images, the imaging parameters may be analyzed to obtain the target angle of view information of the imaging device corresponding to the rotation vector.
In the embodiment of the disclosure, when the image capturing parameters are analyzed to obtain the target field angle information of the image capturing device corresponding to the rotation vector, the calculation processing may be performed according to the image pixel information of the image captured by the image capturing device at the corresponding capturing position in combination with the rotation vector, and the calculation result may be used as the target field angle information of the image capturing device corresponding to the rotation vector.
Optionally, in some embodiments, when analyzing the image capturing parameters to obtain the target angle of view information of the image capturing device corresponding to the rotation vector, a plurality of coordinate directions associated with the rotation vector may be determined, the image capturing parameters may be analyzed to obtain the initial angle of view information of the image capturing device, and the initial angle of view information may be analyzed to obtain a plurality of angles of view corresponding to the plurality of coordinate directions, which are used together as the target angle of view information. In this way, the accuracy of the view angle change information after the rotation of the image capturing device can be effectively improved.
The initial angle of view information is the angle of view information of the imaging device before the rotation occurs.
In the embodiment of the disclosure, when analyzing the image capturing parameters to obtain the target view angle information of the image capturing device corresponding to the rotation vector, a plurality of coordinate directions associated with the rotation vector may be determined, where the coordinate directions may be the horizontal x direction and the vertical y direction. The image capturing parameters are then analyzed according to the coordinate directions to obtain the initial view angle information of the image capturing device, the initial view angle information is analyzed to obtain a plurality of angles of view corresponding to the x direction and the y direction, and the obtained plurality of angles of view are used together as the target view angle information.
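How the per-axis angles are derived is not spelled out; a common approach, assumed here purely for illustration, uses the standard pinhole relation Fov = 2·atan(size / (2·f)) with the calibrated focal lengths in pixels:

```python
import math

def per_axis_fov(width_px, height_px, fx_px, fy_px):
    """Split a camera's field of view into horizontal (x) and vertical (y)
    components from its resolution and focal lengths in pixels, using the
    pinhole relation Fov = 2 * atan(size / (2 * f)).
    fx_px / fy_px are assumed known from camera calibration."""
    fov_x = 2.0 * math.atan(width_px / (2.0 * fx_px))
    fov_y = 2.0 * math.atan(height_px / (2.0 * fy_px))
    return fov_x, fov_y  # radians

# 4000 x 3000 sensor with a 2000 px focal length -> ~90 degrees horizontal
fov_x, fov_y = per_axis_fov(4000, 3000, 2000.0, 2000.0)
```

The pair (fov_x, fov_y) then plays the role of the target angle of view information for the rotation scene calculations that follow.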
S607: and determining the field angle overlapping rate of the image corresponding to the shooting parameters according to the target field angle information and the rotation vector.
In the embodiment of the disclosure, after the above-mentioned determination that the plurality of angles of view corresponding to the plurality of coordinate directions are used together as the target angle of view information, the overlapping rate of angles of view of the image corresponding to the imaging parameter may be determined according to the target angle of view information and the rotation vector.
In the embodiment of the disclosure, when determining the view angle overlapping rate of the image corresponding to the imaging parameter according to the target view angle information and the rotation vector, the view angle overlapping rate of the image corresponding to the imaging parameter may be calculated based on the rotation vector and the plurality of view angles by using a view angle overlapping rate calculation formula.
For example, as shown in fig. 7, fig. 7 is a schematic diagram of view angle overlapping rate estimation for a rotation scene in an embodiment of the present disclosure, where pos1 and pos2 are two different shooting positions of the imaging device before and after the rotation, R is the rotation vector, and Fov_x and Fov_y are the angles of view of the imaging device in the x direction and the y direction respectively. The view angle overlapping rate calculation formulas are used to calculate the view angle overlapping rates of the imaging device at pos1 and pos2 respectively, and the calculated view angle overlapping rates are taken as the view angle overlapping rates of the images corresponding to the imaging parameters.
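As with the translation case, the formulas in fig. 7 are image-rendered and not reproduced in the text. A common geometric approximation for a pure rotation, assumed here only as a sketch, is an overlap of roughly 1 − R/Fov per axis:

```python
def rotation_overlap(r_x, r_y, fov_x, fov_y):
    """Approximate the view angle overlap rate between two shots related by
    a pure rotation of (r_x, r_y) radians, given per-axis fields of view
    fov_x / fov_y in radians. Assumes modest rotations and ignores lens
    distortion; not the patent's exact (unreproduced) formula."""
    overlap_x = max(0.0, 1.0 - abs(r_x) / fov_x)
    overlap_y = max(0.0, 1.0 - abs(r_y) / fov_y)
    return overlap_x * overlap_y

# Rotating by a quarter of the horizontal FOV leaves ~75% overlap in x:
rate = rotation_overlap(0.25, 0.0, 1.0, 1.0)
```

A rotation equal to or larger than the field of view yields zero overlap, in which case no shared ROI exists and full-frame processing would be unavoidable.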
S608: a plurality of imaging rotation angles corresponding to the plurality of view angle overlapping ratios are determined.
In the embodiment of the disclosure, after the view angle overlapping rates of the images corresponding to the imaging parameters are determined according to the target angle of view information and the rotation vector, a plurality of imaging rotation angles respectively corresponding to the plurality of view angle overlapping rates may be determined.
In the embodiment of the disclosure, when determining the plurality of imaging rotation angles respectively corresponding to the plurality of view angle overlapping rates, the rotation angle of the imaging device may be detected by using the motion sensor to obtain a plurality of rotation angles R_x, and the detected rotation angles are taken as the plurality of imaging rotation angles corresponding to the plurality of view angle overlapping rates.
S609: a plurality of image resolution information corresponding to the plurality of images, respectively, is determined.
The image resolution information may be used to assist in extracting the image to be processed, and may be represented by width pixel data information and height pixel data information of the image.
In the embodiment of the disclosure, when determining the plurality of image resolution information corresponding to the plurality of images respectively, the image processing algorithm may be used to perform data extraction processing on the plurality of images to obtain Width pixel data information Width and Height pixel data information Height corresponding to the plurality of images, and the Width pixel data information and the Height pixel data information of the plurality of images are used as the plurality of image resolution information corresponding to the plurality of images respectively.
S610: and extracting a corresponding image to be processed from the corresponding image according to the field angle overlapping rate, the shooting rotation angle and the image resolution information.
After determining the view angle overlapping rate of the image corresponding to the image capturing parameter, the plurality of image capturing rotation angles corresponding to the plurality of view angle overlapping rates, and the plurality of image resolution information corresponding to the plurality of images, the embodiments of the present disclosure may extract the corresponding image to be processed from the corresponding image according to the view angle overlapping rate, the image capturing rotation angle, and the image resolution information.
In the embodiment of the disclosure, when the corresponding image to be processed is extracted from the corresponding image according to the view angle overlapping rate, the imaging rotation angle and the image resolution information, the overlapping ROI area may be extracted from the plurality of images by using the overlapping ROI calculation algorithm, and the extracted overlapping ROI area is used as the corresponding image to be processed extracted from the corresponding image.
For example, as shown in fig. 8, fig. 8 is a schematic diagram of overlapping ROI extraction for a rotation scene in an embodiment of the disclosure, where image 1 and image 2 are images captured by the imaging device in the rotation scene, R_x is the imaging rotation angle corresponding to the view angle overlapping rate, and the resolution of each image is Width×Height. The overlapping ROI calculation algorithm may then be used to calculate the overlapping ROI area of image 1 in the x direction and the overlapping ROI area of image 2 in the x direction, and the plurality of obtained overlapping ROI areas are taken as the corresponding images to be processed extracted from the corresponding images.
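The ROI formulas in fig. 8 are likewise image-rendered. A hedged sketch of the idea, assuming a left-to-right pan where the shared strip sits at the right edge of image 1 and the left edge of image 2 (the patent's exact formula is not reproduced):

```python
def overlap_roi_x(width, height, overlap_x, side):
    """Return the pixel box (x0, y0, x1, y1) of the overlapping strip along
    the x direction for an image of resolution width x height.
    side='right' keeps the right-hand strip (first image of a left-to-right
    pan); side='left' keeps the left-hand strip (second image).
    overlap_x is the per-axis view angle overlap rate in [0, 1]."""
    strip = int(round(width * overlap_x))  # strip width in pixels
    if side == 'right':
        return (width - strip, 0, width, height)
    return (0, 0, strip, height)

roi1 = overlap_roi_x(4000, 3000, 0.75, 'right')  # ROI for image 1
roi2 = overlap_roi_x(4000, 3000, 0.75, 'left')   # ROI for image 2
```

Cropping each frame to its box before feature detection is what delivers the data-reduction benefit described above: only the shared strip, not the full frame, is processed.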
S611: and generating a target image according to the plurality of images to be processed.
According to the embodiment of the disclosure, after the corresponding to-be-processed image is extracted from the corresponding image according to the overlapping rate of the field angle, the shooting rotation angle and the image resolution information, the target image can be generated according to the plurality of to-be-processed images.
In the embodiment of the disclosure, when generating the target image according to the plurality of images to be processed, feature detection may be performed on the plurality of images to be processed by using a feature extraction algorithm to obtain a plurality of image feature points in the images to be processed. Feature matching processing is then performed on the image feature points of the plurality of images to be processed to obtain a set containing a plurality of matched image feature points, and based on the plurality of matched image feature points in the set, the acquired plurality of corresponding images are spliced and fused to obtain the target image generated according to the plurality of images to be processed.
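The disclosure does not name a specific matcher. As a minimal sketch of the matching stage only, brute-force matching of binary descriptors (e.g. ORB-style) with a cross-check could look like this; production code would typically use a library matcher such as OpenCV's BFMatcher plus RANSAC-based homography estimation for the splicing step:

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors (bytes)."""
    return sum(bin(a ^ b).count('1') for a, b in zip(d1, d2))

def cross_check_match(desc1, desc2):
    """Brute-force descriptor matching with a cross-check: keep the pair
    (i, j) only if j is i's nearest neighbour in desc2 AND i is j's
    nearest neighbour in desc1. Returns a list of index pairs."""
    def nearest(d, pool):
        return min(range(len(pool)), key=lambda k: hamming(d, pool[k]))
    matches = []
    for i, d in enumerate(desc1):
        j = nearest(d, desc2)
        if nearest(desc2[j], desc1) == i:
            matches.append((i, j))
    return matches

# Toy 4-bit descriptors: each descriptor in `a` has one close partner in `b`.
a = [bytes([0b1111]), bytes([0b0000])]
b = [bytes([0b0001]), bytes([0b1110])]
pairs = cross_check_match(a, b)  # -> [(0, 1), (1, 0)]
```

The cross-check discards asymmetric matches cheaply; the surviving pairs form the matched feature point set used for the splicing and fusion step.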
In other embodiments, the matched image feature points in the set of matched image feature points may further be used to estimate the motion amount, so as to perform deviation compensation processing for the deviation possibly introduced by the motion sensor. The acquired plurality of corresponding images are then spliced based on the matched image feature points after the deviation compensation processing is completed, so as to obtain the target image generated according to the plurality of images to be processed.
For example, as shown in fig. 9, fig. 9 is a schematic diagram of a deviation compensation flow in an embodiment of the present disclosure. Homography estimation may be performed on matching point set 1 and matching point set 2 obtained after the matching processing, matrix decomposition may then be performed on the homography estimation result, and motion amount estimation may be performed on the decomposition result to obtain the rotational and translational motion amounts [R_12 T_12]. The motion amount compensation delta = [R_12 T_12] - [r_12 t_12] is then calculated, where [r_12 t_12] is the corresponding motion amount obtained from the motion sensor, and deviation compensation processing is performed based on this compensation so that the processed motion vector meets the requirement. The acquired plurality of images may then be spliced and fused based on the matched image feature points after the deviation compensation processing is completed, so as to obtain the target image generated according to the plurality of images to be processed.
In this embodiment, a plurality of images are acquired, where the plurality of images respectively correspond to a plurality of shooting positions, each image is shot by the image pickup device at its corresponding shooting position, and the shooting positions of different images are different. Motion amount information of the image pickup device between the different shooting positions is determined, the corresponding plurality of images to be processed are respectively extracted from the plurality of images according to the motion amount information, and the target image is generated according to the plurality of images to be processed. The image to be processed in each image can thus be extracted with reference to the motion amount information of the image pickup device between the different shooting positions, and the subsequent image feature recognition processing is performed based on the extracted images to be processed to generate the target image, which avoids global feature recognition processing on each image, effectively reduces the amount of data to be processed in the image generation process, reduces the time consumed by image generation, and effectively improves the generation efficiency of the target image. In addition, a plurality of angles of view are parsed from the initial view angle information and used together as the target view angle information, which effectively captures the view angle change after the rotation of the image pickup device; since the target view angle information can be used to calculate the view angle overlapping rate after rotation, the accuracy of the overlapping images extracted in the rotation scene is improved.
Fig. 10 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present disclosure.
As shown in fig. 10, the image generating apparatus 100 includes:
an acquiring module 1001, configured to acquire a plurality of images, where the plurality of images respectively correspond to a plurality of shooting positions, the images are acquired by an image capturing device at the corresponding shooting positions, and the image capturing device acquires different images at different shooting positions;
a determining module 1002, configured to determine motion amount information of the image capturing apparatus between different capturing positions;
an extracting module 1003, configured to extract a plurality of images to be processed from the plurality of images according to the motion amount information, respectively;
the generating module 1004 is configured to generate a target image according to the plurality of images to be processed.
In some embodiments of the present disclosure, as shown in fig. 11, fig. 11 is a schematic structural diagram of an image generating apparatus according to another embodiment of the present disclosure, where an extracting module 1003 includes:
a determining submodule 10031 for determining a plurality of view angle overlapping rates corresponding to the plurality of images respectively according to the motion amount information;
and the extraction submodule 10032 is used for respectively extracting a plurality of corresponding images to be processed from the corresponding plurality of images according to the plurality of view angle overlapping rates.
In some embodiments of the present disclosure, the determining module 1002 is configured to:
determining pose information of the image pickup device at a shooting position;
according to the plurality of pose information, determining motion vectors of the image pickup device among different shooting positions;
the motion vector is taken as the motion amount information.
In some embodiments of the present disclosure, wherein the determining submodule 10031 is configured to:
determining a plurality of shooting parameters respectively corresponding to a plurality of images, wherein the images are shot at shooting positions by a shooting device by adopting corresponding shooting parameters;
and determining the overlapping rate of the angle of view of the image corresponding to the image capturing parameters according to the image capturing parameters and the motion quantity information.
In some embodiments of the present disclosure, the motion vector is a translation vector;
wherein, confirm submodule 10031 is used for:
analyzing the shooting parameters to obtain the distance between the shooting device and the target object, wherein the image is obtained by shooting the target object by the shooting device based on the shooting parameters;
determining an image size of the image;
and according to the distance and the image size, combining the motion quantity information to determine the overlapping rate of the view angles of the images corresponding to the shooting parameters.
In some embodiments of the present disclosure, the motion vector is a rotation vector;
Wherein, confirm submodule 10031 is used for:
analyzing the shooting parameters to obtain target angle of view information of the shooting device corresponding to the rotation vector;
and determining the field angle overlapping rate of the image corresponding to the shooting parameters according to the target field angle information and the rotation vector.
In some embodiments of the present disclosure, wherein the determining submodule 10031 is configured to:
determining a plurality of coordinate directions associated with the rotation vectors;
analyzing the shooting parameters to obtain initial field angle information of the shooting device;
the initial angle-of-view information is analyzed to obtain a plurality of angles of view corresponding to the plurality of coordinate directions, and the plurality of angles of view are used as target angle-of-view information.
In some embodiments of the present disclosure, wherein the extraction sub-module 10032 is configured to:
determining a plurality of imaging rotation angles respectively corresponding to the plurality of view angle overlapping rates;
determining a plurality of image resolution information corresponding to the plurality of images, respectively;
and extracting a corresponding image to be processed from the corresponding image according to the field angle overlapping rate, the shooting rotation angle and the image resolution information.
Corresponding to the image generating method provided by the embodiments of fig. 1 to 9, the present disclosure also provides an image generating apparatus, and since the image generating apparatus provided by the embodiments of the present disclosure corresponds to the image generating method provided by the embodiments of fig. 1 to 9, the implementation of the image generating method is also applicable to the image generating apparatus provided by the embodiments of the present disclosure, and will not be described in detail in the embodiments of the present disclosure.
In this embodiment, a plurality of images are acquired, where the plurality of images respectively correspond to a plurality of shooting positions, each image is shot by the image pickup device at its corresponding shooting position, and the shooting positions of different images are different. Motion amount information of the image pickup device between the different shooting positions is determined, the corresponding plurality of images to be processed are respectively extracted from the plurality of images according to the motion amount information, and the target image is generated according to the plurality of images to be processed. The image to be processed in each image can thus be extracted with reference to the motion amount information of the image pickup device between the different shooting positions, and the subsequent image feature recognition processing is performed based on the extracted images to be processed to generate the target image, which avoids global feature recognition processing on each image, effectively reduces the amount of data to be processed in the image generation process, reduces the time consumed by image generation, and effectively improves the generation efficiency of the target image.
To achieve the above embodiments, the present disclosure further proposes a computer device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, where the image generation method according to the foregoing embodiments of the present disclosure is implemented when the processor executes the program.
In order to implement the above-described embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an image generation method as proposed in the foregoing embodiments of the present disclosure.
To achieve the above-described embodiments, the present disclosure also proposes a computer program product, where an image generation method as proposed by the foregoing embodiments of the present disclosure is performed when instructions in the computer program product are executed by a processor.
FIG. 12 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present disclosure. The computer device 12 shown in fig. 12 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in FIG. 12, the computer device 12 is in the form of a general purpose computing device. Components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (hereinafter: ISA) bus, the Micro Channel Architecture (hereinafter: MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (hereinafter: VESA) local bus, and the Peripheral Component Interconnect (hereinafter: PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory; hereinafter: RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 12, commonly referred to as a "hard disk drive").
Although not shown in FIG. 12, a magnetic disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from and writing to a removable nonvolatile optical disk (e.g., a compact disc read only memory (Compact Disc Read Only Memory; hereinafter: CD-ROM), a digital versatile disc read only memory (Digital Versatile Disc Read Only Memory; hereinafter: DVD-ROM), or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the various embodiments of the present disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods in the embodiments described in this disclosure.
The computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the computer device 12, and/or any devices (e.g., network card, modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, the computer device 12 may also communicate with one or more networks such as a local area network (Local Area Network; hereinafter LAN), a wide area network (Wide Area Network; hereinafter WAN) and/or a public network such as the Internet via the network adapter 20. As shown, network adapter 20 communicates with other modules of computer device 12 via bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with computer device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example implementing the image generation method mentioned in the foregoing embodiments.
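The overall pipeline of the foregoing embodiments — acquire images at several shooting positions, derive the motion amount of the image pickup device between positions, convert it to a view angle overlapping rate, extract the images to be processed, and generate the target image — can be sketched as below. This is a hedged toy model, not the patent's actual implementation: camera positions are reduced to one horizontal coordinate measured in frame widths, and the overlap is assumed to shrink linearly with the translation.

```python
import numpy as np

def view_overlap(dx_frames):
    """Fraction of the field of view shared by two shots whose horizontal
    translation is dx_frames, measured in frame widths (toy model)."""
    return max(0.0, 1.0 - abs(dx_frames))

def generate_target_image(images, xs):
    """Sketch of the claimed pipeline: for each consecutive pair of shots,
    derive the motion amount from the camera positions xs, convert it to a
    view angle overlapping rate, extract only the strip of fresh
    (non-overlapping) content from the newer frame, and concatenate the
    strips into one target image."""
    target = images[0]
    for prev_x, x, img in zip(xs, xs[1:], images[1:]):
        overlap = view_overlap(x - prev_x)
        w = img.shape[1]
        new_cols = int(round(w * (1.0 - overlap)))  # columns of fresh content
        if new_cols > 0:
            target = np.hstack([target, img[:, w - new_cols:]])
    return target
```

For three 4-pixel-wide frames shot at x = 0, 0.5 and 1.0 frame widths, each pair overlaps by half a frame, so two fresh columns are taken from each later frame and the target image is 8 columns wide.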
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It should be noted that in the description of the present disclosure, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present disclosure, unless otherwise indicated, the meaning of "a plurality" is two or more.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. Alternative implementations are also included within the scope of the preferred embodiments of the present disclosure, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art to which the embodiments of the present disclosure pertain.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in the embodiments of the present disclosure may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "example," "specific example," "some examples," or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the present disclosure, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the present disclosure.

Claims (19)

1. An image generation method, comprising:
acquiring a plurality of images, wherein the images respectively correspond to a plurality of shooting positions, the images are acquired at the corresponding shooting positions by an image pickup device, and the image pickup device acquires different images at different shooting positions;
determining motion amount information of the image pickup device between different shooting positions;
respectively extracting a plurality of corresponding images to be processed from the plurality of images according to the motion quantity information;
and generating a target image according to the plurality of images to be processed.
2. The method of claim 1, wherein extracting a corresponding plurality of images to be processed from the plurality of images, respectively, according to the motion amount information, comprises:
determining a plurality of view angle overlapping rates corresponding to the plurality of images respectively according to the motion amount information;
and respectively extracting the images to be processed from the corresponding images according to the overlapping rates of the view angles.
3. The method of claim 2, wherein the determining motion amount information of the image capturing apparatus between different capturing positions includes:
determining pose information of the image pickup device at the shooting position;
determining motion vectors of the image pickup device between different shooting positions according to the pose information;
and taking the motion vector as the motion quantity information.
4. The method of claim 3, wherein the determining a plurality of view angle overlap rates corresponding to the plurality of images, respectively, according to the motion amount information, comprises:
determining a plurality of shooting parameters respectively corresponding to the plurality of images, wherein the images are shot at the shooting positions by the shooting device by adopting the corresponding shooting parameters;
and determining the overlapping rate of the view angles of the images corresponding to the image capturing parameters according to the image capturing parameters and the motion quantity information.
5. The method of claim 4, wherein the motion vector is a translation vector;
wherein the determining, according to the image capturing parameter and the motion amount information, the overlapping rate of the view angle of the image corresponding to the image capturing parameter includes:
analyzing the shooting parameters to obtain the distance between the shooting device and the target object, wherein the image is obtained by shooting the target object by the shooting device based on the shooting parameters;
determining an image size of the image;
and according to the distance and the image size, determining the overlapping rate of the view angle of the image corresponding to the shooting parameter by combining the motion quantity information.
6. The method of claim 4, wherein the motion vector is a rotation vector;
wherein the determining, according to the image capturing parameter and the motion amount information, the overlapping rate of the view angle of the image corresponding to the image capturing parameter includes:
analyzing the shooting parameters to obtain target field angle information of the shooting device corresponding to the rotation vector;
and determining the overlapping rate of the view angle of the image corresponding to the shooting parameter according to the target view angle information and the rotation vector.
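Claims 5 and 6 can be read as two closed-form overlap models: one for a pure translation, using the distance to the target object and the covered scene width, and one for a pure rotation, using the field angle. The patent gives no explicit equations, so the pinhole-style formulas below are an illustrative reconstruction under stated assumptions, not the claimed computation.

```python
import math

def overlap_translation(t, distance, fov_deg):
    """Pure translation t (same units as distance): at that distance the
    camera covers a scene width of 2*d*tan(fov/2), and the shared fraction
    is assumed to shrink linearly with the translation (pinhole model)."""
    scene_w = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    return max(0.0, 1.0 - abs(t) / scene_w)

def overlap_rotation(dtheta_deg, fov_deg):
    """Pure rotation: rotating by the full field angle leaves no shared
    content, so the overlap falls linearly from 1 to 0."""
    return max(0.0, 1.0 - abs(dtheta_deg) / fov_deg)
```

For example, a 30° pan with a 60° field angle leaves half the view shared, while panning 90° or more leaves none.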
7. The method of claim 6, wherein the analyzing the imaging parameters to obtain target angle of view information of the imaging device corresponding to the rotation vector comprises:
determining a plurality of coordinate directions associated with the rotation vector;
analyzing the shooting parameters to obtain initial field angle information of the shooting device;
analyzing the initial view angle information to obtain a plurality of view angles respectively corresponding to the plurality of coordinate directions, and taking the plurality of view angles as the target view angle information.
8. The method according to claim 2, wherein extracting the corresponding plurality of images to be processed from the respective plurality of images according to the plurality of view angle overlapping rates, respectively, includes:
determining a plurality of imaging rotation angles respectively corresponding to the plurality of view angle overlapping rates;
determining a plurality of image resolution information corresponding to the plurality of images, respectively;
and extracting the corresponding image to be processed from the corresponding image according to the view angle overlapping rate, the imaging rotation angle, and the image resolution information.
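Claim 8 combines the view angle overlapping rate, the imaging rotation angle, and the image resolution when extracting the image to be processed. A minimal sketch of one such combination follows; the assumption that the rotation's sign merely selects which side of the frame holds the fresh content is hypothetical, not taken from the patent.

```python
import numpy as np

def extract_region(image, overlap_rate, rotation_deg):
    """Hypothetical extraction step: the overlap rate and the image
    resolution (width in pixels) fix how many columns are new content,
    and the sign of the imaging rotation fixes which side of the frame
    those columns lie on."""
    h, w = image.shape[:2]
    new_cols = max(1, int(round(w * (1.0 - overlap_rate))))
    if rotation_deg >= 0:           # panned right: fresh pixels on the right
        return image[:, w - new_cols:]
    return image[:, :new_cols]      # panned left: fresh pixels on the left
```

With a 4-column frame and a 50% overlap, a rightward pan keeps the right two columns; with 75% overlap and a leftward pan, only the leftmost column is kept.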
9. An image generating apparatus, comprising:
the image acquisition module is used for acquiring a plurality of images, wherein the images respectively correspond to a plurality of shooting positions, the images are acquired at the corresponding shooting positions by the image pickup device, and the image pickup device acquires different images at different shooting positions;
a determining module, used for determining motion amount information of the image pickup device between different shooting positions;
the extraction module is used for respectively extracting a plurality of corresponding images to be processed from the plurality of images according to the motion quantity information;
and the generating module is used for generating a target image according to the plurality of images to be processed.
10. The apparatus of claim 9, wherein the extraction module comprises:
a determining submodule, configured to determine a plurality of view angle overlapping rates corresponding to the plurality of images, respectively, according to the motion amount information;
and the extraction submodule is used for respectively extracting the images to be processed from the corresponding images according to the overlapping rates of the view angles.
11. The apparatus of claim 10, wherein the determining module is configured to:
determining pose information of the image pickup device at the shooting position;
determining motion vectors of the image pickup device between different shooting positions according to the pose information;
and taking the motion vector as the motion quantity information.
12. The apparatus of claim 11, wherein the determining submodule is configured to:
determining a plurality of shooting parameters respectively corresponding to the plurality of images, wherein the images are shot at the shooting positions by the shooting device by adopting the corresponding shooting parameters;
and determining the overlapping rate of the view angles of the images corresponding to the image capturing parameters according to the image capturing parameters and the motion quantity information.
13. The apparatus of claim 12, wherein the motion vector is a translation vector;
wherein the determining submodule is configured to:
analyzing the shooting parameters to obtain the distance between the shooting device and the target object, wherein the image is obtained by shooting the target object by the shooting device based on the shooting parameters;
determining an image size of the image;
and according to the distance and the image size, determining the overlapping rate of the view angle of the image corresponding to the shooting parameter by combining the motion quantity information.
14. The apparatus of claim 12, wherein the motion vector is a rotation vector;
wherein the determining submodule is configured to:
analyzing the shooting parameters to obtain target field angle information of the shooting device corresponding to the rotation vector;
and determining the overlapping rate of the view angle of the image corresponding to the shooting parameter according to the target view angle information and the rotation vector.
15. The apparatus of claim 14, wherein the determining submodule is configured to:
determining a plurality of coordinate directions associated with the rotation vector;
analyzing the shooting parameters to obtain initial field angle information of the shooting device;
analyzing the initial view angle information to obtain a plurality of view angles respectively corresponding to the plurality of coordinate directions, and taking the plurality of view angles as the target view angle information.
16. The apparatus of claim 10, wherein the extraction submodule is configured to:
determining a plurality of imaging rotation angles respectively corresponding to the plurality of view angle overlapping rates;
determining a plurality of image resolution information corresponding to the plurality of images, respectively;
and extracting the corresponding image to be processed from the corresponding image according to the view angle overlapping rate, the imaging rotation angle, and the image resolution information.
17. A computer device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-8.
CN202210028353.7A 2022-01-11 2022-01-11 Image generation method, device, computer equipment and storage medium Pending CN116468779A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210028353.7A CN116468779A (en) 2022-01-11 2022-01-11 Image generation method, device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116468779A true CN116468779A (en) 2023-07-21

Family

ID=87182937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210028353.7A Pending CN116468779A (en) 2022-01-11 2022-01-11 Image generation method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116468779A (en)

Similar Documents

Publication Publication Date Title
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN108805917B (en) Method, medium, apparatus and computing device for spatial localization
US9635251B2 (en) Visual tracking using panoramas on mobile devices
US7554575B2 (en) Fast imaging system calibration
US7317558B2 (en) System and method for image processing of multiple images
US20120300020A1 (en) Real-time self-localization from panoramic images
US20150262346A1 (en) Image processing apparatus, image processing method, and image processing program
EP2640057A1 (en) Image processing device, image processing method and program
US20210082086A1 (en) Depth-based image stitching for handling parallax
CN109005334B (en) Imaging method, device, terminal and storage medium
US8417062B2 (en) System and method for stabilization of fisheye video imagery
US10063792B1 (en) Formatting stitched panoramic frames for transmission
JP2013009050A (en) Image processing apparatus and image processing method
WO2005024723A1 (en) Image combining system, image combining method, and program
JP5251410B2 (en) Camera work calculation program, imaging apparatus, and camera work calculation method
JP2016212784A (en) Image processing apparatus and image processing method
JP2007104516A (en) Image processor, image processing method, program, and recording medium
KR101529820B1 (en) Method and apparatus for determing position of subject in world coodinate system
Bevilacqua et al. A fast and reliable image mosaicing technique with application to wide area motion detection
CN116468779A (en) Image generation method, device, computer equipment and storage medium
WO2018150086A2 (en) Methods and apparatuses for determining positions of multi-directional image capture apparatuses
EP3318059A1 (en) Stereoscopic image capture
Fadaeieslam et al. Efficient key frames selection for panorama generation from video
JP2005309782A (en) Image processor
Herbon et al. Adaptive planar and rotational image stitching for mobile devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination