CN116485660A - Image acquisition method and three-dimensional reconstruction method - Google Patents

Image acquisition method and three-dimensional reconstruction method

Info

Publication number
CN116485660A
CN116485660A (application CN202211466415.9A)
Authority
CN
China
Prior art keywords
exposure
target area
round
projection
complete
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211466415.9A
Other languages
Chinese (zh)
Other versions
CN116485660B (en)
Inventor
吴朋林
王佳奇
邓小婷
李宏坤
樊钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Migration Technology Co ltd
Original Assignee
Beijing Migration Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Migration Technology Co ltd filed Critical Beijing Migration Technology Co ltd
Priority to CN202211466415.9A
Publication of CN116485660A
Application granted
Publication of CN116485660B
Priority to PCT/CN2023/132826 (published as WO2024109719A1)
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 Denoising; Smoothing
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10144 Varying exposure
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present disclosure provides an image acquisition method, comprising: performing a first round of scanning projection on a target area while synchronously performing a first round of multiple exposures at a first preset time interval sequence, to acquire a first exposure image sequence of the target area; and performing a second round of scanning projection on the target area while synchronously performing a second round of multiple exposures at a second preset time interval sequence, to acquire a second exposure image sequence of the target area. The area covered by the second round of multiple exposures includes at least the area of the target area left unexposed by the first round of multiple exposures, so that during three-dimensional reconstruction a completely exposed area of the target area can be obtained by fusion based on the first exposure image sequence and the second exposure image sequence. The present disclosure also provides a three-dimensional reconstruction method, an image acquisition apparatus, an image acquisition system, an electronic device, a readable storage medium, and a program product.

Description

Image acquisition method and three-dimensional reconstruction method
Technical Field
The present disclosure relates to the field of three-dimensional vision technology, and in particular, to an image acquisition method, a three-dimensional reconstruction method, an image acquisition apparatus, an image acquisition system, an electronic device, a storage medium, and a program product.
Background
In the prior art, a stripe structured-light camera built from an area-array camera and a laser galvanometer is conventionally controlled by starting a global exposure of the camera at the beginning of projection and ending it when projection finishes.
Because the projected stripes are formed by a laser line sweeping through the camera's field of view, this prior-art scheme has the problem that the effective exposure time of the projected stripes in the captured image is far smaller than the actual exposure time to ambient light. Especially when ambient irradiance is strong, the stripe image captured by the global-exposure area-array camera has a low signal-to-noise ratio, and the point cloud cannot be reconstructed.
Disclosure of Invention
With the projection time unchanged, the present disclosure reduces the exposure time of the camera device in order to improve the signal-to-noise ratio. To this end, the present disclosure provides an image acquisition method, a three-dimensional reconstruction method, an image acquisition apparatus, an image acquisition system, an electronic device, a storage medium, and a program product.
According to one aspect of the present disclosure, there is provided an image acquisition method including:
performing first-round scanning projection on the target area, and synchronously performing first-round multiple exposure at a first preset time interval sequence to acquire a first exposure image sequence of the target area;
Performing a second round of scanning projection on the target area, and synchronously performing a second round of multiple exposure at a second preset time interval sequence to acquire a second exposure image sequence of the target area;
wherein the area covered by the second round of multiple exposures includes at least the area of the target area left unexposed by the first round of multiple exposures, so that in three-dimensional reconstruction a completely exposed area of the target area is obtained by fusion based on the first exposure image sequence and the second exposure image sequence.
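As an illustrative sketch only (the disclosure specifies no implementation, and all parameter names here are assumptions), the trigger-time sequences of the two rounds with equal intervals might be generated as follows:

```python
# Sketch of the two-round trigger scheme described above.
# T: projection period; n: exposures per period; delay: offset of the
# first trigger from the start of the round's scanning projection.
# These names are illustrative, not from the patent.

def trigger_times(T, n, delay=0.0):
    """Return n exposure trigger times within one projection period T,
    spaced at the equal interval T / n and offset by `delay`."""
    interval = T / n
    return [delay + i * interval for i in range(n)]

# Round 1: the first trigger coincides with the projection start.
round1 = trigger_times(T=10.0, n=5)
# Round 2: the first trigger is delayed so this round exposes the
# regions the first round missed while reading out charge.
round2 = trigger_times(T=10.0, n=5, delay=1.0)
```

With these example numbers, round 1 triggers at 0, 2, 4, 6, 8 and round 2 at 1, 3, 5, 7, 9, interleaving the exposed bands of the two rounds.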
According to the image acquisition method of at least one embodiment of the present disclosure, the trigger time of the first exposure in the first multiple exposure is the start time of the first scanning projection, and the trigger time of the first exposure in the second multiple exposure has a preset delay time relative to the start time of the second scanning projection.
According to an image capturing method of at least one embodiment of the present disclosure, the first preset time interval sequence is an equal time interval sequence.
According to an image capturing method of at least one embodiment of the present disclosure, the second preset time interval sequence is an equal time interval sequence.
According to an image capturing method of at least one embodiment of the present disclosure, at least two time intervals in the first preset time interval sequence are different.
According to an image capturing method of at least one embodiment of the present disclosure, at least two time intervals in the second preset time interval sequence are different.
According to an image capturing method of at least one embodiment of the present disclosure, the time intervals of the first preset time interval sequence are the same as those of the second preset time interval sequence.
According to an image capturing method of at least one embodiment of the present disclosure, the preset delay time period is greater than or equal to the first preset time interval.
According to the image acquisition method of at least one embodiment of the present disclosure, the preset delay time is less than or equal to a first exposure time of the first multiple exposures.
An image acquisition method according to at least one embodiment of the present disclosure, the scanning projection comprising:
outputting a preset-format image for a preset duration and sweeping it across the target area so as to cover the whole target area.
According to an image capturing method of at least one embodiment of the present disclosure, each exposure of the first plurality of exposures has the same exposure duration.
According to an image capturing method of at least one embodiment of the present disclosure, each exposure of the second round of multiple exposures has the same exposure duration.
According to an image capturing method of at least one embodiment of the present disclosure, exposure durations of at least two of the first multiple exposures are different.
According to an image capturing method of at least one embodiment of the present disclosure, exposure durations of at least two exposures in the second round of multiple exposures are different.
According to an image capturing method of at least one embodiment of the present disclosure, the exposure duration of each exposure of the first round of multiple exposures is the same as the exposure duration of each exposure of the second round of multiple exposures.
According to an image capturing method of at least one embodiment of the present disclosure, the number of exposures of the first round of multiple exposures is the same as, or different from, the number of exposures of the second round of multiple exposures.
According to an image acquisition method of at least one embodiment of the present disclosure, fusing the first exposure image sequence and the second exposure image sequence includes:
acquiring a first relative trigger time sequence based on trigger time of each exposure of a first round of multiple exposures relative to a start time of the first round of scanning projections;
Acquiring a second relative trigger time sequence based on the trigger time of each exposure of a second round of multiple exposures relative to the start time of the second round of scanning projections;
the first and second sequences of exposure images are fused based on the first and second relative trigger time sequences.
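The fusion steps above can be sketched as follows. This is a hypothetical toy model, not the patent's method: it assumes the stripe sweeps the image width linearly over one projection period, so each exposure's trigger time (relative to its round's projection start) determines which columns it validly captured; overlapping columns are averaged.

```python
import numpy as np

def fuse_rounds(images1, triggers1, images2, triggers2, T, exposure):
    """Fuse two exposure image sequences into one fully exposed image.

    triggers1/triggers2: trigger times relative to each round's
    projection start; T: projection period; exposure: per-shot duration.
    An exposure triggered at time t is assumed to cover the column band
    [w*t/T, w*(t+exposure)/T) of an image of width w.
    """
    h, w = images1[0].shape
    acc = np.zeros((h, w))
    cnt = np.zeros(w)
    for imgs, trig in ((images1, triggers1), (images2, triggers2)):
        for img, t in zip(imgs, trig):
            c0 = int(w * t / T)
            c1 = int(w * min(t + exposure, T) / T)
            acc[:, c0:c1] += img[:, c0:c1]   # accumulate the valid band
            cnt[c0:c1] += 1                  # count overlaps per column
    cnt = np.where(cnt == 0, 1, cnt)         # leave uncovered columns at 0
    return acc / cnt
```

For example, with T = 8 and exposure = 2, round-1 triggers [0, 4] cover columns 0-2 and 4-6 of an 8-column image, and round-2 triggers [2, 6] cover the remaining bands, yielding a completely exposed result.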
According to another aspect of the present disclosure, there is provided a three-dimensional reconstruction method for performing three-dimensional reconstruction based on a first exposure image sequence and a second exposure image sequence of a target region acquired by an image acquisition method of any one embodiment of the present disclosure.
A three-dimensional reconstruction method according to at least one embodiment of the present disclosure includes:
acquiring a complete exposure image of a complete exposure area of the target area based on the first exposure image sequence and the second exposure image sequence;
acquiring a complete phase map of the target area based on the complete exposure image;
acquiring complete depth information of the target area based on the complete phase map;
and acquiring the complete point cloud information of the target area based on the complete depth information.
According to a three-dimensional reconstruction method of at least one embodiment of the present disclosure, acquiring a complete exposure image of a complete exposure area of the target area based on the first exposure image sequence and the second exposure image sequence includes:
And performing superposition and/or stitching operation on the first exposure image sequence and the second exposure image sequence to acquire the complete exposure image.
A three-dimensional reconstruction method according to at least one embodiment of the present disclosure includes:
acquiring a first phase map based on the first exposure image sequence, and acquiring a second phase map based on the second exposure image sequence;
acquiring a complete phase map of the target area based on the first phase map and the second phase map;
acquiring complete depth information of the target area based on the complete phase map;
and acquiring the complete point cloud information of the target area based on the complete depth information.
According to the three-dimensional reconstruction method of at least one embodiment of the present disclosure, obtaining a complete phase map of the target area based on the first phase map and the second phase map includes:
and performing superposition and/or splicing operation on the first phase map and the second phase map to obtain the complete phase map.
A three-dimensional reconstruction method according to at least one embodiment of the present disclosure includes:
acquiring a first phase map based on the first exposure image sequence, and acquiring a second phase map based on the second exposure image sequence;
Acquiring first depth information of the target area based on the first phase map, and acquiring second depth information of the target area based on the second phase map;
acquiring first point cloud information of the target area based on the first depth information, and acquiring second point cloud information of the target area based on the second depth information;
and fusing the first point cloud information and the second point cloud information to obtain the complete point cloud information of the target area.
According to the three-dimensional reconstruction method of at least one embodiment of the present disclosure, fusing the first point cloud information and the second point cloud information to obtain complete point cloud information of the target area includes:
and performing superposition and/or splicing operation on the first point cloud information and the second point cloud information to obtain the complete point cloud information.
According to the three-dimensional reconstruction method of at least one embodiment of the present disclosure, the superposition comprises applying average weighting to the different objects being superposed in an overlap region, such that the weights of the different objects sum to 1.
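The averaging rule just stated can be sketched as follows (a minimal illustration with assumed names, not the disclosure's implementation): where k objects overlap, each receives weight 1/k, so the weights always sum to 1.

```python
import numpy as np

def superimpose(layers, masks):
    """Average-weight superposition of `layers` (images, phase maps, or
    rasterized point data) wherever the matching `masks` are valid."""
    layers = np.asarray(layers, dtype=np.float64)
    masks = np.asarray(masks, dtype=np.float64)
    counts = masks.sum(axis=0)               # k = overlap count per pixel
    weights = np.divide(masks, counts,
                        out=np.zeros_like(masks),
                        where=counts > 0)    # per-layer weight 1/k
    return (layers * weights).sum(axis=0)
```

Pixels covered by a single layer pass through unchanged (weight 1), doubly covered pixels get the mean of both contributions, and uncovered pixels remain zero.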
According to still another aspect of the present disclosure, there is provided an image capturing apparatus including:
the projection control instruction generation module is used for generating a first projection control instruction to perform first-round scanning type projection on the target area based on the first projection control instruction, and is also used for generating a second projection control instruction to perform second-round scanning type projection on the target area based on the second projection control instruction;
An exposure control instruction generation module for generating a first exposure control instruction to synchronously perform a first round of multiple exposures at a first preset time interval sequence based on the first exposure control instruction to acquire a first exposure image sequence of a target area, and for generating a second exposure control instruction to synchronously perform a second round of multiple exposures at a second preset time interval sequence based on the second exposure control instruction to acquire a second exposure image sequence of the target area;
the second-round multi-exposure area at least comprises a first-round multi-exposure non-exposure area in the target area, and the second-round multi-exposure area is used for obtaining a complete exposure area of the target area by information fusion based on the first exposure image sequence and the second exposure image sequence in three-dimensional reconstruction.
According to yet another aspect of the present disclosure, there is provided an image acquisition system including:
a projection device;
a camera device;
the projection control instruction generation module is used for generating a first projection control instruction so that the projection device can perform first-round scanning projection on a target area based on the first projection control instruction, and is also used for generating a second projection control instruction so that the projection device can perform second-round scanning projection on the target area based on the second projection control instruction;
An exposure control instruction generation module for generating a first exposure control instruction to cause the camera apparatus to synchronously perform a first plurality of times of exposure at a first preset time interval sequence based on the first exposure control instruction to acquire a first exposure image sequence of a target area, and for generating a second exposure control instruction to cause the camera apparatus to synchronously perform a second plurality of times of exposure at a second preset time interval sequence based on the second exposure control instruction to acquire a second exposure image sequence of the target area;
the second-round multi-exposure area at least comprises a first-round multi-exposure non-exposure area in the target area, and the second-round multi-exposure area is used for obtaining a complete exposure area of the target area by information fusion based on the first exposure image sequence and the second exposure image sequence in three-dimensional reconstruction.
An image acquisition system according to at least one embodiment of the present disclosure, the projection device includes a line spot emitter and a galvanometer.
An image acquisition system according to at least one embodiment of the present disclosure, the camera device includes one camera or includes a plurality of cameras.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including:
a memory storing execution instructions;
and a processor executing the execution instructions stored in the memory, so that the processor executes the image acquisition method according to any one embodiment of the present disclosure and/or the three-dimensional reconstruction method according to any one embodiment of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a readable storage medium having stored therein execution instructions which, when executed by a processor, are to implement the image acquisition method of any one embodiment of the present disclosure and/or the three-dimensional reconstruction method of any one embodiment of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the image acquisition method of any one of the embodiments of the present disclosure and/or the three-dimensional reconstruction method of any one of the embodiments of the present disclosure.
According to the image acquisition method of the present disclosure, the exposure time within one projection period is set smaller than the projection time, and multiple region-wise exposures are performed within a single projection, so that in each region the exposure time to ambient light is reduced, noise is lowered, and the signal-to-noise ratio is improved.
According to the above technical schemes, the ambient-light exposure time of the image generated in each single exposure period is reduced from the original T (i.e., the global-exposure method) to T/n, where n is the number of exposures per projection period. This improves the signal-to-noise ratio of the fringe image and hence of the final image, and strengthens the camera device's resistance to ambient-light interference.
According to some technical schemes of the present disclosure, the single exposure duration t is set such that t > T/2n, which solves the problem of two rounds of scanning projection failing to cover the whole projection area (i.e., the whole target area); and such that t < T/n and T/n > 1/f, where f is the camera frame rate, which solves the problem of camera frame loss.
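The timing constraints above can be checked with a small helper (names assumed; the disclosure only states the inequalities):

```python
# T: projection period, n: exposures per period,
# t: single exposure duration, f: camera frame rate.

def timing_ok(T, n, t, f):
    """Check the three constraints stated in the disclosure:
    t > T/(2n): two rounds of scanning cover the whole target area;
    t < T/n:    each exposure fits in its slot within one period;
    T/n > 1/f:  the trigger interval never outruns the frame rate."""
    return T / (2 * n) < t < T / n and T / n > 1 / f
```

For instance, with a 10 ms projection period, 5 exposures per period, and a 1 kHz camera, a 1.2 ms exposure satisfies all three constraints, while a 0.8 ms exposure fails the coverage constraint and a 100 Hz camera fails the frame-rate constraint.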
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
Fig. 1 is a schematic overall flow diagram of an image acquisition method according to one embodiment of the present disclosure.
Fig. 2 illustrates the trigger timing of the exposure trigger signal and the charge output periods during a two-round scanning projection in an image acquisition method according to an embodiment of the present disclosure.
Fig. 3 illustrates the timing sequence of the exposure trigger signal during a two-round scanning projection in an image acquisition method according to an embodiment of the present disclosure.
Fig. 4 is a flow diagram of a three-dimensional reconstruction method according to one embodiment of the present disclosure.
Fig. 5 shows a flow diagram of an image fusion method of one embodiment of the present disclosure.
Fig. 6 is a flow diagram of a three-dimensional reconstruction method according to another embodiment of the present disclosure.
Fig. 7 is a flow diagram of a three-dimensional reconstruction method according to yet another embodiment of the present disclosure.
Fig. 8 is a block diagram schematically illustrating the structure of an image capturing apparatus employing a hardware implementation of a processing system according to one embodiment of the present disclosure.
Fig. 9 is a block diagram schematically illustrating the structure of an image acquisition system according to an embodiment of the present disclosure.
Description of the reference numerals
100. Image acquisition system
110. Projection device
120. Camera device
1000. Image acquisition device
1002. Projection control instruction generation module
1004. Exposure control instruction generation module
1100. Bus line
1101. Line spot emitter
1102. Galvanometer
1103. Control board
1200. Processor
1300. Memory
1400. Other circuits
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and the embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant content and not limiting of the present disclosure. It should be further noted that, for convenience of description, only a portion relevant to the present disclosure is shown in the drawings.
In addition, embodiments of the present disclosure and features of the embodiments may be combined with each other without conflict. The technical aspects of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Unless otherwise indicated, the exemplary implementations/embodiments shown are to be understood as providing exemplary features of various details of some ways in which the technical concepts of the present disclosure may be practiced. Thus, unless otherwise indicated, features of the various implementations/embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concepts of the present disclosure.
The use of cross-hatching and/or shading in the drawings is typically used to clarify the boundaries between adjacent components. As such, the presence or absence of cross-hatching or shading does not convey or represent any preference or requirement for a particular material, material property, dimension, proportion, commonality between illustrated components, and/or any other characteristic, attribute, property, etc. of a component, unless indicated. In addition, in the drawings, the size and relative sizes of elements may be exaggerated for clarity and/or descriptive purposes. While the exemplary embodiments may be variously implemented, the specific process sequences may be performed in a different order than that described. For example, two consecutively described processes may be performed substantially simultaneously or in reverse order from that described. Moreover, like reference numerals designate like parts.
When an element is referred to as being "on" or "over", "connected to" or "coupled to" another element, it can be directly on, connected or coupled to the other element or intervening elements may be present. However, when an element is referred to as being "directly on," "directly connected to," or "directly coupled to" another element, there are no intervening elements present. For this reason, the term "connected" may refer to physical connections, electrical connections, and the like, with or without intermediate components.
The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising," and variations thereof, are used in the present specification, the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof is described, but the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof is not precluded. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximation terms and not as degree terms, and as such, are used to explain the inherent deviations of measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art.
The image capturing method, the three-dimensional reconstruction method, the image capturing apparatus, the image capturing system, and the like of the present disclosure are described in detail below with reference to fig. 1 to 9.
Fig. 1 is a schematic overall flow diagram of an image acquisition method according to one embodiment of the present disclosure.
Referring first to fig. 1, in some embodiments of the present disclosure, an image acquisition method S100 of the present disclosure includes:
s102, performing first-round scanning projection on a target area, and synchronously performing first-round multiple exposure at a first preset time interval sequence to acquire a first exposure image sequence (the first exposure image sequence comprises a plurality of exposure images) of the target area;
s104, performing a second round of scanning projection on the target area, and synchronously performing a second round of multiple exposure at a second preset time interval sequence to acquire a second exposure image sequence (the second exposure image sequence comprises a plurality of exposure images) of the target area;
wherein the second-pass multiple-exposure regions described in this disclosure include at least the first-pass multiple-exposure unexposed regions of the target region.
Based on the first exposure image sequence and the second exposure image sequence acquired by the image acquisition method S100 of the present disclosure, a complete exposure area of the target area may be obtained for three-dimensional reconstruction to obtain a three-dimensional image of the target area.
The image acquisition method of the present disclosure may be used to acquire a first exposure image sequence and a second exposure image sequence, such as those described above, of an industrial surface (i.e., a target area).
The scanning projection described in the present disclosure may be scanning laser projection, performed by a projection device comprising a line spot emitter (e.g., a laser), a galvanometer (e.g., a laser galvanometer), and a control board (e.g., a control chip or a control circuit board). The control board outputs the desired galvanometer position signal while the galvanometer simultaneously feeds back its real-time position signal; at the computed galvanometer position, the control board turns the laser on or off according to a preset-format image (e.g., a stripe-format image). A preset-format image can thus be output within one preset projection duration, i.e., one projection period (T); if the projection duration equals the exposure duration of the camera device, the camera device can capture a complete format image (e.g., a complete stripe image).
However, because scanning projection is used (taking scanning laser projection as an example), at any given instant the laser line appears at only one position in the target area. Exposure is maintained throughout the projection, but ambient-light noise is introduced while the camera device (e.g., a 2D camera) is exposed. The magnitude of this noise is proportional to the exposure time: the longer the camera device's exposure within one projection period, the more ambient-light noise is introduced.
Accordingly, the present disclosure contemplates reducing the exposure time of a camera device to increase the signal-to-noise ratio with the projection time unchanged.
The exposure time in one projection period is set to be smaller than the projection time, and the regional multiple exposure is carried out in one projection, and each region is respectively exposed, so that the exposure time of ambient light can be reduced, noise is reduced, and the signal to noise ratio is improved.
Due to hardware limitations of the camera device, in the multiple exposures performed within one projection period, a certain time interval is required between the end of one exposure and the start of the next in order to read out charge: the image sensor generates photoelectrons while receiving light during exposure, and this charge must be read out in time after the exposure ends before the next exposure can begin. If the charge is not fully read out, part of the exposure image within that projection period is lost.
Thus, the present disclosure performs a plurality of exposures at preset time intervals in one projection period.
Considering that during laser projection the projection pattern must scan over the whole exposure area (i.e., the whole target area), the pattern keeps moving during capture (during exposure). The camera device therefore misses part of the projection pattern while reading out charge after each shot (exposure); in other words, because of the preset time intervals mentioned above, part of the projection pattern goes uncaptured within one projection period.
Accordingly, the present disclosure achieves coverage of the entire projection area (i.e., the entire target area) by performing at least two scanning projections and arranging the second-round multi-exposure area described in the present disclosure to include at least the area of the target area left unexposed by the first-round multiple exposure. To improve image acquisition efficiency, the present disclosure preferably completes image acquisition in two rounds; in other embodiments, however, providing multiple rounds of scanning projection to achieve an exposure scheme that covers the unexposed areas of the first round also falls within the protection scope of the present disclosure.
In some embodiments of the present disclosure, the trigger time of the first exposure in the first multiple exposure described in the image acquisition method S100 of the present disclosure is a start time of the first scanning projection, and the trigger time of the first exposure in the second multiple exposure has a preset delay time relative to the start time of the second scanning projection.
The purpose of setting the preset delay time for the first exposure of the second round of projection is to further reduce the ambient light exposure time and improve the signal-to-noise ratio of image acquisition. Meanwhile, to ensure that the exposure area achieved by the second round of projection covers the area left unexposed in the first round, in some embodiments of the present disclosure the preset delay duration described above is preferably greater than or equal to the interval duration between successive exposures of the first round. Further, to prevent camera frame loss, in some embodiments of the present disclosure the preset delay duration described above is preferably less than or equal to the duration of the first exposure of the first round of multiple exposures.
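The preferred bounds above (preset delay at least the first round's inter-exposure interval, and at most the first-round single-exposure duration) can be sketched as a simple validity check. The function name and parameters below are illustrative assumptions, not part of the disclosed embodiments:

```python
def delay_is_valid(preset_delay, first_round_interval, first_exposure_duration):
    """Check the preferred bounds on the second-round trigger delay:
    at least the first round's inter-exposure interval (so the second
    round covers the first round's charge-output gaps), and at most the
    first-round single-exposure duration (to avoid camera frame loss)."""
    return first_round_interval <= preset_delay <= first_exposure_duration
```

For example, with an inter-exposure interval of 0.1 s and a first-exposure duration of 0.15 s, admissible delays lie in [0.1, 0.15] s.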
In multi-round projection, exposure of the whole target area is achieved by setting exposure time and exposure duration. On the basis, setting and adjusting a first preset time interval sequence, a second preset time interval sequence, a first round of exposure time length and a second round of exposure time length are all within the protection scope of the present disclosure.
For example, in some embodiments of the present disclosure, the first predetermined time interval sequence may be an equal time interval sequence and/or the second predetermined time interval sequence may be an equal time interval sequence. Wherein each interval duration is not shorter than the charge output period in the previous exposure period.
Preferably, in the image acquisition method of the present disclosure, the first preset time interval sequence and the second preset time interval sequence are both equal time interval sequences.
In other embodiments of the present disclosure, at least two time intervals in the first sequence of preset time intervals may be set to be different and/or at least two time intervals in the second sequence of preset time intervals may be set to be different, i.e. the first sequence of preset time intervals, the second sequence of preset time intervals may be a sequence of non-equal time intervals.
The adjustment of the time intervals of the first preset time interval sequence and the second preset time interval sequence falls within the protection scope of the present disclosure by a person skilled in the art in light of the technical scheme of the present disclosure.
Preferably, in some embodiments of the present disclosure, the first and second predetermined time interval sequences are each equal time interval sequences, and the time intervals of both are the same.
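An equal-time-interval trigger sequence as described above can be generated as in the following sketch (the helper name is hypothetical, not from the disclosure). A delay of 0 gives the first-round sequence; a non-zero delay gives the second-round sequence:

```python
def trigger_times(T, n, delay=0.0):
    """Equal-interval exposure trigger times within one projection
    period T divided into n segments, optionally shifted by a preset
    delay (as used for the second round of multiple exposure)."""
    step = T / n
    return [k * step + delay for k in range(n)]
```

With T = 1 s and n = 4, the first round triggers at 0, 0.25, 0.5, 0.75 s, and a T/2n delay shifts the second round by 0.125 s.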
Through the above arrangement, the image acquisition method of the present disclosure can achieve complete coverage of the target area.
Fig. 2 illustrates a trigger timing of an exposure trigger signal, a charge output period during a two-pass scanning projection in an image acquisition method according to an embodiment of the present disclosure.
Fig. 3 illustrates a timing sequence of an exposure trigger signal during two-pass scanning projection in an image acquisition method according to an embodiment of the present disclosure.
Referring to fig. 2 and 3, the image acquisition method of the present disclosure includes two scanning projections, i.e., a first scanning projection and a second scanning projection. To facilitate control of the two scanning projections, as well as exposure control during each scanning projection, the projection periods of the two scanning projections may be set to be the same, for example both equal to T (i.e., both projection durations are T, for example 1 second). In other embodiments, the projection periods may also be set to be different.
It will be appreciated by those skilled in the art that there is a time interval between the first and second scan projections.
For example, the projection time length T of each projection, the number of times of each exposure, n, and the exposure time T of a single image may be set according to the image acquisition scene.
During development, the inventors found that certain problems often arise, such as the projection failing to cover the whole projection area, or the camera dropping frames. To solve these problems, the present disclosure sets the single-image exposure time t such that t > T/2n, which solves the problem that two rounds of scanning projection cannot cover the entire projection area (i.e., the entire target area), and sets t < T/n with T/n > 1/f, where f is the camera frame rate, which solves the camera frame-loss problem.
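The parameter constraints just stated can be collected into a single check, sketched below (an illustrative helper under the assumptions above, not a disclosed implementation):

```python
def params_ok(T, n, t, f):
    """Verify the constraints on projection duration T, exposure count n,
    single-image exposure time t and camera frame rate f:
      t > T/(2n)  -> two rounds jointly cover the whole target area
      t < T/n     -> each exposure fits within its trigger interval
      T/n > 1/f   -> the trigger rate does not exceed the camera frame rate."""
    step = T / n
    return t > step / 2 and t < step and step > 1.0 / f
```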
Referring to fig. 2 and 3, the image acquisition method of the present disclosure first performs a first round of scanning projection:
triggering the 2D camera to take a picture (i.e., triggering the 2D camera to start exposure) at the k × (T/n)-th (milli)second, where k ∈ [0, n) is an integer. The period T of the first scanning projection is divided into n segments, each exposed separately; in each exposure period the camera receives an exposure trigger signal and starts exposure with exposure time t; after the exposure is completed, the camera outputs the generated charges (the charge output period) and then waits for the next exposure trigger signal.
Four exposures performed based on four exposure triggers in a first round of scanning projection and four exposures performed based on four exposure triggers in a second round of scanning projection are exemplarily shown in fig. 2.
After the first round of projection and exposure are completed, a second round of scanning projection is performed:
triggering the 2D camera to shoot (i.e., triggering the 2D camera to start exposure) at the k × (T/n) + T/2n-th (milli)second, where k ∈ [0, n) is an integer. The exposure trigger signal corresponding to the first exposure of the second round of multiple exposure is thus delayed by T/2n relative to the start time of the second round of scanning projection, so as to cover the unexposed periods caused by the charge output requirement within each single exposure period of the first round of scanning projection.
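That the T/2n offset indeed closes the first round's charge-output gaps can be checked numerically. The sketch below samples the projection period and tests whether every instant falls inside some exposure window of either round (a hypothetical helper, assuming the timing scheme above):

```python
def covered(T, n, t):
    """Sampled check that the first-round windows [k*T/n, k*T/n + t)
    together with the second-round windows shifted by T/(2n) cover [0, T)."""
    step = T / n
    windows = [(k * step, k * step + t) for k in range(n)]
    windows += [(k * step + step / 2, k * step + step / 2 + t) for k in range(n)]
    samples = (i * T / 1000 for i in range(1000))
    return all(any(a <= s < b for a, b in windows) for s in samples)
```

With T = 1 and n = 4, the constraint t > T/2n = 0.125 is exactly what makes the coverage check pass.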
In light of the technical solution of the present disclosure, a person skilled in the art may adjust the trigger delay, so long as the second round of multiple exposure can cover the period of non-exposure caused by the charge output requirement during the single exposure period during the first round of scanning projection, which falls within the protection scope of the present disclosure.
As can be seen from the above description, after two projections, the exposure areas of the cameras during the two projections are combined, so that the complete exposure area of the target area during the whole exposure period can be obtained.
In some embodiments of the present disclosure, the image data corresponding to the charge output periods of the first round of scanning projection is recovered from the second exposure image sequence acquired during the second round of scanning projection; integrating it with the first exposure image sequence yields the complete image of the entire first round of projection.
By the image acquisition method S100 described above, the ambient light exposure time of the image (stripe image) generated in each single exposure period is reduced from the original T (i.e., the global exposure method) to t < T/n (n being the number of exposures in each projection period), improving the signal-to-noise ratio of the stripe images and thus of the final image, and improving the camera's resistance to ambient light interference.
In the image acquisition method S100 of the present disclosure, the scanning projection may include:
outputting a preset format image (preset stripe image) for a preset time period (T) and causing the preset format image to movably irradiate the target area so as to cover the entire target area.
In some embodiments of the present disclosure, each exposure of the first round of multiple exposures in the image acquisition method S100 of the present disclosure has the same exposure time t, and each exposure of the second round of multiple exposures has the same exposure time t.
In other embodiments of the present disclosure, the exposure time period of each exposure of the first plurality of exposures may be set to be different, for example, the exposure time period of at least two exposures in the first plurality of exposures may be different, or the exposure time period of each exposure of the second plurality of exposures may be set to be different, for example, the exposure time period of at least two exposures in the second plurality of exposures may be different.
The exposure time length of each exposure of the first round of multiple exposure and the exposure time length of each exposure of the second round of multiple exposure are adjusted by a person skilled in the art under the teaching of the technical proposal of the disclosure, and all fall into the protection scope of the disclosure.
The number of exposure times of the first multiple exposure and the number of exposure times of the second multiple exposure in the image acquisition method S100 of the present disclosure may be the same or different.
In some embodiments of the present disclosure, an image acquisition method of the present disclosure includes:
performing a first round of scanning projection on the target area based on a preset format image (preset stripe image) output for a preset time period T to cover the entire target area;
synchronously carrying out first-round n-time exposure on a target area in a first preset time interval sequence to acquire an exposure image sequence of the target area, wherein the triggering time of the first exposure in the first-round n-time exposure is the starting time of the first-round scanning projection;
Performing a second round of scanning projection on the target area based on the preset format image (preset stripe image) output for a preset time period T to cover the entire target area;
synchronously carrying out a second round of n times of exposure on the target area in a second preset time interval sequence to acquire an exposure image sequence of the target area, wherein the triggering time of the first time exposure in the second round of n times of exposure has a preset delay time length relative to the starting time of the second round of scanning projection, and the preset delay time length is greater than or equal to the first preset time interval;
and fusing the exposure image sequences acquired by the first n times of exposure and the exposure image sequences acquired by the second n times of exposure to obtain the complete exposure image of the target area.
Wherein the single exposure time t of the n exposures satisfies T/2n < t < T/n, with n ≥ 2.
Wherein T/n > 1/f, f is the frame rate of the camera that acquired the exposure image.
The camera described above in the present disclosure is preferably a 2D camera, such as a CCD area array camera, and the structure/type of the camera is not particularly limited in this disclosure, and those skilled in the art may select/adjust the structure/type of the camera under the teaching of the technical solution of this disclosure, which falls within the protection scope of this disclosure.
The present disclosure also provides a three-dimensional reconstruction method for performing three-dimensional reconstruction based on the first exposure image sequence and the second exposure image sequence of the target region acquired by the image acquisition method of any one of the embodiments described above.
Fig. 4 is a flow diagram of a three-dimensional reconstruction method according to one embodiment of the present disclosure.
Referring to fig. 4, the three-dimensional reconstruction method S200 of the present embodiment includes:
s202, acquiring a complete exposure image of a complete exposure area of a target area based on a first exposure image sequence and a second exposure image sequence;
s204, acquiring a complete phase diagram of the target area based on the complete exposure image;
s206, acquiring complete depth information of the target area based on the complete phase diagram;
s208, acquiring complete point cloud information of the target area based on the complete depth information.
Thus, three-dimensional reconstruction of the target area can be completed based on the obtained complete point cloud information.
For the three-dimensional reconstruction method S200 of the present disclosure, the above-described fusion of the first exposure image sequence and the second exposure image sequence is performed based on pixel matching. For example, the fusion of the images can be completed by comparing pixels one by one, judging whether they overlap, and then superimposing or stitching them accordingly.
In other embodiments, the above-described fusing of the first exposed image sequence and the second exposed image sequence may also be performed based on the following image fusion method:
s2022, acquiring a first relative trigger time sequence based on the trigger time of each exposure of the first round of multiple exposures relative to the starting moment of the first round of scanning projection;
s2024, acquiring a second relative trigger time sequence based on the trigger time of each exposure of the second round of multiple exposures relative to the starting moment of the second round of scanning projection;
s2026, fusing the first exposure image sequence and the second exposure image sequence based on the first relative trigger time sequence and the second relative trigger time sequence.
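Steps S2022 to S2026 can be sketched as interleaving the two image sequences by their trigger times relative to the start of their own round (an illustrative sketch; the image objects are stand-ins for real exposure images):

```python
def fuse_by_relative_trigger(seq1, times1, seq2, times2):
    """Merge two exposure image sequences into scan order using each
    image's trigger time relative to its own round's start moment."""
    tagged = list(zip(times1, seq1)) + list(zip(times2, seq2))
    tagged.sort(key=lambda pair: pair[0])  # order by relative trigger time
    return [image for _, image in tagged]
```

With the T/2n delay of the second round, the merged order alternates between first-round and second-round images.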
Fig. 5 shows a flow diagram of an image fusion method of one embodiment of the present disclosure.
Wherein, in some embodiments of the present disclosure, the fusing described above of the present disclosure includes a superposition and/or stitching operation of the exposure images in the sequence of exposure images.
In some embodiments of the present disclosure, superimposing the images preferably includes averaging the different images of the superimposed area such that the sum of the weights of the different images is 1.
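The weighted averaging described above can be sketched pixel-wise as follows, with complementary weights summing to 1 (the 0.5/0.5 default is an illustrative choice, not mandated by the disclosure):

```python
def superimpose(image_a, image_b, weight_a=0.5):
    """Pixel-wise weighted average of two overlapping exposure images;
    the two weights sum to 1, so overall brightness is preserved."""
    weight_b = 1.0 - weight_a
    return [[weight_a * pa + weight_b * pb for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(image_a, image_b)]
```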
Fig. 6 is a flow diagram of a three-dimensional reconstruction method of yet another embodiment of the present disclosure.
Referring to fig. 6, the three-dimensional reconstruction method S300 of the present embodiment includes:
s302, acquiring a first phase map based on a first exposure image sequence, and acquiring a second phase map based on a second exposure image sequence;
s304, acquiring a complete phase map of the target area based on the first phase map and the second phase map;
s306, acquiring complete depth information of a target area based on a complete phase diagram;
s308, acquiring complete point cloud information of the target area based on the complete depth information.
Thus, three-dimensional reconstruction of the target area can be completed based on the obtained complete point cloud information.
In some embodiments of the present disclosure, the obtaining a complete phase map of the target area based on the first phase map and the second phase map described above includes:
and performing a superposition and/or stitching operation on the first phase map and the second phase map to obtain the complete phase map. For example, the fusion of the phase maps can be completed by comparing pixels one by one, judging whether they overlap, and then superimposing or stitching them accordingly.
In some embodiments of the present disclosure, superimposing the phase maps preferably includes averaging the different phase maps of the superimposed area such that the sum of the weights of the different phase maps is 1.
Fig. 7 is a flow diagram of a three-dimensional reconstruction method according to yet another embodiment of the present disclosure.
Referring to fig. 7, the three-dimensional reconstruction method S400 of the present embodiment includes:
s402, acquiring a first phase map based on a first exposure image sequence, and acquiring a second phase map based on a second exposure image sequence;
s404, acquiring first depth information of a target area based on the first phase map, and acquiring second depth information of the target area based on the second phase map;
s406, acquiring first point cloud information of the target area based on the first depth information, and acquiring second point cloud information of the target area based on the second depth information;
and S408, fusing the first point cloud information and the second point cloud information to obtain the complete point cloud information of the target area.
Thereby, three-dimensional reconstruction can be performed based on the complete point cloud information to obtain a three-dimensional image of the target area.
In some embodiments of the present disclosure, fusing the first point cloud information and the second point cloud information to obtain the complete point cloud information of the target area described above includes:
and performing superposition and/or splicing operation on the first point cloud information and the second point cloud information to obtain complete point cloud information.
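A minimal sketch of the superposition/stitching of two point clouds is a set union that keeps each overlapping point only once (the coordinate tuples are hypothetical stand-ins for real point cloud data):

```python
def fuse_point_clouds(cloud_a, cloud_b):
    """Stitch two point clouds into the complete point cloud of the
    target area; points present in both (the overlap) are kept once."""
    merged = {tuple(point) for point in cloud_a} | {tuple(point) for point in cloud_b}
    return sorted(merged)
```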
In the three-dimensional reconstruction method of the present disclosure, existing algorithms may be used for the specific acquisition of the phase map, the depth information, and the point cloud information.

The present disclosure also provides an image acquisition device.
In some embodiments of the present disclosure, the image capturing apparatus 1000 of the present disclosure includes:
the projection control instruction generation module 1002, the projection control instruction generation module 1002 is configured to generate a first projection control instruction to perform a first round of scanning projection on the target area based on the first projection control instruction, and the projection control instruction generation module 1002 is further configured to generate a second projection control instruction to perform a second round of scanning projection on the target area based on the second projection control instruction;
the exposure control instruction generating module 1004, the exposure control instruction generating module 1004 being configured to generate a first exposure control instruction to synchronously perform a first round of multiple exposure at a first preset time interval sequence based on the first exposure control instruction to acquire a first exposure image sequence of the target area, and being further configured to generate a second exposure control instruction to synchronously perform a second round of multiple exposure at a second preset time interval sequence based on the second exposure control instruction to acquire a second exposure image sequence of the target area;
the second-round multi-exposure area at least comprises a first-round multi-exposure unexposed area in the target area, and the first-round multi-exposure unexposed area is used for obtaining a complete exposure area of the target area through information fusion based on the first exposure image sequence and the second exposure image sequence in three-dimensional reconstruction.
In some embodiments of the present disclosure, the image capture device 1000 of the present disclosure may be implemented in the form of a computer software architecture. In other embodiments of the present disclosure, the image capture device 1000 of the present disclosure may be implemented based on a hardware architecture with a processing system, referring to fig. 8.
Fig. 8 is a block diagram illustrating a hardware implementation of an image acquisition apparatus 1000 using a processing system according to an embodiment of the present disclosure.
The image acquisition apparatus 1000 may include corresponding modules that perform each or several of the steps in the flowcharts described above. Thus, each step or several steps in the flowcharts described above may be performed by respective modules, and the apparatus may include one or more of these modules. A module may be one or more hardware modules specifically configured to perform the respective steps, or be implemented by a processor configured to perform the respective steps, or be stored within a computer-readable medium for implementation by a processor, or be implemented by some combination.
The hardware architecture may be implemented using a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. Bus 1100 connects together various circuits including one or more processors 1200, memory 1300, and/or hardware modules. Bus 1100 may also connect various other circuits 1400, such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
Bus 1100 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. The buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, only one connection line is shown in the figure, but this does not mean there is only one bus or one type of bus.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure. The processor performs the various methods and processes described above. For example, method embodiments in the present disclosure may be implemented as a software program tangibly embodied on a machine-readable medium, such as a memory. In some embodiments, part or all of the software program may be loaded and/or installed via memory and/or a communication interface. One or more of the steps of the methods described above may be performed when a software program is loaded into memory and executed by a processor. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above in any other suitable manner (e.g., by means of firmware).
Logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
For the purposes of this description, a "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the readable storage medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps implementing the method of the above embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in each embodiment of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The storage medium may be a read-only memory, a magnetic disk or optical disk, etc.
The present disclosure also provides an image acquisition system. Fig. 9 shows a block schematic diagram of the image acquisition system 100 of one embodiment of the present disclosure.
In some embodiments of the present disclosure, the image acquisition system 100 of the present disclosure includes:
a projection device 110;
a camera device 120;
the projection control instruction generating module 1002, where the projection control instruction generating module 1002 is configured to generate a first projection control instruction to enable the projecting device 110 to perform a first round of scanning projection on the target area based on the first projection control instruction, and the projection control instruction generating module 1002 is further configured to generate a second projection control instruction to enable the projecting device to perform a second round of scanning projection on the target area based on the second projection control instruction;
the exposure control instruction generating module 1004, the exposure control instruction generating module 1004 being configured to generate a first exposure control instruction to cause the camera device 120 to synchronously perform a first round of multiple exposure at a first preset time interval sequence based on the first exposure control instruction to acquire a first exposure image sequence of the target area, and to generate a second exposure control instruction to cause the camera device 120 to synchronously perform a second round of multiple exposure at a second preset time interval sequence based on the second exposure control instruction to acquire a second exposure image sequence of the target area.
Wherein the second-round multi-exposure area at least comprises the area of the target area left unexposed by the first-round multiple exposure, so that in three-dimensional reconstruction the complete exposure area of the target area can be obtained through information fusion based on the first exposure image sequence and the second exposure image sequence.
The projection device 110 of the image acquisition system 100 of the present disclosure may include a linear light spot emitter 1101 (e.g., a laser), a galvanometer 1102 (e.g., a laser galvanometer), a control board 1103, and the like, where the linear light spot emitter 1101 is used to emit a line-shaped light spot, and line scanning is implemented in combination with the galvanometer 1102.
The laser galvanometer, i.e. the laser scanner, may consist of an X-Y optical scanning head, an electronic drive amplifier and an optical reflection lens. The present disclosure does not specifically limit the specific structure/type of the laser and the laser galvanometer, and those skilled in the art may use the existing laser, laser galvanometer, control board, and the like, which all fall within the protection scope of the present disclosure.
The camera device 120 of the image acquisition system 100 of the present disclosure may include one camera or may include a plurality of cameras.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including: a memory storing execution instructions; a processor, the processor executing execution instructions stored by the memory, causing the processor to perform the image acquisition method of any one of the embodiments of the present disclosure and/or the three-dimensional reconstruction method of any one of the embodiments of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a readable storage medium having stored therein execution instructions which, when executed by a processor, are to implement the image acquisition method of any one of the embodiments of the present disclosure and/or the three-dimensional reconstruction method of any one of the embodiments of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program/instruction which, when executed by a processor, implements the image acquisition method of any one of the embodiments of the present disclosure and/or the three-dimensional reconstruction method of any one of the embodiments of the present disclosure.
In the description of the present specification, reference to the terms "one embodiment/mode," "some embodiments/modes," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment/mode or example is included in at least one embodiment/mode or example of the present application. In this specification, the schematic representations of the above terms are not necessarily the same embodiments/modes or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/modes or examples. Furthermore, the various embodiments/implementations or examples described in this specification and the features of the various embodiments/implementations or examples may be combined and combined by persons skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless explicitly defined otherwise.
It will be appreciated by those skilled in the art that the above-described embodiments are merely for clarity of illustration of the disclosure, and are not intended to limit the scope of the disclosure. Other variations or modifications will be apparent to persons skilled in the art from the foregoing disclosure, and such variations or modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. An image acquisition method, comprising:
performing a first round of scanning projection on a target area, and synchronously performing a first round of multiple exposures at a first preset time interval sequence to acquire a first exposure image sequence of the target area; and
performing a second round of scanning projection on the target area, and synchronously performing a second round of multiple exposures at a second preset time interval sequence to acquire a second exposure image sequence of the target area;
wherein the area covered by the second round of multiple exposures at least includes an area of the target area left unexposed by the first round of multiple exposures, so that in three-dimensional reconstruction a completely exposed area of the target area is obtained by fusion based on the first exposure image sequence and the second exposure image sequence.
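The two-round capture of claim 1 can be sketched as a toy model. Everything here is illustrative, not from the patent: the target area is modeled as numbered tiles, and `illuminate`, `capture_round`, and the interval sequences are hypothetical names and values chosen only to show why the second round's coverage must include the first round's gaps.

```python
def capture_round(illuminate, offsets):
    """Trigger one exposure at each preset time offset during a scanning
    projection sweep; return the per-exposure images (as tile sets) and
    the union of tiles that received usable exposure."""
    frames, exposed = [], set()
    for t in offsets:
        tiles = illuminate(t)   # tiles lit by the moving pattern at offset t
        frames.append(tiles)    # one exposure image per trigger
        exposed |= tiles
    return frames, exposed

# Toy sweep: at offset t the projected pattern illuminates tile t only.
illuminate = lambda t: {t}

# The first interval sequence leaves gaps; the second is chosen so its
# exposures cover at least the tiles the first round missed.
_, first_exposed = capture_round(illuminate, [0, 2, 4])
_, second_exposed = capture_round(illuminate, [1, 3, 5])

complete_area = first_exposed | second_exposed  # fused coverage of the target
```

Under these assumptions the fused coverage is complete even though neither round alone exposes the whole target, which is the condition the claim states.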
2. The image acquisition method according to claim 1, wherein the scanning projection comprises:
outputting an image of a preset format for a preset duration and moving the projected image across the target area so as to cover the entire target area.
3. The image acquisition method according to claim 1, wherein each exposure of the first round of multiple exposures has the same exposure duration;
optionally, each exposure of the second round of multiple exposures has the same exposure duration;
optionally, at least two exposures of the first round of multiple exposures have different exposure durations;
optionally, at least two exposures of the second round of multiple exposures have different exposure durations;
optionally, the exposure duration of each exposure of the first round of multiple exposures is the same as the exposure duration of each exposure of the second round of multiple exposures.
4. The image acquisition method according to claim 1, wherein the number of exposures in the first round of multiple exposures is the same as or different from the number of exposures in the second round of multiple exposures.
5. A three-dimensional reconstruction method, characterized in that three-dimensional reconstruction is performed based on the first exposure image sequence and the second exposure image sequence of the target area acquired by the image acquisition method according to any one of claims 1 to 4;
optionally, the method comprises:
acquiring a complete exposure image of the completely exposed area of the target area based on the first exposure image sequence and the second exposure image sequence;
acquiring a complete phase map of the target area based on the complete exposure image;
acquiring complete depth information of the target area based on the complete phase map; and
acquiring complete point cloud information of the target area based on the complete depth information;
optionally, acquiring the complete exposure image of the completely exposed area of the target area based on the first exposure image sequence and the second exposure image sequence comprises:
superimposing and/or stitching the first exposure image sequence and the second exposure image sequence to acquire the complete exposure image;
optionally, the method comprises:
acquiring a first phase map based on the first exposure image sequence, and acquiring a second phase map based on the second exposure image sequence;
acquiring a complete phase map of the target area based on the first phase map and the second phase map;
acquiring complete depth information of the target area based on the complete phase map; and
acquiring complete point cloud information of the target area based on the complete depth information;
optionally, acquiring the complete phase map of the target area based on the first phase map and the second phase map comprises:
superimposing and/or stitching the first phase map and the second phase map to acquire the complete phase map;
optionally, the method comprises:
acquiring a first phase map based on the first exposure image sequence, and acquiring a second phase map based on the second exposure image sequence;
acquiring first depth information of the target area based on the first phase map, and acquiring second depth information of the target area based on the second phase map;
acquiring first point cloud information of the target area based on the first depth information, and acquiring second point cloud information of the target area based on the second depth information; and
fusing the first point cloud information and the second point cloud information to acquire complete point cloud information of the target area;
optionally, fusing the first point cloud information and the second point cloud information to acquire the complete point cloud information of the target area comprises:
superimposing and/or stitching the first point cloud information and the second point cloud information to acquire the complete point cloud information;
optionally, the superimposing comprises applying average weighting to the different superimposed sources in an overlap region such that the weights of the different superimposed sources sum to 1.
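The average-weighted superimposing in claim 5 (weights of the overlapping sources summing to 1) might look like the sketch below. The per-pixel lists, validity masks, and the equal 0.5/0.5 split are illustrative assumptions, not the patent's implementation; any weight pair summing to 1 would satisfy the claim wording.

```python
def superimpose(map_a, map_b, mask_a, mask_b):
    """Fuse two per-pixel maps (e.g. depth or phase values). In the overlap
    each source receives equal average weight (0.5 + 0.5 = 1); where only
    one round exposed the pixel, that source passes through unchanged."""
    fused = []
    for a, b, ok_a, ok_b in zip(map_a, map_b, mask_a, mask_b):
        if ok_a and ok_b:
            fused.append(0.5 * a + 0.5 * b)  # source weights sum to 1
        elif ok_a:
            fused.append(a)
        elif ok_b:
            fused.append(b)
        else:
            fused.append(None)               # unexposed in both rounds
    return fused

# Pixel 0 only exposed in round 1, pixel 2 only in round 2, pixel 1 in both.
depth = superimpose([1.0, 2.0, 0.0], [0.0, 2.4, 3.0],
                    [True, True, False], [False, True, True])
```

The same function applies unchanged whether the superimposed objects are exposure images, phase maps, or point-cloud depth values, which is why the claim lists the weighting once for all three variants.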
6. An image acquisition device, comprising:
a projection control instruction generation module configured to generate a first projection control instruction so as to perform a first round of scanning projection on a target area based on the first projection control instruction, and further configured to generate a second projection control instruction so as to perform a second round of scanning projection on the target area based on the second projection control instruction; and
an exposure control instruction generation module configured to generate a first exposure control instruction so as to synchronously perform a first round of multiple exposures at a first preset time interval sequence based on the first exposure control instruction to acquire a first exposure image sequence of the target area, and further configured to generate a second exposure control instruction so as to synchronously perform a second round of multiple exposures at a second preset time interval sequence based on the second exposure control instruction to acquire a second exposure image sequence of the target area;
wherein the area covered by the second round of multiple exposures at least includes an area of the target area left unexposed by the first round of multiple exposures, so that in three-dimensional reconstruction a completely exposed area of the target area is obtained by information fusion based on the first exposure image sequence and the second exposure image sequence.
7. An image acquisition system, comprising:
a projection device;
a camera device;
a projection control instruction generation module configured to generate a first projection control instruction so that the projection device performs a first round of scanning projection on a target area based on the first projection control instruction, and further configured to generate a second projection control instruction so that the projection device performs a second round of scanning projection on the target area based on the second projection control instruction; and
an exposure control instruction generation module configured to generate a first exposure control instruction so that the camera device synchronously performs a first round of multiple exposures at a first preset time interval sequence based on the first exposure control instruction to acquire a first exposure image sequence of the target area, and further configured to generate a second exposure control instruction so that the camera device synchronously performs a second round of multiple exposures at a second preset time interval sequence based on the second exposure control instruction to acquire a second exposure image sequence of the target area;
wherein the area covered by the second round of multiple exposures at least includes an area of the target area left unexposed by the first round of multiple exposures, so that in three-dimensional reconstruction a completely exposed area of the target area is obtained by information fusion based on the first exposure image sequence and the second exposure image sequence;
optionally, the projection device comprises a line light spot emitter and a galvanometer;
optionally, the camera device comprises one camera or a plurality of cameras.
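As a sketch of how the two instruction-generation modules of claim 7 could synchronize the camera with the projector's sweep: exposure triggers are laid out along a preset interval sequence inside each sweep window. All timing values and names below are hypothetical, chosen only for illustration; integer microseconds keep the arithmetic exact.

```python
def trigger_schedule(sweep_start_us, interval_sequence_us):
    """Place exposure triggers inside one projection sweep window: each
    trigger fires at the previous trigger time plus the next preset
    interval from the sequence. Times are integer microseconds."""
    times, t = [], sweep_start_us
    for dt in interval_sequence_us:
        t += dt
        times.append(t)
    return times

# First sweep starts at t=0 with a uniform interval sequence; the second
# sweep starts later and uses a different sequence (the claims permit the
# two sequences, and the two exposure counts, to differ).
round1_triggers = trigger_schedule(0, [2000, 2000, 2000])
round2_triggers = trigger_schedule(10_000, [1000, 3000, 2000])
```

Shifting the second sequence relative to the first is one way the second round's exposures can land on regions of the sweep the first round left unexposed.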
8. An electronic device, comprising:
a memory storing execution instructions; and
a processor that executes the execution instructions stored by the memory, the execution instructions causing the processor to perform the image acquisition method of any one of claims 1 to 4 and/or the three-dimensional reconstruction method of claim 5.
9. A readable storage medium, characterized in that the readable storage medium has stored therein execution instructions which, when executed by a processor, implement the image acquisition method of any one of claims 1 to 4 and/or the three-dimensional reconstruction method of claim 5.
10. A computer program product, comprising a computer program/instructions which, when executed by a processor, implement the image acquisition method of any one of claims 1 to 4 and/or the three-dimensional reconstruction method of claim 5.
CN202211466415.9A 2022-11-22 2022-11-22 Image acquisition method and three-dimensional reconstruction method Active CN116485660B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211466415.9A CN116485660B (en) 2022-11-22 2022-11-22 Image acquisition method and three-dimensional reconstruction method
PCT/CN2023/132826 WO2024109719A1 (en) 2022-11-22 2023-11-21 Image acquisition method, three-dimensional reconstruction method, and image acquisition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211466415.9A CN116485660B (en) 2022-11-22 2022-11-22 Image acquisition method and three-dimensional reconstruction method

Publications (2)

Publication Number Publication Date
CN116485660A true CN116485660A (en) 2023-07-25
CN116485660B CN116485660B (en) 2023-11-17

Family

ID=87210732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211466415.9A Active CN116485660B (en) 2022-11-22 2022-11-22 Image acquisition method and three-dimensional reconstruction method

Country Status (2)

Country Link
CN (1) CN116485660B (en)
WO (1) WO2024109719A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024109719A1 (en) * 2022-11-22 2024-05-30 北京迁移科技有限公司 Image acquisition method, three-dimensional reconstruction method, and image acquisition system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101816171A (en) * 2007-10-03 2010-08-25 诺基亚公司 Multi-exposure pattern for enhancing dynamic range of images
CN105068384A (en) * 2015-08-12 2015-11-18 杭州思看科技有限公司 Method for controlling exposure time of laser projectors of handheld three-dimensional laser scanner
CN105825219A (en) * 2016-05-10 2016-08-03 梁伟棠 Machine vision detection system
CN107894215A (en) * 2017-12-26 2018-04-10 东南大学 HDR optical grating projection method for three-dimensional measurement based on fully automatic exposure
CN108900782A (en) * 2018-08-22 2018-11-27 Oppo广东移动通信有限公司 Exposal control method, device and electronic equipment
CN110312056A (en) * 2019-06-10 2019-10-08 青岛小鸟看看科技有限公司 A kind of synchronous exposure method and image capture device
WO2020238806A1 (en) * 2019-05-31 2020-12-03 杭州海康威视数字技术股份有限公司 Image collection apparatus and photography method
CN113237435A (en) * 2021-05-08 2021-08-10 北京航空航天大学 High-light-reflection surface three-dimensional vision measurement system and method
CN113358063A (en) * 2021-06-04 2021-09-07 华中科技大学 Surface structured light three-dimensional measurement method and system based on phase weighted fusion
CN113496542A (en) * 2020-03-20 2021-10-12 先临三维科技股份有限公司 Multi-exposure image modeling method and device, computer equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6332623B2 (en) * 2014-06-05 2018-05-30 株式会社リコー Image forming apparatus and image forming method
CN110301924B (en) * 2019-07-08 2023-05-30 东软医疗系统股份有限公司 Method, device and equipment for processing image
CN112648935A (en) * 2020-12-14 2021-04-13 杭州思锐迪科技有限公司 Image processing method and device and three-dimensional scanning system
CN114820812A (en) * 2022-04-18 2022-07-29 南京航空航天大学 High-light-reflection surface three-dimensional reconstruction algorithm based on structured light
CN116485660B (en) * 2022-11-22 2023-11-17 北京迁移科技有限公司 Image acquisition method and three-dimensional reconstruction method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHAN SONG, ET AL: "A high dynamic range structured light means for the 3D measurement of specular surface", Optics and Lasers in Engineering, vol. 95, pages 8-16, XP029990535, DOI: 10.1016/j.optlaseng.2017.03.008 *
LIU Guihua: "Research on Key Technologies of 3D Optical Measurement of Large Complex Curved Surfaces Based on Computer Vision", China Doctoral Dissertations Full-text Database, Information Science and Technology Series, vol. 2015, no. 03, pages 138-23 *
CHEN Peng: "Research on 3D Optical Profile Measurement with Line Laser Arrays", China Master's Theses Full-text Database, Basic Sciences Series, vol. 2020, no. 01, pages 005-361 *


Also Published As

Publication number Publication date
CN116485660B (en) 2023-11-17
WO2024109719A1 (en) 2024-05-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant