CN115037866A - Imaging method, device, equipment and storage medium
- Publication number: CN115037866A
- Application number: CN202110235434.XA
- Authority: CN (China)
- Prior art keywords: dimensional coordinate, shooting, coordinate system, measured object, standard
- Legal status: Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/02—Details
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
Abstract
The embodiments of the present application relate to the field of communications and disclose an imaging method, apparatus, device, and storage medium. They address three problems with existing 3D modeling: it is only available on terminal devices with an implanted Lidar module; Lidar-based 3D modeling is strongly affected by ambient brightness; and imaging accuracy is low because no fixed three-dimensional coordinate system serves as a reference. In the present application, an imaging method is applied to an image pickup device provided with a spatial perception component. The method includes: determining the position information of each spatial perception label within the coverage of the spatial perception component, where the labels are attached in advance to the outer surface of a measured object, at least three labels are provided, and no two labels overlap in position; constructing a standard three-dimensional coordinate system according to the position information; shooting the measured object to obtain picture material; and imaging the measured object according to the standard three-dimensional coordinate system and the picture material.
Description
Technical Field
The present disclosure relates to the field of communications, and in particular, to an imaging method, an imaging apparatus, an imaging device, and a storage medium.
Background
With the advent of Virtual Reality (VR) and Augmented Reality (AR) technologies, suppliers are racing to capture the AR/VR market and attract more users. Currently, some smart mobile device suppliers implant a laser radar module (Lidar Module) into certain smart mobile devices, realizing 3D digital photography of a displayed object or scene; a smart mobile device with an implanted Lidar module can thus be applied to AR scene reconstruction and VR object scanning.
At present, when a smart mobile device with an embedded Lidar module performs 3D digital shooting, the laser projection measurement system implemented on the Lidar module controls the module to project laser spots onto the measured object, and the distance between the camera and the measured object is computed from the direct time of flight, so as to measure and image.
However, the current Lidar module measurement imaging method has at least the following disadvantages:
(1) the three-dimensional scanning effect is good in a dark environment, but in a bright environment the light-spot projection calculation is degraded by interference from bright light, so the final measurement and imaging results are not accurate enough;
(2) no standard three-dimensional coordinate system is established between the smart mobile device and the measured object, so the dimensional error of the 3D model scanned by the smart mobile device is large;
(3) when the object 3D scanning function is used on its own, the smart mobile device adopts a laser projection measurement system, so the ranging between the label on the measured object and the camera is a dynamic test process; the 3D dimensional errors accumulated during this process cannot be reduced by mean-value fitting over multiple measurements, so the conditions for producing a refined scanned 3D model are not met.
Therefore, it is desirable to provide an imaging method that solves the above technical problems while enabling smart mobile devices without a Lidar module to perform imaging, so that they can be applied to AR/VR scenarios.
Disclosure of Invention
An object of the embodiments of the present application is to provide an imaging method, an imaging apparatus, an imaging device, and a storage medium, which are used to solve the above technical problems.
In order to solve the above technical problem, an embodiment of the present application provides an imaging method applied to an image pickup apparatus provided with a spatial sensing component, where the method includes:
determining the position information of each space perception label in the coverage range of the space perception component, wherein the space perception labels are arranged on the outer surface of a measured object in advance, at least three space perception labels are arranged on the outer surface of the measured object, and the positions of any two space perception labels are not overlapped;
constructing a standard three-dimensional coordinate system according to the position information;
shooting the measured object to obtain a picture material;
and imaging the measured object according to the standard three-dimensional coordinate system and the picture material.
To achieve the above object, an embodiment of the present application further provides an imaging apparatus, including:
the determining module is used for determining the position information of each spatial perception label within the coverage of the spatial perception component, where the labels are attached in advance to the outer surface of a measured object, at least three labels are provided, and no two labels overlap in position;
the construction module is used for constructing a standard three-dimensional coordinate system according to the position information;
the shooting module is used for shooting the measured object to obtain a picture material;
and the imaging module is used for imaging the measured object according to the standard three-dimensional coordinate system and the picture material.
To achieve the above object, an embodiment of the present application further provides an imaging apparatus, including:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the imaging method as described above.
In order to achieve the above object, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the imaging method described above.
According to the imaging method, the imaging device, the imaging equipment and the storage medium, the fixed and unchangeable standard three-dimensional coordinate system is constructed based on the position information of the space sensing label fixed on the object to be measured, so that the object to be measured is imaged according to the constructed standard three-dimensional coordinate system and the picture material obtained by shooting, the process of dynamic shooting and ranging imaging is changed into relatively static operation, and the accuracy of a ranging result and an imaging result is greatly improved.
In addition, the imaging method, the imaging device, the imaging equipment and the storage medium provided by the application change the existing Lidar module-based ranging imaging into the space sensing component-based or space sensing label-based ranging imaging, and the whole imaging process does not need to depend on light spot projection, so that the interference of ambient brightness is avoided, and the accuracy of a measurement result and an imaging result is further ensured.
In addition, the imaging method, apparatus, device, and storage medium provided by the present application require no Lidar module to be implanted in the terminal device, nor do they require the spatial perception component or label to be built into the terminal device; the component or label can be externally attached. Ranging imaging can therefore be realized by attaching an external spatial perception label directly to an existing terminal device that has neither a Lidar module nor a spatial perception component, which greatly lowers the hardware threshold of the 3D modeling function and facilitates the popularization of 3D acquisition.
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
FIG. 1 is a flow chart of an imaging method provided in a first embodiment of the present application;
FIG. 2 is a schematic diagram of a standard three-dimensional coordinate system constructed based on step 102 of the imaging method of FIG. 1;
fig. 3 is a schematic diagram of a photographing position determined based on step 102 in the imaging method of fig. 1;
FIG. 4 is a schematic illustration of imaging a subject based on step 104 of the imaging method of FIG. 1;
FIG. 5 is a flow chart of an imaging method provided by a second embodiment of the present application;
fig. 6 is a schematic structural view of an image forming apparatus provided in a third embodiment of the present application;
fig. 7 is a schematic structural diagram of an image forming apparatus according to a fourth embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the various embodiments to provide a better understanding of the present application; however, the technical solutions claimed herein can be implemented without these details, and with various changes and modifications based on the following embodiments. The division into embodiments below is only for convenience of description, should not limit the specific implementation of the present application, and the embodiments may be combined and cross-referenced where not contradictory.
A first embodiment of the present application relates to an imaging method applied to an image pickup apparatus provided with a spatial perception component.
Specifically, the existing distance measurement and imaging modes using a laser projection measurement system suffer heavy interference from ambient brightness: the three-dimensional scanning effect is good in a dark environment, but in a bright environment the light-spot projection calculation is degraded by bright light. Meanwhile, the laser projection measurement system relies on a Lidar module, which in general must be implanted into the image pickup device in advance.
However, most of the image capturing apparatuses on the market currently do not have a Lidar module embedded therein, which results in that the three-dimensional scanning imaging operation must be implemented only by the image capturing apparatus with the Lidar module embedded therein, and a common personal user who does not have the image capturing apparatus with the Lidar module embedded therein cannot use the existing apparatus to implement the function.
In addition, the cost of implanting a Lidar module in an imaging device is also relatively high. Therefore, the conventional three-dimensional scanning imaging method has poor effect and cannot be popularized among ordinary individual users.
Based on this, the imaging method provided by this embodiment uses low-cost, widely available spatial perception components, such as millimeter-wave, ultrasonic, and Ultra Wide Band (UWB) ranging components, which offer strong penetration, low power consumption, strong anti-interference capability, high security, large spatial capacity, and accurate positioning. 3D modeling can thus be completed on ordinary camera-equipped terminal devices such as mobile phones and tablet computers, without implanting a power-hungry and costly Lidar module; this lowers the hardware threshold of 3D modeling as much as possible, so that 3D acquisition can be popularized among ordinary personal users.
Furthermore, it can be understood that the spatial perception components listed above can, in practice, be made very thin. Therefore, the spatial perception component need not be integrated inside the image pickup device for imaging; it can instead be attached directly to the outer shell of the device as a label. This further lowers the hardware threshold of 3D modeling, so that image pickup devices without a built-in spatial perception component can realize 3D acquisition via an external spatial perception label; such labels are already on the market.
The following describes implementation details of the imaging method of the present embodiment, and the following is provided only for ease of understanding and is not necessary for implementing the present embodiment.
For convenience of explanation, this embodiment is described taking as an example the application of the imaging method to an image pickup device provided with a UWB ranging component, such as a mobile phone.
The specific flow of this embodiment is shown in fig. 1 and includes the following steps:
Step 101: determining the position information of each spatial perception label within the coverage of the spatial perception component.
Specifically, in this embodiment, each spatial perception label within the coverage determined by the spatial perception component is a label of the same type attached in advance to the outer surface of the measured object.
As described above, the imaging method of this embodiment is described as applied to a mobile phone provided with a UWB ranging component, so the spatial perception labels attached in advance to the outer surface of the measured object are UWB ranging labels.
In addition, since at least three specific coordinates are required to construct a three-dimensional coordinate system, in this embodiment at least three spatial perception labels are arranged in advance on the outer surface of the measured object, and no two of them overlap in position.
In addition, it can be understood that, in practical applications, after the spatial perception component and the spatial perception labels are powered on, they broadcast pulse signals outwards according to a preset period. Therefore, step 101 of determining the position information of each spatial perception label within the coverage of the spatial perception component specifically follows the flow: sending a broadcast (broadcasting a pulse signal outward) -> ranging (determining the distance between any two spatial perception labels/components) -> positioning (determining the position information).
For convenience of understanding, the following describes the process of determining the location information of each of the spatially aware labels within the coverage area of the spatially aware component:
(1) and determining the number of the space perception labels within the coverage range of the space perception component.
Specifically, a pulse signal is first broadcast outwards by the spatial perception component according to the preset period, and each spatial perception label within the pulse-signal coverage is searched for; then, the identification information pre-assigned to each spatial perception label is extracted from the received pulse-signal response data packets; finally, the extracted identification information is de-duplicated and the de-duplicated identification information is counted, yielding the number of spatial perception labels within the coverage of the spatial perception component.
It can be understood that the pulse signal response data packet in this embodiment specifically refers to a pulse signal response data packet made for a received pulse signal after the spatial sensing tag attached to the outer surface of the object to be tested receives the pulse signal of the spatial sensing component according to the preset period.
Take as an example a mobile phone as the image pickup device, whose spatial perception component is a UWB ranging label (for ease of distinction, the UWB ranging label arranged on the mobile phone is called the first UWB ranging label), and a hexahedron as the measured object; to ensure the imaging result, a UWB ranging label is attached in advance to each outer surface of the hexahedron (for ease of distinction, these are called second UWB ranging labels).
After the first and second UWB ranging labels are powered on, each broadcasts a UWB signal according to the preset period. To distinguish which second UWB ranging label a received UWB signal comes from, each second UWB ranging label can be assigned in advance an identification number that uniquely marks it. After receiving the UWB signals broadcast by the second UWB ranging labels, the first UWB ranging label extracts the corresponding identification numbers, filters the second UWB ranging labels by identification number, and tallies the count, thereby determining that the number of second UWB ranging labels within its coverage is 6.
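For ease of understanding, the following Python sketch illustrates this counting step under the assumption that each response packet exposes its pre-assigned identifier in a `tag_id` field; the field name and packet structure are illustrative and not specified by this embodiment.

```python
# Sketch of counting spatial perception labels in coverage by
# de-duplicating the identifiers carried in pulse-signal response packets.
# The packet structure (dicts with a "tag_id" field) is an assumption
# made for illustration; the embodiment does not specify a wire format.

def count_tags_in_coverage(response_packets):
    """Return the number of distinct spatial perception labels that answered."""
    seen_ids = set()
    for packet in response_packets:
        seen_ids.add(packet["tag_id"])   # de-duplicate repeated responses
    return len(seen_ids)

# Example: six labels on a hexahedron, some answering more than once.
packets = [{"tag_id": i} for i in range(6)] + [{"tag_id": 0}, {"tag_id": 3}]
assert count_tags_in_coverage(packets) == 6
```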
(2) According to the spatial perception principle, a first distance between the spatial perception component and each spatial perception label and a second distance between every two spatial perception labels are determined.
As described above, in practical applications the spatial perception component on the image pickup device and the spatial perception labels on the measured object may be of millimeter-wave, ultrasonic, or UWB type; therefore, when determining the first distance and the second distance, the corresponding spatial perception principle is selected for ranging according to the type actually deployed.
Again taking the spatial perception component arranged on the image pickup device as the first UWB ranging label, and the spatial perception labels arranged on the outer surface of the measured object as second UWB ranging labels, the first distance D between the first UWB ranging label and any second UWB ranging label on the measured object can be expressed by the following formula:

D = C × [(B1R1 − B1T1) − (B2T1 − B2R1) + (B2R2 − B2T1) − (B1T2 − B1R1)] / 4

wherein B1 denotes the first UWB ranging label and B2 denotes the second UWB ranging label; B1R1 denotes the time of the ranging packet first received by the first UWB ranging label from the second UWB ranging label, i.e., the ranging packet sent by the second UWB ranging label to the first UWB ranging label at time T1; B1T1 denotes the time of the ranging packet sent by the first UWB ranging label to the second UWB ranging label at time T1; B2T1 denotes the time of the ranging packet sent by the second UWB ranging label to the first UWB ranging label at time T1; and B2R1 denotes the time of the ranging packet first received by the second UWB ranging label from the first UWB ranging label, i.e., the ranging packet sent by the first UWB ranging label to the second UWB ranging label at time T1.
Accordingly, B2R2 denotes the time of the ranging packet received for the second time by the second UWB ranging label from the first UWB ranging label, i.e., the ranging packet sent by the first UWB ranging label to the second UWB ranging label at time T2, and B1T2 denotes the time of the ranging packet sent by the first UWB ranging label to the second UWB ranging label at time T2.
Further, C denotes the propagation speed of the ranging packets exchanged between the first and second UWB ranging labels.
In addition, the divisor 4 reflects the fact that 4 transmissions of ranging packets take place between the first UWB ranging label and the second UWB ranging label.
It will be appreciated that, in practice, the second distance between any two second UWB ranging labels can also be calculated with the above formula.
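The following Python sketch illustrates this calculation; it uses the symmetric double-sided two-way-ranging interpretation of the formula above, and the timestamp values in the example are illustrative.

```python
# Sketch of the two-way ranging computation described above, using the
# symmetric double-sided interpretation of the variable definitions
# (an interpretation for illustration, not a verbatim copy of the patent).

C = 299_792_458.0  # propagation speed of the UWB ranging packets, m/s

def twr_distance(b1_t1, b1_r1, b2_r1, b2_t1, b2_r2, b1_t2):
    """Distance between the first and second UWB ranging labels.

    b1_t1: time the first label sends packet 1 (first label's clock)
    b1_r1: time the first label receives the second label's reply
    b2_r1: time the second label receives packet 1 (second label's clock)
    b2_t1: time the second label sends its reply
    b2_r2: time the second label receives packet 2
    b1_t2: time the first label sends packet 2
    """
    round1 = b1_r1 - b1_t1          # first round trip, first label's clock
    reply1 = b2_t1 - b2_r1          # second label's turnaround delay
    round2 = b2_r2 - b2_t1          # second round trip, second label's clock
    reply2 = b1_t2 - b1_r1          # first label's turnaround delay
    tof = (round1 - reply1 + round2 - reply2) / 4.0  # 4 packet transmissions
    return C * tof

# A label 3 m away: one-way flight time is ~10 ns.
tof = 3.0 / C
print(twr_distance(0.0, 1e-6 + 2 * tof, tof, 1e-6 + tof,
                   2e-6 + 3 * tof, 2e-6 + 2 * tof))  # ~= 3.0
```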
(3) And determining the position information of each space perception label according to the first distance and the second distance.
Specifically, because the deployed spatial perception components/labels vary in type, in practical applications a corresponding positioning algorithm can be selected according to the specific type to determine the position information of each spatial perception label.
For example, for UWB ranging labels, the position information of each second UWB ranging label on the measured object, specifically its coordinate point in three-dimensional space, can be determined with typical ultra-wideband positioning algorithms such as the RSSI (Received Signal Strength Indication) algorithm based on received signal strength, the AOA (Angle of Arrival) algorithm based on the signal's angle of arrival, the TOA (Time of Arrival) algorithm based on signal arrival time, and the TDOA (Time Difference of Arrival) algorithm based on the difference in signal arrival times.
For the specific use of the above positioning methods, those skilled in the art can refer to relevant data by themselves to realize the positioning, and details are not described here.
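As one illustrative possibility, the sketch below solves the TOA-style case by linear least squares, recovering a label's coordinates from its measured distances to known reference points; the anchor layout and the use of least squares are assumptions, since this embodiment leaves the choice of positioning algorithm open.

```python
# Sketch of the positioning step: recover a label's 3-D coordinates from
# measured distances to several known reference points (a TOA-style
# multilateration solved by linear least squares).
import numpy as np

def locate_tag(anchors, distances):
    """anchors: (n, 3) known positions; distances: (n,) measured ranges."""
    a0, d0 = anchors[0], distances[0]
    # Subtracting the first sphere equation from the others linearizes
    # the system: 2*(ai - a0) @ x = d0^2 - di^2 + |ai|^2 - |a0|^2
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - distances[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = np.array([[0., 0., 0.], [4., 0., 0.], [0., 4., 0.], [0., 0., 4.]])
true_pos = np.array([1.0, 2.0, 0.5])
dists = np.linalg.norm(anchors - true_pos, axis=1)
print(locate_tag(anchors, dists))  # ~= [1.0, 2.0, 0.5]
```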
On this basis, the position information of all spatial perception labels within the coverage of the spatial perception component can be determined.
In addition, it can be understood that, in practical applications, measured objects differ in shape. To restore the actual shape of the measured object as faithfully as possible, for a polyhedral measured object it is necessary to ensure that at least one spatial perception label is attached to each outer surface, while for an irregular measured object, or a spherical measured object having only one outer surface, at least three spatial perception labels are attached to the outer surface at preset intervals.
In addition, the pattern of a spatial perception label, whether provided on the image pickup device or on the measured object, includes but is not limited to: Parallel Lines, Single Circle, Dot Matrix, Cross Hair, Concentric Circles, and Dots patterns.
It should be understood that the above examples are only examples for better understanding of the technical solution of the present embodiment, and are not to be taken as the only limitation to the present embodiment.
Step 102: constructing a standard three-dimensional coordinate system according to the position information.
It can be understood that since the object to be measured, which is to be three-dimensionally imaged, is generally stationary, the position information of the spatial perception label, i.e., the coordinates in the three-dimensional space, provided on the outer surface of the object to be measured is fixed. Therefore, the three-dimensional coordinate system is constructed according to the fixed position information, so that a stable and unchangeable standard three-dimensional coordinate system can be obtained, and the measured object can be more accurately and really restored through subsequent imaging based on the standard three-dimensional coordinate system.
In addition, it can be understood that, since a three-dimensional coordinate system is composed of an X axis, a Y axis, and a Z axis, in practical applications the position information of at least three spatial perception labels is required to construct a standard three-dimensional coordinate system. That is, when constructing the standard three-dimensional coordinate system according to the position information, at least three spatial perception labels are first selected from all those arranged on the measured object as position reference labels, and the standard three-dimensional coordinate system is then constructed according to the position information of the selected position reference labels.
For convenience of understanding, the embodiment takes the reference labels of the selected positions as three examples, and is specifically described with reference to fig. 2:
As shown in FIG. 2, A, B, and C are the selected position reference labels, and A′, B′, and C′ are the projections of A, B, and C onto the ground plane; based on A, B, C, A′, B′, and C′, a three-dimensional coordinate system can be constructed. After the three-dimensional coordinate system is obtained, the coordinate axis perpendicular to the ground plane is selected as the Z axis, and the remaining two coordinate axes are designated as the X axis and the Y axis, yielding the standard three-dimensional coordinate system.
As shown in fig. 2, the axis through the position reference label A is the Z axis, the axis through the position reference label C is the X axis, and the axis through the projection B′ corresponding to the position reference label B is the Y axis.
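The following sketch shows one way to realize such a construction in code, assuming the ground plane is horizontal (Z up) and using A's projection A′ as the origin; the convention for resolving the Y-axis sign via B′ is omitted for brevity, so the axis assignments are illustrative rather than prescribed by this embodiment.

```python
# A minimal sketch of building a fixed "standard" frame from two of the
# reference labels, paraphrasing (not reproducing exactly) the Fig. 2
# construction via ground-plane projections.
import numpy as np

def standard_frame(a, c):
    """Right-handed frame: Z vertical, X toward C's projection, origin at A'.

    The third reference label B (via its projection B') would fix the sign
    of the Y axis; here the right-handed choice y = z x x is taken directly.
    """
    a, c = np.asarray(a, float), np.asarray(c, float)
    z = np.array([0.0, 0.0, 1.0])          # perpendicular to the ground plane
    x = c - a
    x[2] = 0.0                             # project the X direction onto the ground
    x /= np.linalg.norm(x)                 # assumes A and C are not vertically aligned
    y = np.cross(z, x)                     # completes the right-handed triad
    origin = np.array([a[0], a[1], 0.0])   # A's ground-plane projection A'
    R = np.stack([x, y, z], axis=1)        # columns are the axis directions
    return origin, R

origin, R = standard_frame([0.0, 0.0, 2.0], [4.0, 0.0, 0.0])
p_std = R.T @ (np.array([4.0, 0.0, 1.0]) - origin)   # world -> standard coords
print(p_std)   # [4. 0. 1.]
```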
It should be understood that the above examples are only examples for better understanding of the technical solution of the present embodiment, and are not intended to be the only limitations of the present embodiment.
In addition, it is worth mentioning that the accuracy of the constructed standard three-dimensional coordinate system should be improved as much as possible so that the final imaging can accurately restore the measured object. In practical applications, if the number of selected position reference labels is greater than 3, the following method is adopted when constructing the standard three-dimensional coordinate system:
(1) and respectively constructing a reference three-dimensional coordinate system by using the position information of every three position reference labels.
Suppose there are 4 position reference labels: A, B, C, and D. If a reference three-dimensional coordinate system is constructed from the position information of every three position reference labels, the following four reference three-dimensional coordinate systems can be obtained:
the first one is: A. b and C;
secondly, the following steps: A. b and D;
thirdly, the step of: A. c and D;
fourthly: B. c and D.
It is understood that, in practical applications, the above ordering implies no precedence.
(2) And selecting one of the obtained reference three-dimensional coordinate systems as an initial standard three-dimensional coordinate system, and using the rest reference three-dimensional coordinate systems as calibration three-dimensional coordinate systems.
Assuming that the first reference three-dimensional coordinate system is selected from the four reference three-dimensional coordinate systems as the initial standard three-dimensional coordinate system, the remaining three reference three-dimensional coordinate systems are the calibration three-dimensional coordinate systems.
(3) And calibrating the initial standard three-dimensional coordinate system by using the calibration three-dimensional coordinate system to obtain the standard three-dimensional coordinate system.
That is, according to the three calibration three-dimensional coordinate systems, the initial standard three-dimensional coordinate system is repeatedly aligned and angle-adjusted, and finally a stable, unchanging standard three-dimensional coordinate system is obtained.
Based on this method, the standard three-dimensional coordinate system is obtained by calibrating and aligning the three-dimensional coordinate systems constructed from the position information of any three position reference labels, so that subsequent imaging based on this coordinate system can restore the actual state of the measured object more truly.
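One concrete way to realize this calibration is sketched below: the candidate rotations from each three-label combination are averaged and re-orthonormalized with an SVD. The averaging scheme is an illustrative choice, since this embodiment only prescribes repeated alignment and angle adjustment.

```python
# Sketch of fusing the candidate frames (e.g. from combinations (A,B,C),
# (A,B,D), (A,C,D), (B,C,D)) into one standard frame by averaging and
# projecting back onto the rotation group with an SVD.
import numpy as np

def fuse_frames(rotations):
    """rotations: list of 3x3 candidate rotation matrices."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)       # nearest orthonormal matrix to M
    R = U @ Vt
    if np.linalg.det(R) < 0:          # keep the frame right-handed
        U[:, -1] = -U[:, -1]
        R = U @ Vt
    return R

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Four slightly disagreeing candidate frames, fused into one.
candidates = [rot_z(0.10), rot_z(0.12), rot_z(0.09), rot_z(0.11)]
print(fuse_frames(candidates))   # ~= rot_z(0.105)
```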
In addition, in practical applications, to ensure that the image pickup device can shoot every area of the outer surface of the measured object, after the standard three-dimensional coordinate system is obtained, a number of shooting positions corresponding to different areas of the measured object can be determined from the standard three-dimensional coordinate system and the position information of the three position reference labels used to construct it. In this way, when the image pickup device is at each shooting position, it can capture picture material for each area of the outer surface of the measured object, and the measured object can finally be three-dimensionally imaged completely and accurately based on the standard three-dimensional coordinate system and the obtained picture material.
Further, it is understandable that a shooting position is a position from which the image pickup device ultimately shoots the measured object. To restore the measured object as faithfully as possible, the determined shooting positions must be positions from which the image pickup device can shoot the feature information of the measured object as clearly as possible.
Taking the spatial perception component/label as a UWB ranging label as an example, it can be seen from the above description that the label may carry a Parallel Lines, Single Circle, Dot Matrix, Cross Hair, Concentric Circles, or Dots pattern; the shooting positions can therefore be chosen so that, when the image pickup device is at each position, it can clearly capture the UWB ranging labels on the face of the measured object corresponding to that position.
In addition, it can be understood that, because the measured object is usually three-dimensional, in order to ensure that the final 3D imaging can restore it as truly as possible, more than one shooting position is usually determined; that is, shooting positions can be set all around the measured object.
For better understanding, the manner of determining the shooting position is described below in conjunction with fig. 3:
specifically, in the present embodiment, the determined shooting position is determined from the constructed standard three-dimensional coordinate system and the position information of the at least three position reference tags used in constructing the standard three-dimensional coordinate system.
As shown in fig. 3, when the selected position reference labels are still three spatially-perceived coordinates A, B and C, three groups may be included in the determination of the shooting position.
Assuming the shooting positions are determined at the horizontal levels shown in fig. 3, the Z-axis coordinates of the position reference labels A, B, and C are taken as the three horizontal levels, yielding three shooting tracks (the 3 horizontal shooting tracks in fig. 3). In practical applications, three shooting tracks may instead be constructed along the Y axis or the X axis, or a preset number of shooting tracks may be constructed along the X, Y, and Z axes together; these variants are not listed one by one here, and this embodiment is not limited in this respect.
Then, each shooting track is divided at preset intervals, so that a plurality of shooting positions can be obtained.
As shown in fig. 3, a plurality of shooting positions are respectively determined on the first shooting trajectory, the second shooting trajectory, and the third shooting trajectory.
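A minimal sketch of this division step follows, assuming circular horizontal tracks centered on the Z axis; the radius, track heights, and division count are illustrative values.

```python
# Sketch of turning the three horizontal shooting tracks of Fig. 3 into
# discrete shooting positions: each track is a circle around the Z axis
# at a reference label's height, divided at a preset angular interval.
import numpy as np

def shooting_positions(track_heights, radius, n_per_track):
    positions = []
    for z in track_heights:                       # one track per height
        for k in range(n_per_track):              # preset division interval
            theta = 2.0 * np.pi * k / n_per_track
            positions.append((radius * np.cos(theta),
                              radius * np.sin(theta), z))
    return positions

# Tracks at the Z-heights of reference labels A, B and C (assumed values):
pts = shooting_positions(track_heights=[0.4, 0.9, 1.5], radius=2.0,
                         n_per_track=8)
print(len(pts))   # 24 shooting positions
```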
It should be understood that the above examples are only examples for better understanding of the technical solution of the present embodiment, and are not to be taken as the only limitation to the present embodiment.
Step 103: shooting the measured object to obtain picture material.
Specifically, as can be seen from step 102, a plurality of shooting positions are finally determined, so shooting the measured object according to the shooting positions essentially means moving the image pickup device to each shooting position, shooting the measured object there, and thereby obtaining the picture material corresponding to each shooting position.
Furthermore, it can be understood that, in practical applications, even at the same shooting position, picture material shot at different camera angles, focal lengths, and so on may differ. Therefore, to ensure that the obtained picture material reflects as clearly as possible the spatial perception labels arranged on the surface of the measured object facing each shooting position, when shooting the measured object the position information of the spatial perception labels corresponding to each shooting position is determined first; the shooting angle is then adjusted according to each shooting position and the corresponding label position information, and the measured object is shot at each adjusted angle to obtain a plurality of picture materials.
Understandably, the spatial perception labels are preset on the outer surface of the measured object. Therefore, determining the position information of the spatial perception labels corresponding to each shooting position specifically means determining the outer surface of the measured object facing each shooting position, and then determining the position information of the spatial perception labels on that outer surface.
In addition, in practical applications, to make user operation as convenient as possible, a visual interactive interface can be provided: the constructed standard three-dimensional coordinate system is displayed in the interface, the longitude-latitude intersections of a virtual sphere built on the standard three-dimensional coordinate system are taken as shooting positions, and the interface then guides the image pickup device to move to each shooting position and aligns the camera's central normal with the center of the virtual sphere before shooting, so that the picture material is obtained.
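The sketch below illustrates this virtual-sphere plan: shooting positions at the latitude/longitude intersections, each paired with the aiming direction that points the camera's central normal at the sphere's center. The sphere radius and grid density are illustrative assumptions.

```python
# Sketch of the visual-guidance variant: shooting positions at the
# latitude/longitude intersections of a virtual sphere around the
# measured object, with the camera aimed at the sphere's center.
import numpy as np

def sphere_shot_plan(center, radius, n_lat=4, n_lon=8):
    plan = []
    for i in range(1, n_lat + 1):                 # skip the poles
        phi = np.pi * i / (n_lat + 1)             # polar angle
        for j in range(n_lon):
            theta = 2.0 * np.pi * j / n_lon       # azimuth
            pos = center + radius * np.array([
                np.sin(phi) * np.cos(theta),
                np.sin(phi) * np.sin(theta),
                np.cos(phi)])
            aim = (center - pos) / radius         # unit vector toward center
            plan.append((pos, aim))
    return plan

plan = sphere_shot_plan(center=np.array([0.0, 0.0, 1.0]), radius=2.0)
print(len(plan))   # 32 (position, aim-direction) pairs
```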
In addition, it is worth mentioning that, to guarantee the reference value of the picture material, the spatial perception labels arranged on the measured object, such as UWB ranging labels, can use a color clearly distinct from that of the measured object, so that the labels can be better recognized even under strong light, guaranteeing the final imaging effect.
Step 104: imaging the measured object according to the standard three-dimensional coordinate system and the picture material.
Specifically, when the measured object is imaged according to the standard three-dimensional coordinate system and the picture materials, the position information of each spatial perception label arranged on the measured object is first extracted from each picture material; the contour of the measured object is then drawn in the standard three-dimensional coordinate system according to that position information; finally, the feature information of the measured object is extracted from each picture material, and the contour is texture-mapped and rendered.
As shown in fig. 4, assuming the measured object is a triangular pyramid: the position information of the UWB ranging labels on its outer surface is determined from the picture material; the contour of the measured object is drawn in the constructed standard three-dimensional coordinate system according to the determined position information of each UWB ranging label; and finally the feature information of the measured object, such as color, is extracted from each picture material and used to map and render the contour drawn in the standard three-dimensional coordinate system. The measured object is thus truly and accurately restored, i.e., 3D imaging of the measured object is realized.
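For the triangular-pyramid example, the skeleton below indicates how the drawn contour and the mapped feature information might be organized in code; the label coordinates and RGB values are placeholders, and full texture mapping and rendering are left to the renderer.

```python
# High-level sketch for the triangular-pyramid example: the four label
# coordinates fix the vertices in the standard frame, the contour is the
# set of edges between vertices, and per-face colors sampled from the
# photos stand in for full texture mapping and rendering.
from itertools import combinations
import numpy as np

tag_positions = {                        # label id -> standard-frame coords
    "t0": np.array([0.0, 0.0, 0.0]),
    "t1": np.array([1.0, 0.0, 0.0]),
    "t2": np.array([0.5, 0.9, 0.0]),
    "t3": np.array([0.5, 0.3, 0.8]),     # apex
}

# Contour of a triangular pyramid: every pair of vertices is an edge.
edges = list(combinations(tag_positions, 2))        # 6 edges

# "Feature information" per face, e.g. a mean color taken from the photo
# that shows that face (the RGB values here are placeholders).
face_colors = {("t0", "t1", "t3"): (180, 40, 40),
               ("t1", "t2", "t3"): (40, 180, 40),
               ("t0", "t2", "t3"): (40, 40, 180),
               ("t0", "t1", "t2"): (200, 200, 60)}

print(len(edges), "edges,", len(face_colors), "textured faces")
```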
In addition, it is worth mentioning that, in practical applications, when the image pickup device is held by a user who moves it between positions, the user may not shoot the measured object fully according to the determined shooting positions. Therefore, to guarantee the final three-dimensional imaging result, before step 104 is executed it can be determined whether the currently shot picture material covers every area of the outer surface of the measured object.
Correspondingly, if coverage is incomplete, the shooting positions corresponding to the unshot areas of the measured object are determined from the standard three-dimensional coordinate system, and a position-movement prompt is issued for those positions, for example by displaying in the interactive interface the specific shooting position to move to, or by voice-prompting the user to move the image pickup device there. The measured object is then shot to obtain the picture material for the unshot areas, and finally the measured object is three-dimensionally imaged based on the obtained picture material and the standard three-dimensional coordinate system.
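A minimal sketch of this coverage check follows, assuming each outer-surface area is identified by the label attached to it and that a lookup from areas to shooting positions is available; the data shapes are illustrative.

```python
# Sketch of the pre-imaging coverage check: every outer-surface area must
# appear in at least one captured picture; uncovered areas trigger a move
# prompt toward their stored shooting positions.

def missing_regions(all_region_ids, shots):
    """shots: list of sets of area ids visible in each picture."""
    covered = set().union(*shots) if shots else set()
    return sorted(all_region_ids - covered)

region_to_shooting_pos = {"face3": (2.0, 0.0, 0.9)}   # assumed lookup

shots = [{"face0", "face1"}, {"face1", "face2"}]
for region in missing_regions({"face0", "face1", "face2", "face3"}, shots):
    pos = region_to_shooting_pos.get(region)
    print(f"Area {region} not covered; move the camera to {pos}")
```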
It is not difficult to see from the above description that the imaging method provided by this embodiment constructs a fixed, unchanging standard three-dimensional coordinate system based on the position information of the spatial perception labels fixed on the measured object, and images the measured object according to that coordinate system and the picture material shot from the determined shooting positions. The process of dynamic shooting-and-ranging imaging is thereby turned into a relatively static operation, which greatly improves the accuracy of the ranging and imaging results.
In addition, the imaging method provided by the embodiment changes the existing Lidar module-based ranging imaging into the space sensing component-based or space sensing label-based ranging imaging, and the whole imaging process does not need to depend on light spot projection, so that the interference of ambient brightness is avoided, and the accuracy of the measurement result and the imaging result is further ensured.
In addition, the imaging method provided by this embodiment requires no Lidar module to be implanted in the terminal device, nor must the spatial perception component or label be built in; the component or label can be externally attached. Ranging imaging can therefore be realized by attaching an external spatial perception label directly to an existing terminal device that has neither a Lidar module nor a spatial perception component, which greatly lowers the hardware threshold of the 3D modeling function and facilitates the popularization of 3D acquisition.
A second embodiment of the present application relates to an imaging method. The second embodiment is further improved on the basis of the first embodiment, and the main improvement is as follows: before the measured object is shot to obtain the picture material, whether the camera device is located at the determined shooting position or not is determined, if the camera device is not located at the determined shooting position, the camera device is guided to move to the shooting position, and therefore the fact that the picture material obtained by subsequent shooting can reflect the actual information of the measured object more accurately is guaranteed.
As shown in fig. 5, the imaging method of the second embodiment includes the following steps:
Step 501: determining the position information of each spatial perception label within the coverage of the spatial perception component.
Step 502: constructing a standard three-dimensional coordinate system according to the position information, and determining the shooting positions.
It is to be understood that steps 501 and 502 in this embodiment are substantially the same as steps 101 and 102 in the first embodiment, and are not repeated here.
Step 503: determining the position information of the spatial perception component.
Specifically, in this embodiment, determining the position information of the spatial perception component is essentially the same as determining the position information of a spatial perception label arranged on the outer surface of the measured object: the distance between the spatial perception component and any spatial perception label is determined, and the position of the spatial perception component is then derived from that distance.
The specific implementation of how to determine the distance between the spatial sensing component and the spatial sensing label and then determine the location information according to the distance has been described in detail in the first embodiment, and is not described in detail in this embodiment.
Step 504: judging whether the position information of the spatial perception component matches a shooting position.
Specifically, a plurality of shooting positions are determined when the standard three-dimensional coordinate system is constructed. Therefore, judging whether the position information of the spatial perception component matches a shooting position essentially means traversing the determined shooting positions and matching each traversed shooting position against the component's current position: if they match, the spatial perception component is considered to be at a determined shooting position, and the operation of step 506 can be executed; otherwise the remaining shooting positions are traversed and matched in the same way, and if the current position matches none of the determined shooting positions, the operation of step 505 is executed.
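A minimal sketch of this matching step follows, assuming positions are compared within a distance tolerance; the tolerance value is illustrative.

```python
# Sketch of steps 504/505: compare the spatial perception component's
# current position against every planned shooting position within a
# tolerance; no match yields a position-movement prompt.
import numpy as np

def match_shooting_position(current_pos, shooting_positions, tol=0.05):
    """Return the index of the matched shooting position, or None."""
    current = np.asarray(current_pos, float)
    for i, pos in enumerate(shooting_positions):       # traverse all
        if np.linalg.norm(current - np.asarray(pos)) <= tol:
            return i
    return None

plan = [(2.0, 0.0, 0.4), (0.0, 2.0, 0.4)]
idx = match_shooting_position((1.2, 0.1, 0.5), plan)
if idx is None:
    print("Not at a shooting position - prompting the user to move.")
```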
Step 505: making a position-movement prompt so that the image pickup device moves to the shooting position according to the prompt.
Specifically, to make the imaging method provided by this embodiment convenient for personal users of personal image pickup devices, such as camera-equipped tablet computers, mobile phones, and other terminal devices, a visual interactive interface can be provided directly to the user. When the position information of the spatial perception component is determined not to match any shooting position, the position-movement prompt can be displayed directly on the visual interactive interface.
Furthermore, it can be understood that, since the spatial perception component is disposed on the image pickup device, whether built in (directly integrated inside the device) or externally attached, its position information can be treated as equivalent to the position information of the image pickup device. The prompt displayed on the visual interactive interface may therefore read, for example, "The image pickup device is not at the shooting position; please move it."
Further, the position-movement prompt can specify the direction and distance to move, so that the user holding the image pickup device can move it according to the prompt.
Further, the distance between the image pickup device's current position and the shooting position can be displayed in the visual interactive interface, guiding the user precisely to the shooting position.
Furthermore, it can be understood that if, in practical applications, the image pickup device is an intelligent machine such as a drone or a smart robot, the position-movement prompt may directly be the coordinate information of the specific shooting position the device needs to move to.
It should be understood that the above examples are only examples for better understanding of the technical solution of the present embodiment, and are not intended to be the only limitations of the present embodiment.
Step 506: shooting the measured object to obtain picture material.
Step 507: imaging the measured object according to the standard three-dimensional coordinate system and the picture material.
It is to be understood that steps 506 and 507 in this embodiment are substantially the same as steps 103 and 104 in the first embodiment, and are not repeated herein.
Thus, the imaging method provided by this embodiment determines, before the measured object is shot from a shooting position to obtain picture material, whether the image pickup device is at the determined shooting position, and guides the device to the shooting position if it is not. This ensures that the subsequently shot picture material reflects the actual information of the measured object more accurately, making the final imaging result, realized from the standard three-dimensional coordinate system and the shot picture material, more accurate.
In addition, it should be understood that the division of the above methods into steps is only for clarity of description; in implementation, steps may be merged into one, or a step may be split into several, and as long as the same logical relationship is preserved, they fall within the protection scope of this patent. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes, without altering its core design also falls within the protection scope of this patent.
The third embodiment of the present application relates to an image forming apparatus, as shown in fig. 6, including: a determination module 601, a construction module 602, a photographing module 603, and an imaging module 604.
The determining module 601 is used for determining the position information of each spatial perception label within the coverage of the spatial perception component, where the labels are attached in advance to the outer surface of a measured object, at least three labels are provided, and no two labels overlap in position; the constructing module 602 is configured to construct a standard three-dimensional coordinate system according to the position information; the shooting module 603 is used for shooting the measured object to obtain picture material; and the imaging module 604 is configured to perform imaging processing on the measured object according to the standard three-dimensional coordinate system and the picture material.
In addition, in another example, when determining the location information of each of the spatial awareness labels within the coverage area of the spatial awareness component, the determining module 601 specifically includes:
determining the number of the space sensing labels in the coverage range of the space sensing component;
according to the spatial perception principle, determining a first distance between the spatial perception component and each spatial perception label and a second distance between every two spatial perception labels;
and determining the position information of each space perception label according to the first distance and the second distance.
In addition, in another example, when the determining module 601 determines the number of the spatially aware labels within the coverage of the spatially aware component, specifically:
the spatial perception component broadcasts pulse signals outwards according to a preset period and searches for each spatial perception label within the pulse-signal coverage;
extracting identification information which is distributed for the space sensing label in advance from the received pulse signal response data packet of the space sensing label;
and removing the duplicate of the extracted identification information, and counting the identification information after the duplicate removal to obtain the number of the space perception labels within the coverage range of the space perception component.
In another example, when the building module 602 builds the standard three-dimensional coordinate system according to the position information, specifically:
selecting at least three space perception labels from the space perception labels preset on the measured object as position reference labels;
constructing the standard three-dimensional coordinate system according to the position information of at least three position reference labels;
and determining the shooting position according to the standard three-dimensional coordinate system and the position information of at least three position reference labels.
In addition, in another example, when the constructing module 602 constructs the standard three-dimensional coordinate system according to the position information of at least three of the position reference labels, specifically:
if the number of the position reference labels is larger than N, respectively constructing a reference three-dimensional coordinate system by using the position information of every three position reference labels, wherein N is an integer larger than or equal to 3;
selecting one of the obtained reference three-dimensional coordinate systems as an initial standard three-dimensional coordinate system, and using the rest reference three-dimensional coordinate systems as calibration three-dimensional coordinate systems;
and calibrating the initial standard three-dimensional coordinate system by using the calibration three-dimensional coordinate system to obtain the standard three-dimensional coordinate system.
In addition, in another example, the imaging device further includes a position determination module.
Specifically, the position determining module is used for determining a plurality of shooting positions according to the standard three-dimensional coordinate system and the position information of at least three position reference labels; each shooting position corresponds to an area of the outer surface of the measured object.
Correspondingly, when the camera module 603 takes a picture of the measured object to obtain a picture material, the method specifically comprises the following steps: and shooting the measured object according to each shooting position to obtain the picture material.
In addition, in another example, when the position determining module determines a plurality of shooting positions according to the standard three-dimensional coordinate system and the position information of at least three position reference labels, the position determining module specifically includes:
determining at least three shooting tracks according to the standard three-dimensional coordinate system and the position information of at least three position reference labels;
and dividing each shooting track at preset intervals to obtain a plurality of shooting positions.
In addition, in another example, when the shooting module 603 shoots the object to be measured to obtain a picture material, the shooting module specifically includes:
determining the position information of the space perception label corresponding to each shooting position;
and adjusting the shooting angle according to each shooting position and the position information of the space perception label corresponding to each shooting position, and shooting the measured object at each adjusted shooting angle to obtain a plurality of picture materials.
Further, in another example, the imaging device further includes a position matching module.
Specifically, the position matching module is configured to perform the following operations:
determining the position information of the space perception component;
judging whether the position information of the space perception component matches the shooting position;
and if not, making a position movement prompt so that the image pickup device moves to the shooting position according to the position movement prompt.
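A minimal sketch of the matching check, assuming a distance tolerance and a textual movement prompt (both assumptions; the text above specifies neither):

```python
import numpy as np

def match_or_prompt(component_pos, shooting_pos, tolerance=0.05):
    """Compare the space perception component's position with the
    target shooting position; the tolerance and prompt format are
    assumptions for illustration."""
    offset = np.asarray(shooting_pos) - np.asarray(component_pos)
    if np.linalg.norm(offset) <= tolerance:
        return None  # matched: no movement prompt is needed
    return "move by ({:+.2f}, {:+.2f}, {:+.2f}) m".format(*offset)
```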
In another example, when the imaging module 604 images the measured object according to the standard three-dimensional coordinate system and the picture material, it specifically performs:
extracting the position information of each space perception label arranged on the measured object from each picture material;
drawing the outline of the measured object in the standard three-dimensional coordinate system according to the position information of each space perception label;
and extracting the characteristic information of the measured object from each picture material, and mapping and rendering the outline of the measured object.
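As a rough illustration of the outline-drawing step, the sketch below approximates the object's outline by the convex hull of the label positions in the standard coordinate system; this is an assumption made for illustration, and an irregular object would require a finer surface reconstruction. Texture mapping and rendering from the picture materials are left out.

```python
import numpy as np
from scipy.spatial import ConvexHull  # requires scipy

def draw_outline(label_positions):
    """Sketch: approximate the measured object's outline by the convex
    hull of its space perception label positions (an assumption; the
    disclosure does not fix the reconstruction method)."""
    points = np.asarray(label_positions, dtype=float)  # shape (n, 3)
    hull = ConvexHull(points)
    # hull.simplices indexes the triangular faces of the outline mesh,
    # onto which features extracted from the picture materials could
    # later be mapped and rendered.
    return points, hull.simplices
```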
Further, in another example, the imaging device further includes a picture material checking module.
Specifically, the picture material checking module is configured to judge whether the picture materials cover each region of the outer surface of the measured object.
Correspondingly, if any region is not covered, the position determining module is triggered to determine, according to the standard three-dimensional coordinate system, the shooting position corresponding to the unshot region of the measured object, and the shooting module 603 is triggered to make a position movement prompt according to that shooting position, so that the image pickup device moves to the shooting position, shoots the measured object, and obtains the picture material corresponding to the unshot region.
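A minimal sketch of the coverage check; the bookkeeping that records which surface regions each picture material covers is an assumption for illustration.

```python
def uncovered_regions(all_regions, picture_materials):
    """Report outer-surface regions not covered by any picture
    material; each material is assumed to record the region IDs it
    covers."""
    covered = set()
    for material in picture_materials:
        covered.update(material["covered_regions"])
    # Any region returned here would trigger a new shooting position
    # and a position movement prompt for the image pickup device.
    return [r for r in all_regions if r not in covered]
```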
It should be understood that this embodiment is a device embodiment corresponding to the first or second embodiment and can be implemented in cooperation with either of them. The related technical details mentioned in the first or second embodiment remain valid in this embodiment and are not repeated here to reduce repetition; correspondingly, the technical details mentioned in this embodiment also apply to the first or second embodiment.
It should be noted that all the modules involved in this embodiment are logic modules; in practical applications, one logic unit may be one physical unit, part of one physical unit, or a combination of multiple physical units. In addition, to highlight the innovative part of the present application, this embodiment does not introduce units that are less closely related to solving the technical problem proposed by the present application, but this does not mean that no other units exist in this embodiment.
A fourth embodiment of the present application relates to an imaging apparatus, as shown in fig. 7, including: at least one processor 701; and a memory 702 communicatively coupled to the at least one processor 701. The memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701 to enable the at least one processor 701 to perform the imaging method described in the above method embodiments.
The memory 702 and the processor 701 are coupled by a bus, which may comprise any number of interconnected buses and bridges linking together various circuits of the one or more processors 701 and the memory 702. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 701 is transmitted over a wireless medium through an antenna, which also receives incoming data and forwards it to the processor 701.
The processor 701 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory 702 may be used for storing data used by the processor 701 in performing operations.
A fifth embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the imaging method described in the method embodiments above.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the present application, and that various changes in form and details may be made therein without departing from the spirit and scope of the present application in practice.
Claims (14)
1. An imaging method applied to an image pickup apparatus provided with a space perception component, the method comprising:
determining the position information of each space perception label in the coverage range of the space perception component, wherein the space perception labels are arranged on the outer surface of a measured object in advance, at least three space perception labels are arranged on the outer surface of the measured object, and the positions of any two space perception labels are not overlapped;
constructing a standard three-dimensional coordinate system according to the position information;
shooting the measured object to obtain a picture material;
and imaging the measured object according to the standard three-dimensional coordinate system and the picture material.
2. The imaging method of claim 1, wherein said determining the position information of each space perception label within the coverage range of the space perception component comprises:
determining the number of the space perception labels within the coverage range of the space perception component;
according to the space perception principle, determining a first distance between the space perception component and each space perception label and a second distance between every two space perception labels;
and determining the position information of each space perception label according to the first distance and the second distance (see the trilateration sketch following the claims).
3. The imaging method of claim 2, wherein said determining the number of the space perception labels within the coverage range of said space perception component comprises:
the space perception component transmits pulse signals outward according to a preset period and searches for each space perception label within the pulse signal coverage range;
extracting, from the received pulse signal response data packet of each space perception label, the identification information pre-assigned to that space perception label;
and deduplicating the extracted identification information, and counting the deduplicated identification information to obtain the number of the space perception labels within the coverage range of the space perception component.
4. The imaging method of claim 1, wherein said constructing a standard three-dimensional coordinate system according to said position information comprises:
selecting at least three space perception labels from the space perception labels preset on the measured object as position reference labels;
and constructing the standard three-dimensional coordinate system according to the position information of at least three position reference labels.
5. The imaging method of claim 4, wherein said constructing said standard three-dimensional coordinate system according to the position information of at least three of said position reference labels comprises:
if the number of the position reference labels is larger than N, respectively constructing a reference three-dimensional coordinate system by using the position information of every three position reference labels, wherein N is an integer larger than or equal to 3;
selecting one of the obtained reference three-dimensional coordinate systems as an initial standard three-dimensional coordinate system, and using the rest reference three-dimensional coordinate systems as calibration three-dimensional coordinate systems;
and calibrating the initial standard three-dimensional coordinate system by using the calibration three-dimensional coordinate system to obtain the standard three-dimensional coordinate system.
6. The imaging method of claim 4, wherein after said constructing a standard three-dimensional coordinate system according to said position information, the method further comprises:
determining a plurality of shooting positions according to the standard three-dimensional coordinate system and the position information of at least three position reference labels; each shooting position corresponds to an area of the outer surface of the measured object;
the pair the measured object is shot to obtain picture materials, including:
and shooting the measured object according to each shooting position to obtain the picture material.
7. The imaging method of claim 6, wherein said determining a plurality of shooting positions according to said standard three-dimensional coordinate system and the position information of at least three of said position reference labels comprises:
determining at least three shooting tracks according to the standard three-dimensional coordinate system and the position information of the at least three position reference labels;
and dividing each shooting track at preset intervals to obtain a plurality of shooting positions.
8. The imaging method of claim 6, wherein said shooting the measured object according to each of the shooting positions to obtain the picture material comprises:
determining the position information of the space perception label corresponding to each shooting position;
and adjusting the shooting angle according to each shooting position and the position information of the space perception label corresponding to each shooting position, and shooting the measured object at each adjusted shooting angle to obtain a plurality of picture materials.
9. The imaging method of claim 8, wherein before said shooting the measured object according to each of the shooting positions to obtain the picture material, the method further comprises:
determining location information of the spatially aware component;
judging whether the position information of the space sensing assembly is matched with the shooting position;
and if not, making a position movement prompt so that the camera equipment moves to the shooting position according to the position movement prompt.
10. The imaging method of claim 8, wherein said imaging the measured object according to the standard three-dimensional coordinate system and the picture material comprises:
extracting the position information of each space perception label arranged on the measured object from each picture material;
drawing the outline of the measured object in the standard three-dimensional coordinate system according to the position information of each space perception label;
and extracting the characteristic information of the measured object from each picture material, and mapping and rendering the outline of the measured object.
11. The imaging method of claim 1, wherein before said imaging the measured object according to said standard three-dimensional coordinate system and said picture material, the method further comprises:
judging whether the picture material covers each area of the outer surface of the measured object;
if not, determining the shooting position corresponding to the area of the measured object that has not been shot according to the standard three-dimensional coordinate system;
and making a position movement prompt according to the shooting position, so that the image pickup device moves to the shooting position according to the position movement prompt to shoot the measured object, obtaining the picture material corresponding to the area of the measured object that has not been shot.
12. An imaging device, comprising:
a determining module, configured to determine the position information of each space perception label within the coverage range of a space perception component, wherein the space perception labels are arranged on the outer surface of a measured object in advance, at least three space perception labels are arranged on the outer surface of the measured object, and the positions of any two space perception labels do not overlap;
the construction module is used for constructing a standard three-dimensional coordinate system according to the position information;
the shooting module is used for shooting the measured object to obtain a picture material;
and the imaging module is used for imaging the measured object according to the standard three-dimensional coordinate system and the picture material.
13. An imaging apparatus, characterized by comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the imaging method of any one of claims 1 to 11.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the imaging method of any one of claims 1 to 11.
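The distance-based position determination in claim 2 can be read as a trilateration problem. The sketch below recovers the label positions from the first distances (component to each label) and the second distances (label to label) via classical multidimensional scaling; this particular solver is an assumption for illustration, not the disclosed algorithm, and its solution is only defined up to a rigid transform and reflection.

```python
import numpy as np

def positions_from_distances(d_component, d_pairwise):
    """Recover 3-D positions of the space perception component (index
    0) and the labels from component-to-label and label-to-label
    distances via classical MDS (sketch; result is up to a rigid
    transform and reflection)."""
    n = len(d_component)
    d = np.zeros((n + 1, n + 1))
    d[0, 1:] = d[1:, 0] = np.asarray(d_component, dtype=float)
    d[1:, 1:] = np.asarray(d_pairwise, dtype=float)
    # Double-centering turns squared distances into a Gram matrix.
    j = np.eye(n + 1) - np.ones((n + 1, n + 1)) / (n + 1)
    gram = -0.5 * j @ (d ** 2) @ j
    vals, vecs = np.linalg.eigh(gram)
    top = np.argsort(vals)[::-1][:3]  # three largest eigenvalues
    coords = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
    return coords[0], coords[1:]  # component position, label positions
```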
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110235434.XA (CN115037866A) | 2021-03-03 | 2021-03-03 | Imaging method, device, equipment and storage medium |
| PCT/CN2022/076387 (WO2022183906A1) | 2021-03-03 | 2022-02-15 | Imaging method and apparatus, device, and storage medium |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110235434.XA (CN115037866A) | 2021-03-03 | 2021-03-03 | Imaging method, device, equipment and storage medium |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115037866A | 2022-09-09 |

Family ID: 83118042

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110235434.XA (CN115037866A, pending) | Imaging method, device, equipment and storage medium | 2021-03-03 | 2021-03-03 |

Country Status (2)

| Country | Link |
|---|---|
| CN (1) | CN115037866A (en) |
| WO (1) | WO2022183906A1 (en) |
Also Published As

| Publication Number | Publication Date |
|---|---|
| WO2022183906A1 | 2022-09-09 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |