CN110648396A - Image processing method, device and system - Google Patents
- Publication number
- CN110648396A (application CN201910875678.7A)
- Authority
- CN
- China
- Prior art keywords
- sand table
- dimensional
- image
- table model
- action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The disclosure provides an image processing method, device and system, and relates to the technical field of image processing. The image processing method comprises the following steps: acquiring a plurality of two-dimensional live-action images; generating a three-dimensional live-action sand table model according to the plurality of two-dimensional live-action images; acquiring attribute information of a target object; and superimposing the attribute information of the target object onto the three-dimensional live-action sand table model to generate a three-dimensional situation sand table model. The method and device address the problems that a three-dimensional sand table model cannot be generated in real time and that attribute information of a target object cannot be superimposed onto the three-dimensional sand table model to form a three-dimensional situation sand table model, and thereby enable dynamic observation of the target object's movement.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, and system.
Background
At present, the sand tables used by the military are models manually built, to a certain proportion, from sand, model weapons and other materials according to topographic maps, aerial photographs or the actual terrain. A situation map is obtained by scanning a dynamically moving target with radar, acquiring the target's geographic coordinates, and then marking the moving-target information on an original two-dimensional map.
Therefore, when the position of a target is to be evaluated on the sand table, it must be estimated manually from the situation map. This wastes time and labor, has low accuracy, and cannot provide dynamic observation of the target's movement.
Disclosure of Invention
The purpose of the present disclosure is to overcome the above defects of the prior art and provide an image processing method, an image processing apparatus, and an image processing system. The image processing method solves the problems that a three-dimensional sand table model cannot be generated in real time and that attribute information of a target cannot be superimposed on the three-dimensional sand table model to form a three-dimensional situation sand table model, thereby enabling dynamic observation of the target's movement.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
acquiring a plurality of two-dimensional live-action images;
generating a three-dimensional live-action sand table model according to the plurality of two-dimensional live-action images;
acquiring attribute information of a target object;
and superimposing the attribute information of the target object onto the three-dimensional live-action sand table model to generate a three-dimensional situation sand table model.
In one embodiment, the method further comprises:
collecting a plurality of two-dimensional situation sand table images in a three-dimensional situation sand table model according to a preset cutting rule, and restoring the three-dimensional situation sand table model according to the two-dimensional situation sand table images.
In one embodiment, the collecting, according to a preset cutting rule, a plurality of two-dimensional situation sand table images in the three-dimensional situation sand table model includes:
dividing the three-dimensional situation sand table model into a plurality of visual angles according to a preset complexity degree;
acquiring picture coordinate information corresponding to each visual angle;
and cutting the three-dimensional situation sand table model into a plurality of two-dimensional situation sand table graphs according to the plurality of visual angles, wherein the two-dimensional situation sand table graphs carry visual angle information and picture coordinate information.
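The cutting rule described above can be sketched in code. This is a minimal illustration only, not the patented implementation; the names (`ViewImage`, `cut_model`), the evenly spaced view angles, and the placeholder pixel data are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ViewImage:
    """A 2-D situation sand-table image carrying its view metadata."""
    view_angle: float       # view angle in degrees (illustrative scheme)
    picture_coords: tuple   # picture coordinate information for this view
    pixels: list            # placeholder for the rendered image data

def cut_model(model: dict, n_views: int) -> list:
    """Divide the model into n_views view angles and produce one 2-D
    image per angle, tagged with view-angle and coordinate information."""
    step = 360.0 / n_views
    return [ViewImage(view_angle=i * step,
                      picture_coords=(i, 0),
                      pixels=model.get("render", []))
            for i in range(n_views)]

views = cut_model({"render": []}, n_views=4)
```

Because every view carries its own angle and coordinate metadata, a receiver can later reassemble the model from the views alone.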
In one embodiment, before acquiring the plurality of two-dimensional live-action images, the method further comprises:
obtaining sand table information of a target sand table, wherein the sand table information comprises at least one of the geographical range of the sand table, the height of the sand table above the ground, and the accuracy;
generating shooting information according to the sand table information of the target sand table, wherein the shooting information is used for instructing image shooting equipment to shoot a plurality of two-dimensional live-action images.
in one example, prior to acquiring the plurality of two-dimensional live-action images, the method further comprises:
determining image shooting equipment for shooting the two-dimensional live-action image according to preset parameters; and shooting a plurality of two-dimensional live-action images by using the image shooting equipment.
In one embodiment, the attribute information of the target object includes at least one of a type, a model, geographical coordinates, and a moving speed of the target object.
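As a rough illustration of the attribute information listed above, the four fields could be modeled as a small record. The field names, types, and example values are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TargetAttributes:
    """Attribute information a radar might report for a target object:
    type, model, geographical coordinates, and moving speed."""
    obj_type: Optional[str] = None            # e.g. "vehicle" (illustrative)
    obj_model: Optional[str] = None           # e.g. a model designation
    geo_coords: Optional[Tuple[float, float]] = None  # (lat, lon) in degrees
    speed_mps: Optional[float] = None         # moving speed, metres per second

t = TargetAttributes(obj_type="vehicle",
                     geo_coords=(39.9, 116.4),
                     speed_mps=12.5)
```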
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
the first acquisition module is used for acquiring a plurality of two-dimensional live-action images;
the first generation module is used for generating a three-dimensional live-action sand table model according to the plurality of two-dimensional live-action images;
the second acquisition module is used for acquiring the attribute information of the target object;
and the superposition module is used for superimposing the attribute information of the target object onto the three-dimensional live-action sand table model to generate a three-dimensional situation sand table model.
In an embodiment, the apparatus further includes a processing module, configured to collect a plurality of two-dimensional situation sand table images in the three-dimensional situation sand table model according to a preset cutting rule, and restore the three-dimensional situation sand table model according to the plurality of two-dimensional situation sand table images.
In one embodiment, the processing module comprises a first processing submodule, a second processing submodule and a third processing submodule, wherein the first processing submodule is used for dividing the three-dimensional situation sand table model into a plurality of visual angles according to a preset complexity degree; the second processing submodule is used for acquiring picture coordinate information corresponding to each visual angle; and the third processing submodule is used for cutting the three-dimensional situation sand table model into a plurality of two-dimensional situation sand table graphs according to a plurality of visual angles.
In one embodiment, the apparatus further includes a third obtaining module and a second generating module, where the third obtaining module is configured to obtain the sand table information of the target sand table; and the second generation module is used for generating shooting information according to the sand table information of the target sand table, and the shooting information is used for indicating the image shooting equipment to shoot a plurality of two-dimensional live-action images.
In one embodiment, the apparatus further comprises a selection module for determining an image capturing device for capturing the two-dimensional live-action image.
According to a third aspect of embodiments of the present disclosure, there is provided an image processing system including:
the radar monitoring system comprises an image processing server, and image shooting equipment and radar monitoring equipment which are connected with the image processing server; wherein the content of the first and second substances,
the image shooting equipment is used for shooting a two-dimensional live-action image;
the radar monitoring equipment is used for acquiring attribute information of a target object;
the image processing server is used for generating a three-dimensional live-action sand table model according to the two-dimensional live-action image shot by the image shooting equipment, and superimposing attribute information of the target object collected by the radar monitoring equipment into the three-dimensional live-action sand table model to generate the three-dimensional situation sand table model.
In one embodiment, the system further comprises:
an image receiving device connected to the image processing server; wherein:
the image processing server collects a plurality of two-dimensional situation sand table images in the three-dimensional situation sand table model according to a preset cutting rule and sends the collected two-dimensional situation sand table images to the image receiving equipment;
and the image receiving equipment synthesizes the received two-dimensional situation sand table image into the three-dimensional situation sand table model according to a preset synthesis rule.
In one embodiment, the system further comprises:
the image acquisition equipment is connected with the image processing server, and the image receiving equipment is connected with the image acquisition equipment; wherein:
the image acquisition equipment acquires a plurality of two-dimensional situation sand table images in a three-dimensional situation sand table model generated by the image processing server according to a preset cutting rule and sends the acquired two-dimensional situation sand table images to the image receiving equipment;
and the image receiving equipment synthesizes the received two-dimensional situation sand table image into the three-dimensional situation sand table model according to a preset synthesis rule.
In the embodiments of the present disclosure, a plurality of two-dimensional live-action images can be acquired in real time and a three-dimensional live-action sand table model generated from them; attribute information of a target object collected by radar in real time is then superimposed on the three-dimensional sand table model to generate a three-dimensional situation sand table model, which is displayed to the user in stereoscopic three-dimensional form. The user can thus learn the states of the area of interest and of the target object accurately, intuitively, and in real time.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 is a flowchart for collecting a plurality of two-dimensional situation sand table images in a three-dimensional situation sand table model according to an embodiment of the present disclosure.
Fig. 3 is an architecture diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 4 is an architecture diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 5 is an architecture diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 6 is an architecture diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 7 is an architecture diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of an image processing system according to an embodiment of the present disclosure.
Fig. 9 is a schematic diagram of an image processing system according to an embodiment of the present disclosure.
Fig. 10 is a schematic diagram of an image processing system according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
An embodiment of the present disclosure provides an image processing method. As shown in fig. 1, the image processing method includes the following steps:
Step 101, acquiring a plurality of two-dimensional live-action images;
Step 102, generating a three-dimensional live-action sand table model according to the plurality of two-dimensional live-action images;
illustratively, a three-dimensional live-action sand table model may be generated from the plurality of two-dimensional live-action images according to a method of multi-dimensional fusion.
Step 103, acquiring attribute information of a target object;
in one embodiment, the attribute information of the target object includes at least one of a type, a model, geographical coordinates, and a moving speed of the target object.
Step 104, superimposing the attribute information of the target object onto the three-dimensional live-action sand table model to generate a three-dimensional situation sand table model.
In one embodiment, the coordinate information of the target object can be superimposed on the three-dimensional live-action sand table model to generate the three-dimensional situation sand table model.
In one embodiment, the model information of the target object can be superimposed on the three-dimensional live-action sand table model to generate the three-dimensional situation sand table model.
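A minimal sketch of the superimposing step in these embodiments, assuming the sand table model and its target markers are represented as plain dictionaries; the representation and the function name `superimpose` are illustrative assumptions, not the patented data structures.

```python
def superimpose(sand_table: dict, target_id: str, attrs: dict) -> dict:
    """Overlay a target object's attribute information onto a live-action
    sand-table model, yielding a situation sand-table model.
    The base model is left unmodified."""
    situation = dict(sand_table)                 # shallow copy of the base model
    markers = dict(situation.get("markers", {})) # copy existing markers
    markers[target_id] = attrs                   # add or update this target's mark
    situation["markers"] = markers
    return situation

base = {"terrain": "mesh-data", "markers": {}}
situation = superimpose(base, "target-1",
                        {"geo_coords": (39.9, 116.4), "speed_mps": 12.5})
```

Re-running `superimpose` with fresh radar attributes for the same `target_id` updates that target's mark, matching the update behaviour described later in the text.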
Preferably, the image processing method further includes the steps of:
s105, collecting a plurality of two-dimensional situation sand table images in the three-dimensional situation sand table model according to a preset cutting rule, and restoring the three-dimensional situation sand table model according to the two-dimensional situation sand table images.
Fig. 2 is a flowchart for collecting a plurality of two-dimensional situation sand table images in a three-dimensional situation sand table model according to an embodiment of the present disclosure, as shown in fig. 2, the method includes:
Step 1051, dividing the three-dimensional situation sand table model into a plurality of visual angles according to a preset complexity degree;
Step 1052, acquiring picture coordinate information corresponding to each visual angle;
Step 1053, cutting the three-dimensional situation sand table model into a plurality of two-dimensional situation sand table graphs according to the plurality of visual angles, wherein the two-dimensional situation sand table graphs carry visual angle information and picture coordinate information.
Illustratively, the preset cutting rule is to set a plurality of viewing angles for the three-dimensional situation sand table model according to complexity, and cut the three-dimensional situation sand table model into a plurality of two-dimensional situation sand table graphs with viewing angles and picture coordinate information.
When the three-dimensional situation sand table model is synthesized at a later stage, the two-dimensional situation sand table images can be synthesized into the three-dimensional situation sand table model according to their visual-angle and picture coordinate information.
In this embodiment, for information-security reasons, after the three-dimensional situation sand table model is generated it is cut into two-dimensional situation sand table images according to the preset cutting rule, so that only two-dimensional images are transmitted during image transmission and the three-dimensional situation sand table model cannot be stolen in transit.
In one embodiment, before acquiring the plurality of two-dimensional live-action images, the method further comprises:
obtaining sand table information of a target sand table, wherein the sand table information comprises at least one of the geographical range of the sand table, the height of the sand table from the ground and the accuracy;
and generating shooting information according to the sand table information of the target sand table, wherein the shooting information is used for instructing image shooting equipment to shoot a plurality of two-dimensional live-action images.
In one embodiment, the photographing information includes a photographing height, a photographing angle, a photographing overlap degree, and an emphasis target.
In one embodiment, before acquiring the plurality of two-dimensional live-action images, the method further comprises:
and determining image shooting equipment for shooting the two-dimensional live-action images according to preset parameters, and shooting a plurality of two-dimensional live-action images by using the image shooting equipment, wherein the preset parameters can be the maximum shooting height and the shooting range.
In an embodiment, the idle unmanned aerial vehicle closest to the sand table may be chosen, according to the geographical range of the sand table, as the image capturing device for shooting the two-dimensional live-action images; alternatively, a drone with better performance or a special capability may be chosen according to the required height above the ground and/or accuracy. The choice can be made according to the actual situation.
For example, the type of drone may be determined from the geographic extent and altitude of the sand table: a multi-rotor drone may be selected for small-range shooting, and a vertical take-off and landing (VTOL) fixed-wing drone for large-range shooting. The multi-rotor drone takes higher-quality, higher-precision photographs but is limited in flight range and altitude; the VTOL fixed-wing drone takes relatively lower-quality photographs but is suited to long-distance, high-altitude flight.
Illustratively, the geographic range in the sand table information is 8 km, the height above the ground is 600 m, and the accuracy is 1 m. Three unmanned aerial vehicles are idle: drone A has a shooting range of 7 km and a maximum shooting height of 400 m; drone B has a shooting range of 10 km and a maximum shooting height of 700 m; drone C has a shooting range of 6 km and a maximum shooting height of 700 m. The image processing server can therefore determine that drone B is to shoot the two-dimensional live-action images.
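The selection in this example can be sketched as a simple filter over the idle drones, using the figures from the text; the function name and data layout are assumptions for illustration.

```python
def pick_drone(drones, required_range_km, required_height_m):
    """Return the first idle drone whose shooting range and maximum
    shooting height both cover the sand-table requirements."""
    for name, shoot_range_km, max_height_m in drones:
        if shoot_range_km >= required_range_km and max_height_m >= required_height_m:
            return name
    return None  # no idle drone satisfies the requirements

# (name, shooting range in km, maximum shooting height in m) from the example
idle_drones = [("A", 7, 400), ("B", 10, 700), ("C", 6, 700)]
chosen = pick_drone(idle_drones, required_range_km=8, required_height_m=600)
# chosen == "B": A fails both requirements, C fails the range requirement
```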
Fig. 3 is a diagram of an image processing apparatus provided in an embodiment of the present disclosure, and as shown in fig. 3, the image processing apparatus includes a first obtaining module 301, a first generating module 302, a second obtaining module 303, and a superimposing module 304; the first obtaining module 301 is configured to obtain a plurality of two-dimensional live-action images; the first generation module 302 is configured to generate a three-dimensional live-action sand table model from the plurality of two-dimensional live-action images; the second obtaining module 303 is configured to obtain attribute information of the target object; the superposition module 304 is configured to superimpose the attribute information of the target object onto the three-dimensional live-action sand table model to generate a three-dimensional situation sand table model.
Fig. 4 is a diagram of an image processing apparatus provided in an embodiment of the present disclosure. As shown in fig. 4, the image processing apparatus includes a first obtaining module 401, a first generating module 402, a second obtaining module 403, a superimposing module 404, and a processing module 405; the processing module 405 is configured to collect a plurality of two-dimensional situation sand table images from the three-dimensional situation sand table model according to a preset cutting rule, so that the three-dimensional situation sand table model can be restored from the plurality of two-dimensional situation sand table images.
Fig. 5 is a diagram of an image processing apparatus provided in an embodiment of the present disclosure. As shown in fig. 5, the image processing apparatus includes a first obtaining module 501, a first generating module 502, a second obtaining module 503, a superimposing module 504, and a processing module 505; the processing module 505 comprises a first processing submodule 5051, a second processing submodule 5052, and a third processing submodule 5053, wherein the first processing submodule 5051 is configured to divide the three-dimensional situation sand table model into multiple views according to a preset complexity level; the second processing submodule 5052 is configured to acquire picture coordinate information corresponding to each view; and the third processing submodule 5053 is configured to cut the three-dimensional situation sand table model into a plurality of two-dimensional situation sand table maps according to the plurality of viewing angles.
Fig. 6 is a diagram of an image processing apparatus provided in an embodiment of the present disclosure, as shown in fig. 6, the image processing apparatus includes a first obtaining module 601, a first generating module 602, a second obtaining module 603, a superimposing module 604, and a third obtaining module 605 and a second generating module 606, where the third obtaining module 605 is configured to obtain sand table information of a target sand table; the second generating module 606 is configured to generate shooting information according to the sand table information of the target sand table, where the shooting information is used to instruct an image shooting device to shoot a plurality of the two-dimensional live-action images.
Fig. 7 is a diagram of an image processing apparatus provided in an embodiment of the present disclosure, and as shown in fig. 7, the image processing apparatus includes a first obtaining module 701, a first generating module 702, a second obtaining module 703, a superimposing module 704, and a selecting module 705, where the selecting module 705 is configured to determine an image capturing device for capturing the two-dimensional live-action image.
Fig. 8 is an image processing system according to an embodiment of the present disclosure, and as shown in fig. 8, the system includes: an image processing server 801, an image capturing apparatus 802 and a radar monitoring apparatus 803 connected to the image processing server; the image capturing device 802 is configured to capture a two-dimensional live view image; the radar monitoring device 803 is used for collecting attribute information of a target object; the image processing server 801 is configured to generate a three-dimensional live-action sand table model according to the two-dimensional live-action image captured by the image capturing device 802, and superimpose attribute information of the target object acquired by the radar monitoring device 803 on the three-dimensional live-action sand table model to generate a three-dimensional situation sand table model.
In one embodiment, the image capture device may be a drone.
Fig. 9 is an image processing system according to an embodiment of the present disclosure, and as shown in fig. 9, the system includes: an image processing server 901, an image capturing device 902, a radar monitoring device 903, and an image receiving device 904 connected to the image processing server; the image processing server 901 is configured to collect a plurality of two-dimensional situation sand table images in the three-dimensional situation sand table model according to a preset cutting rule, and send the collected two-dimensional situation sand table images to the image receiving device 904; the image receiving device 904 is configured to synthesize the received two-dimensional situation sand table image into a three-dimensional situation sand table model according to a preset synthesis rule.
Fig. 10 is an image processing system according to an embodiment of the present disclosure. As shown in fig. 10, the system includes: an image processing server 1001, an image capturing apparatus 1002 and a radar monitoring apparatus 1003 connected to the image processing server, an image acquisition apparatus 1004 connected to the image processing server, an image receiving apparatus 1005 connected to the image acquisition apparatus, and a projector 1006 connected to the image receiving apparatus 1005. The image acquisition apparatus 1004 collects a plurality of two-dimensional situation sand table images from the three-dimensional situation sand table model generated by the image processing server 1001 according to a preset cutting rule, and sends the collected two-dimensional situation sand table images to the image receiving apparatus 1005; the image receiving apparatus 1005 synthesizes the received two-dimensional situation sand table images into a three-dimensional situation sand table model according to a preset synthesis rule; and the projector 1006 presents the stereoscopic three-dimensional situation sand table model to the user.
In one embodiment, the image processing server may also send the sand table information to the unmanned aerial vehicle controller, and the unmanned aerial vehicle controller generates shooting information according to the sand table information, and then the unmanned aerial vehicle controller controls the unmanned aerial vehicle to acquire the required two-dimensional live-action image according to the shooting information.
It can be understood that the original three-dimensional situation sand table model can be restored based on a plurality of two-dimensional situation sand table graphs obtained by cutting according to a preset cutting rule and a preset synthesis rule.
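The cut-and-restore round trip stated above can be sketched as follows, assuming each two-dimensional view carries its view angle as metadata. This toy representation (dictionaries with `angle`/`data` keys, and a restore that reads any one view's placeholder data) is purely illustrative and is not the patented cutting or synthesis rule.

```python
def cut(model: dict, n_views: int) -> list:
    """Cut a model into n 2-D views, each tagged with its view angle
    (the preset cutting rule is simplified to even angular spacing)."""
    step = 360.0 / n_views
    return [{"angle": i * step, "data": model["data"]} for i in range(n_views)]

def restore(views: list) -> dict:
    """Restore the model from tagged views. The synthesis rule is
    simplified: views are ordered by angle and the placeholder data
    of the first view is taken as the reconstructed model content."""
    ordered = sorted(views, key=lambda v: v["angle"])
    return {"data": ordered[0]["data"]}

model = {"data": "terrain-mesh"}
restored = restore(cut(model, 4))  # round trip recovers the model
```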
It is easy to understand that the three-dimensional situation sand table model is obtained by marking the target object in the three-dimensional live-action sand table model; the mark may be the geographic coordinates, type, model, or moving speed of the target object. The embodiments of the present disclosure focus on marking the superimposed target object's geographic coordinates in the three-dimensional live-action sand table model.
It should be noted that, according to the embodiment of the present disclosure, the update frequency of the three-dimensional situation sand table model may be set according to an actual situation. Specifically, the update frequency of the three-dimensional live-action sand table model and the update frequency of the object attribute information may be set separately.
For example, if the update frequency of the attribute information of the target object is set to be higher, for example, one second, the radar detector sends new attribute information of the target object to the image processing server every second, and after the image processing server receives the new attribute information of the target object, the target object is updated in the three-dimensional live-action sand table model, so that the state of the identified target object is ensured to be the latest.
It can be understood that, because terrain, buildings and the like rarely change, the update frequency of the three-dimensional sand table model can be set low, for example once every half year: the image processing server then controls the drone to acquire new two-dimensional live-action images every half year so that changes in the area of interest are not missed. Of course, the update frequency of the three-dimensional sand table model can also be increased for special cases.
In the embodiments of the present disclosure, a plurality of two-dimensional live-action images can be acquired in real time and a three-dimensional live-action sand table model generated from them; attribute information of a target object collected by radar in real time is then superimposed on the three-dimensional sand table model to generate a three-dimensional situation sand table model, which is displayed to the user in stereoscopic three-dimensional form. The user can thus learn the states of the area of interest and of the target object accurately, intuitively, and in real time.
It should be further noted that the target sand table refers to any sand table; the present disclosure merely takes the target sand table as an example for its description, and the word "target" does not imply any limitation.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. An image processing method, characterized in that the method comprises:
acquiring a plurality of two-dimensional live-action images;
generating a three-dimensional live-action sand table model according to the plurality of two-dimensional live-action images;
acquiring attribute information of a target object;
and superimposing the attribute information of the target object onto the three-dimensional live-action sand table model to generate a three-dimensional situation sand table model.
2. The method of claim 1, further comprising:
and acquiring a plurality of two-dimensional situation sand table images in the three-dimensional situation sand table model according to a preset cutting rule, and restoring the three-dimensional situation sand table model according to the two-dimensional situation sand table images.
3. The method according to claim 2, wherein the collecting a plurality of two-dimensional situation sand table images in the three-dimensional situation sand table model according to a preset cutting rule comprises:
dividing the three-dimensional situation sand table model into a plurality of visual angles according to a preset complexity degree;
acquiring picture coordinate information corresponding to each visual angle;
and cutting the three-dimensional situation sand table model into a plurality of two-dimensional situation sand table images according to the plurality of visual angles, wherein each two-dimensional situation sand table image carries view angle information and picture coordinate information.
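One plausible reading of the "preset cutting rule" in claim 3 is to divide a full orbit around the model into evenly spaced azimuths and render one image per azimuth, tagging each image with the view-angle and picture-coordinate metadata the claim requires. This is a sketch under assumptions: `render` is a hypothetical renderer, and the division by "preset complexity degree" is not specified in the disclosure.

```python
import math

def cut_into_views(render, n_views):
    """Divide 360 degrees into n_views azimuths, render one 2-D situation
    sand table image per azimuth, and attach the metadata claim 3 names."""
    views = []
    for i in range(n_views):
        azimuth = i * 360.0 / n_views
        image = render(azimuth)  # hypothetical renderer for the 3-D model
        views.append({
            "view_angle": azimuth,
            "picture_coords": (math.cos(math.radians(azimuth)),
                               math.sin(math.radians(azimuth))),
            "image": image,
        })
    return views
```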
4. The method of claim 1, wherein prior to said acquiring a plurality of two-dimensional live-action images, the method further comprises:
obtaining sand table information of a target sand table, wherein the sand table information comprises at least one of a geographical range of the sand table, a height of the sand table above the ground, and a precision;
and generating shooting information according to the sand table information of the target sand table, wherein the shooting information is used for instructing image shooting equipment to shoot a plurality of two-dimensional live-action images.
5. The method of claim 1, wherein prior to said obtaining a plurality of said two-dimensional live-action images, said method further comprises:
and determining image shooting equipment for shooting the two-dimensional live-action images according to preset parameters, and shooting a plurality of two-dimensional live-action images by utilizing the image shooting equipment.
6. The method of claim 1, wherein the attribute information of the object includes at least one of a type, a model, geographical coordinates, and a moving speed of the object.
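The four attribute fields enumerated in claim 6 can be carried in a small record. The field names below are illustrative choices, not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class TargetAttributes:
    """The four attributes enumerated in claim 6."""
    obj_type: str           # type, e.g. "aircraft"
    model: str              # model designation
    geo_coordinates: tuple  # (longitude, latitude, altitude)
    speed_mps: float        # moving speed in metres per second
```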
7. An image processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring a plurality of two-dimensional live-action images;
the first generation module is used for generating a three-dimensional live-action sand table model according to the plurality of two-dimensional live-action images;
the second acquisition module is used for acquiring the attribute information of the target object;
and the superposition module is used for superimposing the attribute information of the target object onto the three-dimensional live-action sand table model to generate a three-dimensional situation sand table model.
8. An image processing system, characterized in that the system comprises:
an image processing server, and image shooting equipment and radar monitoring equipment which are connected with the image processing server; wherein:
the image shooting equipment is used for shooting a two-dimensional live-action image;
the radar monitoring equipment is used for acquiring attribute information of a target object;
the image processing server is used for generating a three-dimensional live-action sand table model according to the two-dimensional live-action image shot by the image shooting equipment, and for superimposing the attribute information of the target object collected by the radar monitoring equipment onto the three-dimensional live-action sand table model to generate a three-dimensional situation sand table model.
9. The system of claim 8, further comprising:
an image receiving device connected to the image processing server; wherein:
the image processing server collects a plurality of two-dimensional situation sand table images in the three-dimensional situation sand table model according to a preset cutting rule and sends the collected two-dimensional situation sand table images to the image receiving equipment;
and the image receiving equipment synthesizes the received two-dimensional situation sand table image into the three-dimensional situation sand table model according to a preset synthesis rule.
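A minimal stand-in for the "preset synthesis rule" of claim 9 might simply order the received two-dimensional situation sand table images by their carried view angle before stitching. The dict keys and the stitching placeholder are assumptions; the disclosure does not define the rule.

```python
def restore_model(views):
    """Order received 2-D situation sand table images by view angle;
    the returned ordered list stands in for the re-assembled 3-D model."""
    ordered = sorted(views, key=lambda v: v["view_angle"])
    return [v["image"] for v in ordered]
```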
10. The system of claim 8, further comprising:
image acquisition equipment connected with the image processing server, and the image receiving equipment connected with the image acquisition equipment; wherein:
the image acquisition equipment acquires a plurality of two-dimensional situation sand table images in a three-dimensional situation sand table model generated by the image processing server according to a preset cutting rule and sends the acquired two-dimensional situation sand table images to the image receiving equipment;
and the image receiving equipment synthesizes the received two-dimensional situation sand table image into the three-dimensional situation sand table model according to a preset synthesis rule.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910875678.7A CN110648396A (en) | 2019-09-17 | 2019-09-17 | Image processing method, device and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110648396A true CN110648396A (en) | 2020-01-03 |
Family
ID=69010576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910875678.7A Pending CN110648396A (en) | 2019-09-17 | 2019-09-17 | Image processing method, device and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110648396A (en) |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103116904A (en) * | 2012-10-23 | 2013-05-22 | 北京航空航天大学深圳研究院 | Two-dimensional feature extraction system and two-dimensional feature extraction method of three-dimensional model |
CN202974277U (en) * | 2012-11-29 | 2013-06-05 | 北京四维远见信息技术有限公司 | Photogrammetric measurement system for S-shaped orbits |
CN104463948A (en) * | 2014-09-22 | 2015-03-25 | 北京大学 | Seamless visualization method for three-dimensional virtual reality system and geographic information system |
CN104834784A (en) * | 2015-05-13 | 2015-08-12 | 西南交通大学 | Railway emergency auxiliary rescue three-dimensional virtual electronic sand table system |
CN106296821A (en) * | 2016-08-19 | 2017-01-04 | 刘建国 | Multi-view angle three-dimensional method for reconstructing based on unmanned plane and system |
CN106326334A (en) * | 2016-07-14 | 2017-01-11 | 微梦创科网络科技(中国)有限公司 | Display method and device for electronic map and generation method and device for electronic map |
CN107341851A (en) * | 2017-06-26 | 2017-11-10 | 深圳珠科创新技术有限公司 | Real-time three-dimensional modeling method and system based on unmanned plane image data |
CN107479706A (en) * | 2017-08-14 | 2017-12-15 | 中国电子科技集团公司第二十八研究所 | A kind of battlefield situation information based on HoloLens is built with interacting implementation method |
CN107797665A (en) * | 2017-11-15 | 2018-03-13 | 王思颖 | A kind of 3-dimensional digital sand table deduction method and its system based on augmented reality |
CN107945270A (en) * | 2016-10-12 | 2018-04-20 | 阿里巴巴集团控股有限公司 | A kind of 3-dimensional digital sand table system |
CN108509848A (en) * | 2018-02-13 | 2018-09-07 | 视辰信息科技(上海)有限公司 | The real-time detection method and system of three-dimension object |
WO2019037038A1 (en) * | 2017-08-24 | 2019-02-28 | 深圳前海达闼云端智能科技有限公司 | Image processing method and device, and server |
CN109598021A (en) * | 2018-10-31 | 2019-04-09 | 顺丰航空有限公司 | A kind of information display method, device, equipment and storage medium |
CN109657396A (en) * | 2018-12-29 | 2019-04-19 | 中铁四局集团第五工程有限公司 | A kind of visualization electronic sand table method and system based on BIM technology |
CN109872397A (en) * | 2019-02-18 | 2019-06-11 | 北京工业大学 | A kind of three-dimensional rebuilding method of the airplane parts based on multi-view stereo vision |
CN109992809A (en) * | 2017-12-29 | 2019-07-09 | 深圳市优必选科技有限公司 | A kind of construction method of buildings model, device and storage device |
CN110163831A (en) * | 2019-04-19 | 2019-08-23 | 深圳市思为软件技术有限公司 | The object Dynamic Display method, apparatus and terminal device of three-dimensional sand table |
CN110208271A (en) * | 2019-06-06 | 2019-09-06 | 中国人民解放军陆军工程大学 | A kind of damage detecting method of phased array antenna, damage detection apparatus and terminal |
Non-Patent Citations (7)
Title |
---|
Xu Zujian et al. (eds.): "Airborne LiDAR Measurement Technology and Engineering Application Practice", Wuhan: Wuhan University Press, May 2009, pp. 179-181 * |
Yang Zhixun: "Research and Implementation of a Three-Dimensional Electronic Sand Table System", China Master's Theses Full-text Database, Information Science and Technology, No. 5, pp. 138-200 * |
Lin Xiaobin et al.: "Research on Three-Dimensional Reconstruction Methods Based on Multi-Image Data", Journal of Beijing Institute of Graphic Communication, Vol. 25, No. 8, pp. 21-23 * |
Wang Li et al.: "Building a Three-Dimensional Live-Action Sand Table Plotting and Analysis System for Emergency Tasks", Information Technology and Informatization, No. 5, pp. 35-37 * |
Shen Yapeng et al.: "Research on Three-Dimensional Plotting Based on ArcEngine", Geomatics Technology and Equipment, Vol. 12, No. 1, pp. 9-11 * |
Guo Kai et al. (eds.): "Three-Dimensional Reconstruction Technology of Projectile Damage Parameters", Beijing: National Defense Industry Press, December 2018, pp. 7-8 * |
Qi Yue et al. (eds.): "Management and Display of Museum Digital Resources", Shanghai: Shanghai Scientific and Technical Publishers, June 2008, pp. 139-141 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022016756A1 (en) * | 2020-07-23 | 2022-01-27 | 西安万像电子科技有限公司 | Image processing method and apparatus, device, and storage medium |
CN113055543A (en) * | 2021-03-31 | 2021-06-29 | 上海市东方医院(同济大学附属东方医院) | Construction method of digital twin command sand table of mobile hospital |
CN113055543B (en) * | 2021-03-31 | 2022-08-19 | 上海市东方医院(同济大学附属东方医院) | Construction method of digital twin command sand table of mobile hospital |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106767706B | Aerial image acquisition method and system for unmanned aerial vehicle reconnaissance of a traffic accident scene | |
Neitzel et al. | Mobile 3D mapping with a low-cost UAV system | |
KR102007567B1 (en) | Stereo drone and method and system for calculating earth volume in non-control points using the same | |
WO2022078240A1 (en) | Camera precise positioning method applied to electronic map, and processing terminal | |
KR20190051704A (en) | Method and system for acquiring three dimentional position coordinates in non-control points using stereo camera drone | |
DE102016104463A1 (en) | Multidimensional merging of images in real time | |
US20190356936A9 (en) | System for georeferenced, geo-oriented realtime video streams | |
EP2946368B1 (en) | A method and arrangement for providing a 3d model | |
Ahmad et al. | Digital aerial imagery of unmanned aerial vehicle for various applications | |
CN112710318A (en) | Map generation method, route planning method, electronic device, and storage medium | |
WO2012018497A2 (en) | ENHANCED SITUATIONAL AWARENESS AND TARGETING (eSAT) SYSTEM | |
MX2013000158A (en) | Real-time moving platform management system. | |
KR101692709B1 (en) | Digital Mapping imaging system using drones | |
Erenoglu et al. | Accuracy assessment of low cost UAV based city modelling for urban planning | |
WO2019080768A1 (en) | Information processing apparatus, aerial photography path generation method, program and recording medium | |
CN110880202B (en) | Three-dimensional terrain model creating method, device, equipment and storage medium | |
CN102831816B (en) | Device for providing real-time scene graph | |
CN110648396A (en) | Image processing method, device and system | |
US20210264666A1 (en) | Method for obtaining photogrammetric data using a layered approach | |
JP2017201261A (en) | Shape information generating system | |
Kaimaris et al. | UAV and the comparison of image processing software | |
KR20220166689A (en) | Drone used 3d mapping method | |
JP3808833B2 (en) | Aerial photogrammetry | |
KR102475790B1 (en) | Map making Platform apparatus and map making method using the platform | |
KR102262120B1 (en) | Method of providing drone route |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||