CN110298912B - Reproduction method, reproduction system, electronic device and storage medium for three-dimensional scene - Google Patents

Reproduction method, reproduction system, electronic device and storage medium for three-dimensional scene

Info

Publication number
CN110298912B
CN110298912B
Authority
CN
China
Prior art keywords
scene information
current
aerial vehicle
unmanned aerial
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910392509.8A
Other languages
Chinese (zh)
Other versions
CN110298912A (en)
Inventor
蒙山
严方林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acus Technologies Co ltd
Original Assignee
Acus Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acus Technologies Co ltd filed Critical Acus Technologies Co ltd
Priority to CN201910392509.8A priority Critical patent/CN110298912B/en
Publication of CN110298912A publication Critical patent/CN110298912A/en
Application granted granted Critical
Publication of CN110298912B publication Critical patent/CN110298912B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional scene reproduction method, a reproduction system, an electronic device, and a storage medium. It relates to the technical field of image processing and is used to reproduce an indoor three-dimensional scene, solving the problem that STEAM educational appliances in the prior art cannot reproduce a scene in three-dimensional space. The method comprises the following steps: controlling an unmanned aerial vehicle to collect indoor current scene information; preprocessing the current scene information to obtain preprocessing information; inputting the preprocessing information into a pre-generated sequence generation model to generate a current instruction sequence; sending, by the client, a control instruction to the unmanned aerial vehicle according to the current instruction sequence; after receiving the control instruction, collecting, by the unmanned aerial vehicle, indoor two-dimensional scene information and sending it to the client; and reconstructing and reproducing the scene in three-dimensional space at the client according to the two-dimensional scene information. After the client reconstructs and reproduces the scene in three-dimensional space, the learner can be given a real scene experience and a good learning experience.

Description

Reproduction method, reproduction system, electronic device and storage medium for three-dimensional scene
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and a system for reproducing a three-dimensional scene, an electronic device, and a storage medium.
Background
In recent years, STEAM education has emerged on the market. STEAM stands for Science, Technology, Engineering, Art, and Mathematics; the term itself is an acronym of the English names of these five disciplines. STEAM education is thus a comprehensive education integrating science, technology, engineering, art, and mathematics.
At present, the educational appliances used in STEAM education are mainly robot products represented by LEGO robots. These educational appliances are increasingly highly integrated: a user performs a visual operation through a terminal interface provided with the educational appliance, the operation result is transmitted to the appliance as a control instruction, and the appliance performs the corresponding scene operation according to the received instruction.
However, this conventional operation mode is only convenient for the person operating the educational appliance; it cannot reproduce a scene in an actual three-dimensional space, and therefore cannot give a learner a real scene experience or a good learning experience.
Disclosure of Invention
The invention mainly aims to provide a three-dimensional scene reproduction method, a reproduction system, an electronic device, and a storage medium, so as to solve the technical problem that STEAM educational appliances in the prior art cannot reproduce scenes in an actual three-dimensional space and therefore cannot give a learner a real scene experience and a good learning experience.
To achieve the above object, a first aspect of the present invention provides a method for reproducing a three-dimensional scene, including: controlling an unmanned aerial vehicle to collect indoor current scene information, wherein the current scene information is sensor data collected by a sensor on the unmanned aerial vehicle at the current time point; preprocessing the current scene information to obtain preprocessing information; inputting the preprocessing information into a pre-generated sequence generation model, generating a current instruction sequence, and sending the current instruction sequence to a client; after receiving the current instruction sequence, sending, by the client, a control instruction to the unmanned aerial vehicle according to the current instruction sequence; after receiving the control instruction, collecting, by the unmanned aerial vehicle, indoor two-dimensional scene information according to the control instruction and sending the two-dimensional scene information to the client; and reconstructing and reproducing the scene in three-dimensional space at the client according to the two-dimensional scene information.
Further, the method for generating the sequence generation model comprises the following steps: controlling an unmanned aerial vehicle in advance to collect indoor scene information to obtain historical scene information, wherein the historical scene information is sensor data collected by a sensor on the unmanned aerial vehicle at a past time point, and recording the sample instruction sequence received by the unmanned aerial vehicle under control at that time; preprocessing the historical scene information to obtain preprocessed historical scene information, and training a convolutional neural network with the preprocessed historical scene information, wherein the preprocessing comprises standardization and vectorization; vectorizing the sample instruction sequence, and training a first recurrent neural network with the vectorized sample instruction sequence; and combining the convolutional neural network with the first recurrent neural network by using a second recurrent neural network, so that a sequence generation model is obtained after the historical scene information is matched with the sample instruction sequence. After receiving current scene information, the sequence generation model matches the current scene information with the historical scene information, and takes the sample instruction sequence matched with that historical scene information as the current instruction sequence matched with the current scene information.
Further, the preprocessing of the scene information to obtain preprocessing information includes: standardizing the scene information to obtain standardized information; and vectorizing the standardized information to obtain the preprocessing information.
Further, the controlling of the unmanned aerial vehicle to collect indoor current scene information includes: collecting first current scene information indoors with a first scene acquisition sensor carried by the unmanned aerial vehicle itself; and collecting second current scene information indoors with a second scene acquisition sensor mounted on the unmanned aerial vehicle.
A second aspect of the present invention provides a reproduction system of a three-dimensional scene, including: a current scene information acquisition module, used for controlling the unmanned aerial vehicle to collect indoor current scene information, the current scene information being sensor data collected by a sensor on the unmanned aerial vehicle at the current time point; a preprocessing module, used for preprocessing the current scene information acquired by the current scene information acquisition module to obtain preprocessing information; an instruction sequence generation module, used for inputting the preprocessing information obtained by the preprocessing module into the sequence generation model and sending the instruction sequence generated according to the preprocessing information to the client; an instruction sending module, used for sending, at the client, a control instruction to the unmanned aerial vehicle according to the instruction sequence generated by the instruction sequence generation module, so as to control the unmanned aerial vehicle; a two-dimensional scene acquisition module, used for collecting indoor two-dimensional scene information with the unmanned aerial vehicle according to the instruction sent by the instruction sending module and sending the two-dimensional scene information to the client; and a reproduction module, used for reconstructing and reproducing the scene in three-dimensional space according to the two-dimensional scene information acquired by the two-dimensional scene acquisition module.
Further, the sequence generation module includes: a historical scene information acquisition unit, used for controlling the unmanned aerial vehicle in advance to collect indoor scene information, the historical scene information being sensor data collected by a sensor on the unmanned aerial vehicle at a past time point, obtaining the historical scene information, and recording the sample instruction sequence received by the unmanned aerial vehicle under control at that time; a preprocessing unit, used for preprocessing the historical scene information acquired by the historical scene information acquisition unit to obtain preprocessed historical scene information, and training a convolutional neural network with the preprocessed historical scene information, the preprocessing comprising standardization and vectorization; a vectorization unit, used for vectorizing the sample instruction sequence recorded by the historical scene information acquisition unit and training a first recurrent neural network with the vectorized sample instruction sequence; and a sequence model generation unit, used for combining the convolutional neural network of the preprocessing unit and the first recurrent neural network of the vectorization unit by means of the second recurrent neural network, so that a sequence generation model is obtained after the historical scene information acquired by the historical scene information acquisition unit is matched with the sample instruction sequence. After receiving the current scene information acquired by the current scene information acquisition module, the sequence generation model matches it with the historical scene information and takes the sample instruction sequence matched with the historical scene information as the current instruction sequence matched with the current scene information.
Further, the preprocessing module includes: a standardization unit, used for standardizing the current scene information acquired by the current scene information acquisition module to obtain standardized information; and a vectorization unit, used for vectorizing the standardized information obtained by the standardization unit to obtain the preprocessing information.
Further, the current scene information acquisition module includes: the first current scene information acquisition unit is used for acquiring indoor first current scene information by using a first scene acquisition sensor of the unmanned aerial vehicle; the second current scene information acquisition unit is used for acquiring indoor second current scene information by using a second scene acquisition sensor mounted on the unmanned aerial vehicle.
A third aspect of the present invention provides an electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of the above when executing the computer program.
A fourth aspect of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods described above.
The invention provides a three-dimensional scene reproduction method, a reproduction system, an electronic device, and a storage medium, with the following beneficial effects: after the current scene information collected by the unmanned aerial vehicle is preprocessed to obtain preprocessing information, the position of the unmanned aerial vehicle in three-dimensional space can be located through the sequence generation model, and a current instruction sequence is generated accordingly. The client can then send control instructions to the unmanned aerial vehicle according to the instruction sequence, so that the unmanned aerial vehicle collects indoor two-dimensional scene information based on its current position; the scene in three-dimensional space can then be reconstructed and reproduced at the client according to the two-dimensional scene information, giving learners a real scene experience and a good learning experience.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the invention, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic block diagram of a method for reproducing a three-dimensional scene according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a reproduction system of a three-dimensional scene according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention will be clearly described in conjunction with the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, a method for reproducing a three-dimensional scene includes: S1, controlling an unmanned aerial vehicle to collect indoor current scene information, wherein the current scene information is sensor data collected by a sensor on the unmanned aerial vehicle at the current time point; S2, preprocessing the current scene information to obtain preprocessing information; S3, inputting the preprocessing information into a pre-generated sequence generation model, generating a current instruction sequence, and sending the current instruction sequence to the client; S4, after receiving the current instruction sequence, sending, by the client, a control instruction to the unmanned aerial vehicle according to the current instruction sequence, after which the unmanned aerial vehicle collects indoor two-dimensional scene information according to the control instruction and sends it to the client; S5, reconstructing and reproducing the scene in three-dimensional space at the client according to the two-dimensional scene information.
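Before each step is detailed, the overall S1–S5 flow can be summarized in code. The sketch below is illustrative only: `drone`, `model`, and `client` stand for assumed interfaces to the UAV, the trained sequence generation model, and the client software, and every method name on them is hypothetical rather than part of the patent.

```python
import numpy as np

def preprocess(scene):
    """S2: standardization followed by vectorization (assumed z-score form)."""
    a = np.asarray(scene, dtype=np.float32)
    a = (a - a.mean()) / (a.std() + 1e-8)        # standardize
    return a.reshape(-1)                          # vectorize

def reproduce_scene(drone, model, client):
    scene = drone.read_sensors()                  # S1: current scene information
    features = preprocess(scene)                  # S2: preprocessing information
    instructions = model.generate(features)       # S3: current instruction sequence
    frames = []
    for cmd in instructions:                      # S4: client drives the UAV
        drone.send(cmd)
        frames.append(drone.capture_frame())      # 2D scene information
    return client.reconstruct_3d(frames)          # S5: reconstruct and reproduce
```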
The generation method of the sequence generation model comprises the following steps: controlling an unmanned aerial vehicle in advance to collect indoor scene information to obtain historical scene information, wherein the historical scene information is sensor data collected by a sensor on the unmanned aerial vehicle at a past time point, and recording the sample instruction sequence received by the unmanned aerial vehicle under control at that time; preprocessing the historical scene information to obtain preprocessed historical scene information, and training a convolutional neural network with the preprocessed historical scene information, wherein the preprocessing comprises standardization and vectorization; vectorizing the sample instruction sequence, and training a first recurrent neural network with the vectorized sample instruction sequence; and combining the convolutional neural network with the first recurrent neural network by using the second recurrent neural network, so that a sequence generation model is obtained after the historical scene information is matched with the sample instruction sequence. After receiving current scene information, the sequence generation model matches the current scene information with the historical scene information and takes the sample instruction sequence matched with that historical scene information as the current instruction sequence matched with the current scene information.
When controlling the unmanned aerial vehicle to collect indoor historical scene information, in this embodiment the unmanned aerial vehicle is controlled by sending an instruction sequence from the client indoors, for example operating it to move forward and backward or up and down so as to generate sensor data; the instruction sequence and the corresponding sensor data are stored, and the instruction sequence at that time is recorded as a sample instruction sequence. In other embodiments, the unmanned aerial vehicle is held by hand and its position is changed, so that the sensor on the unmanned aerial vehicle generates sensor data; this sensor data is then calibrated, that is, the correspondence between the sensor data and the instruction sequence is calibrated, and the instruction sequence at that time is recorded as a sample instruction sequence.
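A minimal sketch of this data-collection step, pairing each sample instruction with the sensor data it produces; the `drone` interface, the file format, and the one-second settling delay are assumptions for illustration.

```python
import json
import time

def record_samples(drone, commands, log_path="flight_log.jsonl"):
    """Log (instruction, sensor data) pairs for training the sequence model."""
    with open(log_path, "a") as log:
        for cmd in commands:                     # e.g. "forward 50", "up 30"
            drone.send(cmd)
            time.sleep(1.0)                      # let the manoeuvre settle (assumed)
            entry = {"t": time.time(),
                     "instruction": cmd,               # sample instruction sequence
                     "sensors": drone.read_sensors()}  # corresponding sensor data
            log.write(json.dumps(entry) + "\n")
```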
In this embodiment, a convolutional neural network together with a first and a second recurrent neural network is adopted to generate the sequence generation model. The first and second recurrent neural networks are Long Short-Term Memory networks, abbreviated LSTM; the convolutional neural network (Convolutional Neural Network) is abbreviated CNN. After the historical scene information is preprocessed, it is fed into the CNN for feature extraction. The sample instruction sequence data are encoded and vectorized by word embedding, then fed into the first recurrent neural network for feature extraction. Finally, the features extracted by the CNN and by the first recurrent neural network are concatenated, matched, and input into the second recurrent neural network for learning, so that the second recurrent neural network can generate, conditioned on sensor data, the instruction sequence corresponding to that sensor data. A sequence generation model is thus obtained which can generate a current instruction sequence for driving the unmanned aerial vehicle based on the sensor data represented in the current scene information collected by the unmanned aerial vehicle.
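As a concrete illustration of this CNN-plus-two-LSTM structure, a Keras sketch follows. The patent fixes only the architecture outline; the framework choice, all layer sizes, the sensor window shape, and the instruction vocabulary below are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

SENSOR_STEPS, SENSOR_DIM = 64, 8   # assumed sensor window shape
VOCAB, SEQ_LEN = 32, 20            # assumed instruction vocabulary and length

sensor_in = keras.Input(shape=(SENSOR_STEPS, SENSOR_DIM))
x = layers.Conv1D(32, 5, activation="relu")(sensor_in)  # CNN feature extraction
x = layers.GlobalMaxPooling1D()(x)

instr_in = keras.Input(shape=(SEQ_LEN,))
e = layers.Embedding(VOCAB, 32)(instr_in)               # word-embedding vectorization
e = layers.LSTM(64)(e)                                  # first LSTM

merged = layers.Concatenate()([x, e])                   # connect and match features
merged = layers.RepeatVector(SEQ_LEN)(merged)
out = layers.LSTM(64, return_sequences=True)(merged)    # second LSTM learns the mapping
out = layers.Dense(VOCAB, activation="softmax")(out)    # per-step instruction distribution

model = keras.Model([sensor_in, instr_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```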
In other embodiments, the sequence generation model is generated by a generative adversarial network (Generative Adversarial Networks, abbreviated GAN); specifically, DiscoGAN is used. In the process of generating the sequence generation model with the adversarial network, the sensor data represented by the historical scene information are first mapped into an image to obtain a sensor image; the sample instruction sequence is then encoded and mapped into an image to obtain an instruction image; and the sensor image and the instruction image are input into the adversarial network for learning, so that the network learns the mapping from sensor images to instruction images. A sequence generation model is thereby generated which can produce a current instruction sequence for driving the unmanned aerial vehicle based on the sensor data represented in the current scene information collected by the unmanned aerial vehicle.
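The patent does not specify how a one-dimensional sensor series or an encoded instruction sequence becomes an image for DiscoGAN; the helper below shows one minimal, assumed mapping that scales the series to [0, 1] and tiles it into a square grayscale image.

```python
import numpy as np

def series_to_image(series, size=32):
    """Render a 1-D series as a size x size grayscale image (assumed mapping)."""
    s = np.asarray(series, dtype=np.float32)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)   # scale to [0, 1]
    s = np.resize(s, size * size)                     # repeat/truncate to fill the grid
    return s.reshape(size, size)                      # "sensor image" / "instruction image"
```

Both domains rendered this way can then be fed to DiscoGAN's paired generators, which learn the cross-domain mapping with a cycle-consistency loss.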
In other embodiments, if the indoor scene is monotonous and the sensor on the unmanned aerial vehicle is a TOF (time-of-flight) sensor, the feature points at the peaks and troughs of the TOF distance data generated while the unmanned aerial vehicle executes the instruction sequence are extracted using a time-series feature extraction method from digital signal processing, and the extracted feature points are screened and filtered accordingly. From this, the number of executed instructions and the flight trend of the unmanned aerial vehicle can be determined, completing the task of generating the instruction sequence driven by TOF sensor data.
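A sketch of this time-series analysis with SciPy; the median-filter width and prominence threshold are illustrative assumptions, with distances taken in millimetres.

```python
import numpy as np
from scipy.signal import find_peaks, medfilt

def analyze_tof(distances, min_prominence=50):
    """Count executed instructions and judge flight trend from TOF distances."""
    d = medfilt(np.asarray(distances, dtype=float), 5)       # screen/filter outliers
    peaks, _ = find_peaks(d, prominence=min_prominence)      # peak feature points
    troughs, _ = find_peaks(-d, prominence=min_prominence)   # trough feature points
    executed = len(peaks) + len(troughs)          # one extremum per manoeuvre (assumed)
    trend = "away" if d[-1] > d[0] else "toward"  # coarse flight trend
    return executed, trend
```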
Preprocessing the scene information to obtain preprocessing information comprises: standardizing the scene information to obtain standardized information; and vectorizing the standardized information to obtain the preprocessing information.
Controlling the unmanned aerial vehicle to collect indoor current scene information comprises: collecting first current scene information indoors with the first scene acquisition sensor carried by the unmanned aerial vehicle itself; and collecting second current scene information indoors with the second scene acquisition sensor mounted on the unmanned aerial vehicle.
In this embodiment, since the number of sensors carried by the unmanned aerial vehicle itself is fixed in practice, additional sensors are mounted on the unmanned aerial vehicle so that it can collect richer current scene information. The sensors carried by the unmanned aerial vehicle itself are recorded as first scene acquisition sensors, and the sensors mounted on it are recorded as second scene acquisition sensors. To facilitate data transmission between the first scene acquisition sensor, the second scene acquisition sensor, the unmanned aerial vehicle, and the client, a wireless relay module is carried on the unmanned aerial vehicle; the client and the unmanned aerial vehicle are both wirelessly connected to the wireless relay module, and the second scene acquisition sensor is electrically connected to it. During data transmission, the client sends the instruction sequence to the wireless relay module, which forwards it to the unmanned aerial vehicle, thereby controlling its flight; the first and second scene acquisition sensors on the unmanned aerial vehicle send the collected first and second current scene information to the client through the wireless relay module, and the state information of the unmanned aerial vehicle is likewise sent to the client through the wireless relay module.
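Functionally, the relay amounts to bidirectional forwarding. The sketch below shows one round trip as it might be prototyped on a host; on the real device the same logic would run in the ESP8266/ESP32 firmware, and the port numbers and drone address are assumptions.

```python
import socket

CLIENT_PORT = 9000                    # assumed port the client sends to
DRONE_ADDR = ("192.168.10.1", 8889)   # assumed UAV command address

def relay_once():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", CLIENT_PORT))
    data, client = sock.recvfrom(1024)  # instruction sequence from the client
    sock.sendto(data, DRONE_ADDR)       # forward it to the UAV
    reply, _ = sock.recvfrom(1024)      # sensor/state data coming back
    sock.sendto(reply, client)          # relay it to the client
    sock.close()
```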
Specifically, the software on the client can be developed as an application of the appropriate version according to the scene requirements, so the client may be a computer running an application program, or a mobile device running APP software. The wireless relay module supports both an AP (Access Point) mode and a Station mode, that is, it can provide wireless access for other wireless devices and can also connect to the AP of another device; it also has a certain data processing capability, i.e., a serial port, IIC, SPI, or similar communication interface. A wireless module of the ESP8266 or ESP32 type, with WIFI and data processing functions, together with its expansion devices, is therefore selected as the wireless relay module. The unmanned aerial vehicle provides an API interface for secondary development and an AP-mode wireless network function; a small unmanned aerial vehicle of the Tello model is used. The sensor mounted on the unmanned aerial vehicle, namely the second scene acquisition sensor, provides an interface that can transmit its data to the wireless relay module via serial port, IIC, or SPI communication; a TOF sensor is used as the second scene acquisition sensor.
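The Tello's API is a plain-text command protocol over UDP port 8889 ("command" enters SDK mode, then movement commands such as "forward 50" follow). A minimal client that plays back an instruction sequence might look as follows; the local port and the example sequence are assumptions.

```python
import socket

TELLO_ADDR = ("192.168.10.1", 8889)   # Tello's default command address

def fly(sequence=("command", "takeoff", "forward 50", "land")):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9000))      # local port for the drone's replies (assumed)
    sock.settimeout(10)
    for cmd in sequence:
        sock.sendto(cmd.encode("ascii"), TELLO_ADDR)
        reply = sock.recvfrom(1024)[0].decode()  # Tello answers "ok" or "error"
        print(cmd, "->", reply)
    sock.close()
```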
Referring to fig. 2, an embodiment of the present application further provides a reproduction system of a three-dimensional scene, the system comprising: a current scene information acquisition module, a preprocessing module, an instruction sequence generation module, an instruction sending module, a two-dimensional scene acquisition module, and a reproduction module. The current scene information acquisition module is used for controlling the unmanned aerial vehicle to collect indoor current scene information, the current scene information being sensor data collected by a sensor on the unmanned aerial vehicle at the current time point. The preprocessing module is used for preprocessing the current scene information acquired by the current scene information acquisition module to obtain preprocessing information. The instruction sequence generation module is used for inputting the preprocessing information obtained by the preprocessing module into the sequence generation model and sending the instruction sequence generated according to the preprocessing information to the client. The instruction sending module is used for sending, at the client, a control instruction to the unmanned aerial vehicle according to the instruction sequence generated by the instruction sequence generation module, so as to control the unmanned aerial vehicle. The two-dimensional scene acquisition module is used for collecting indoor two-dimensional scene information with the unmanned aerial vehicle according to the instruction sent by the instruction sending module and sending the two-dimensional scene information to the client. The reproduction module is used for reconstructing and reproducing the scene in three-dimensional space according to the two-dimensional scene information acquired by the two-dimensional scene acquisition module.
The sequence generation module comprises: a historical scene information acquisition unit, a preprocessing unit, a vectorization unit, and a sequence model generation unit. The historical scene information acquisition unit is used for controlling the unmanned aerial vehicle in advance to collect indoor scene information, the historical scene information being sensor data collected by a sensor on the unmanned aerial vehicle at a past time point, obtaining the historical scene information, and recording the sample instruction sequence received by the unmanned aerial vehicle under control at that time. The preprocessing unit is used for preprocessing the historical scene information acquired by the historical scene information acquisition unit to obtain preprocessed historical scene information, and for training the convolutional neural network with it, the preprocessing comprising standardization and vectorization. The vectorization unit is used for vectorizing the sample instruction sequence recorded by the historical scene information acquisition unit and training the first recurrent neural network with the vectorized sample instruction sequence. The sequence model generation unit is used for combining the convolutional neural network of the preprocessing unit and the first recurrent neural network of the vectorization unit by means of the second recurrent neural network, so that the sequence generation model is obtained after the historical scene information acquired by the historical scene information acquisition unit is matched with the sample instruction sequence; after receiving the current scene information acquired by the current scene information acquisition module, the sequence generation model matches it with the historical scene information and takes the sample instruction sequence matched with the historical scene information as the current instruction sequence matched with the current scene information.
The preprocessing module comprises a standardization unit and a vectorization unit. The standardization unit is used for standardizing the current scene information acquired by the current scene information acquisition module to obtain standardized information; the vectorization unit is used for vectorizing the standardized information obtained by the standardization unit to obtain the preprocessing information.
The current scene information acquisition module comprises: the first current scene information acquisition unit and the second current scene information acquisition unit; the first current scene information acquisition unit is used for acquiring indoor first current scene information by using a first scene acquisition sensor of the unmanned aerial vehicle; the second current scene information acquisition unit is used for acquiring indoor second current scene information by using a second scene acquisition sensor mounted on the unmanned aerial vehicle.
Referring to fig. 3, an embodiment of the present application further provides an electronic device, including: the apparatus comprises a memory 601, a processor 602 and a computer program stored in the memory 601 and executable on the processor 602, wherein the processor 602 implements the reproduction method of the three-dimensional scene described in the foregoing description when executing the computer program.
In addition, the electronic device further includes: at least one input device 603 and at least one output device 604.
The memory 601, the processor 602, the input device 603, and the output device 604 are connected via a bus 605.
The input device 603 may be a camera, a touch panel, a physical key, a mouse, or the like. The output device 604 may be, in particular, a display screen.
The memory 601 may be a high-speed random access memory (RAM, Random Access Memory) or a non-volatile memory, such as a disk memory. The memory 601 is used to store a set of executable program code, and the processor 602 is coupled to the memory 601.
Further, the embodiments of the present application also provide a computer readable storage medium, which may be provided in the electronic device in the foregoing embodiments, and the computer readable storage medium may be the memory 601 in the foregoing embodiments. The computer-readable storage medium has stored thereon a computer program which, when executed by the processor 602, implements the reproduction method of a three-dimensional scene described in the foregoing embodiment.
Further, the computer-readable medium may be any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the order of actions described, as some steps may be performed in another order or simultaneously in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily all required by the present invention.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The foregoing describes the three-dimensional scene reproduction method, reproduction system, electronic device, and storage medium provided by the present invention. Those skilled in the art may make changes in the specific implementation and application scope in view of the above description of the embodiments; accordingly, the content of this description should not be construed as limiting the scope of the present invention.

Claims (8)

1. A method of reproducing a three-dimensional scene, comprising:
controlling an unmanned aerial vehicle to collect indoor current scene information, wherein the current scene information is sensor data collected by a sensor on the unmanned aerial vehicle at a current time point;
preprocessing the current scene information to obtain preprocessing information;
inputting the preprocessing information into a pre-generated sequence generation model, generating a current instruction sequence, and sending the current instruction sequence to a client;
after receiving the current instruction sequence, the client sends a control instruction to the unmanned aerial vehicle according to the current instruction sequence; after receiving the control instruction, the unmanned aerial vehicle collects indoor two-dimensional scene information according to the control instruction, and sends the two-dimensional scene information to the client;
reconstructing and reproducing a scene in a three-dimensional space according to the two-dimensional scene information at a client;
inputting the preprocessing information into a pre-generated sequence generation model, and generating a current instruction sequence comprises the following steps:
the method comprises the steps of: controlling an unmanned aerial vehicle in advance to collect indoor scene information to obtain historical scene information, wherein the historical scene information is sensor data collected by a sensor on the unmanned aerial vehicle at a past time point, and recording the sample instruction sequence received by the unmanned aerial vehicle under control at that time;
preprocessing the historical scene information, and training a convolutional neural network by using the preprocessed historical scene information, wherein the preprocessing comprises standardization and vectorization;
vectorizing the sample instruction sequence, and training a first recurrent neural network by using the vectorized sample instruction sequence;
and combining the convolutional neural network with the first recurrent neural network by using a second recurrent neural network, so that a sequence generation model is obtained after the historical scene information is matched with the sample instruction sequence, wherein the sequence generation model is used for matching the current scene information with the historical scene information after receiving the current scene information, and taking the sample instruction sequence matched with the historical scene information as the current instruction sequence matched with the current scene information.
2. The method for reproducing three-dimensional scene as defined in claim 1, wherein,
the preprocessing of the scene information to obtain preprocessing information comprises the following steps:
standardizing the scene information to obtain standardized information;
and vectorizing the standardized information to obtain the preprocessing information.
3. The method for reproducing three-dimensional scene as defined in claim 1, wherein,
the controlling and using the unmanned aerial vehicle to collect indoor current scene information comprises the following steps:
acquiring first current scene information in a room by using a first scene acquisition sensor carried by the unmanned aerial vehicle;
and acquiring indoor second current scene information by using a second scene acquisition sensor mounted on the unmanned aerial vehicle.
4. A reproduction system of a three-dimensional scene, comprising:
the current scene information acquisition module is used for controlling the unmanned aerial vehicle to acquire indoor current scene information, wherein the current scene information is sensor data acquired by a sensor on the unmanned aerial vehicle at a current time point;
the preprocessing module is used for preprocessing the current scene information acquired by the current scene information acquisition module to obtain preprocessed information;
the instruction sequence generation module is used for transmitting the preprocessing information obtained by the preprocessing module to the sequence generation module and sending the instruction sequence generated by the instruction sequence generation module according to the preprocessing information to the client;
the instruction sending module is used for sending a control instruction to the unmanned aerial vehicle at the client according to the instruction sequence generated by the instruction sequence generating module so as to control the unmanned aerial vehicle;
the two-dimensional scene acquisition module is used for acquiring indoor two-dimensional scene information by using the unmanned aerial vehicle according to the instruction sent by the instruction sending module and sending the two-dimensional scene information to the client;
the reproduction module is used for reconstructing and reproducing the scene in the three-dimensional space according to the two-dimensional scene information acquired by the two-dimensional scene acquisition module;
the instruction sequence generation module includes:
a historical scene information acquisition unit, used for controlling the unmanned aerial vehicle in advance to acquire indoor scene information, the historical scene information being sensor data acquired by a sensor on the unmanned aerial vehicle at a past time point, obtaining the historical scene information, and recording the sample instruction sequence received by the unmanned aerial vehicle under control at that time;
the preprocessing unit is used for preprocessing the historical scene information acquired by the historical scene information acquisition unit to obtain preprocessed historical scene information, and training a convolutional neural network by using the preprocessed historical scene information, wherein the preprocessing comprises standardization and vectorization;
the vectorization unit is used for vectorizing the sample instruction sequence recorded by the historical scene information acquisition unit and training a first recurrent neural network by using the vectorized sample instruction sequence;
the sequence model generation unit is used for combining the convolutional neural network of the preprocessing unit and the first recurrent neural network of the vectorization unit by using the second recurrent neural network, so that a sequence generation model is obtained after the historical scene information acquired by the historical scene information acquisition unit is matched with the sample instruction sequence, and the sequence generation model is used for matching the current scene information with the historical scene information after receiving the current scene information acquired by the current scene information acquisition module and taking the sample instruction sequence matched with the historical scene information as the current instruction sequence matched with the current scene information.
5. The reproduction system of a three-dimensional scene according to claim 4, wherein,
the preprocessing module comprises:
the standardized unit is used for standardizing the current scene information acquired by the current scene information acquisition module to obtain standardized information;
and the vectorization unit is used for vectorizing the standardized information obtained by the standardization unit to obtain the preprocessing information.
6. The reproduction system of a three-dimensional scene according to claim 4, wherein,
the current scene information acquisition module comprises:
the first current scene information acquisition unit is used for acquiring indoor first current scene information by using a first scene acquisition sensor of the unmanned aerial vehicle;
the second current scene information acquisition unit is used for acquiring indoor second current scene information by using a second scene acquisition sensor mounted on the unmanned aerial vehicle.
7. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 3 when executing the computer program.
8. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any of claims 1 to 3.
CN201910392509.8A 2019-05-13 2019-05-13 Reproduction method, reproduction system, electronic device and storage medium for three-dimensional scene Active CN110298912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910392509.8A CN110298912B (en) 2019-05-13 2019-05-13 Reproduction method, reproduction system, electronic device and storage medium for three-dimensional scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910392509.8A CN110298912B (en) 2019-05-13 2019-05-13 Reproduction method, reproduction system, electronic device and storage medium for three-dimensional scene

Publications (2)

Publication Number Publication Date
CN110298912A CN110298912A (en) 2019-10-01
CN110298912B true CN110298912B (en) 2023-06-27

Family

ID=68026893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910392509.8A Active CN110298912B (en) 2019-05-13 2019-05-13 Reproduction method, reproduction system, electronic device and storage medium for three-dimensional scene

Country Status (1)

Country Link
CN (1) CN110298912B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112146660B (en) * 2020-09-25 2022-05-03 电子科技大学 Indoor map positioning method based on dynamic word vector
CN112529248B (en) * 2020-11-09 2024-06-04 北京宇航系统工程研究所 Data-driven intelligent flying space mirror image system of carrier rocket
CN114078347B (en) * 2021-11-24 2024-03-22 武汉小绿人动力技术股份有限公司 Teenager STEAM education system and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107707900A (en) * 2017-10-17 2018-02-16 西安万像电子科技有限公司 Processing method, the device and system of content of multimedia
WO2018119676A1 (en) * 2016-12-27 2018-07-05 深圳前海达闼云端智能科技有限公司 Display data processing method and apparatus
WO2018119889A1 (en) * 2016-12-29 2018-07-05 深圳前海达闼云端智能科技有限公司 Three-dimensional scene positioning method and device
CN109407679A (en) * 2018-12-28 2019-03-01 百度在线网络技术(北京)有限公司 Method and apparatus for controlling pilotless automobile

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018119676A1 (en) * 2016-12-27 2018-07-05 深圳前海达闼云端智能科技有限公司 Display data processing method and apparatus
WO2018119889A1 (en) * 2016-12-29 2018-07-05 深圳前海达闼云端智能科技有限公司 Three-dimensional scene positioning method and device
CN107707900A (en) * 2017-10-17 2018-02-16 西安万像电子科技有限公司 Processing method, the device and system of content of multimedia
CN109407679A (en) * 2018-12-28 2019-03-01 百度在线网络技术(北京)有限公司 Method and apparatus for controlling pilotless automobile

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on 3D scene reconstruction combining 3D laser point clouds with UAV images; 闫阳阳 et al.; 《测绘通报》 (Bulletin of Surveying and Mapping); 2016-01-25 (No. 01); full text *
Automatic acquisition of large-scene sequence images and 3D modeling based on UAV; 李康 et al.; 《西北大学学报(自然科学版)》 (Journal of Northwest University, Natural Science Edition); 2017-02-25 (No. 01); full text *

Also Published As

Publication number Publication date
CN110298912A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
CN106254848B (en) A kind of learning method and terminal based on augmented reality
CN110298912B (en) Reproduction method, reproduction system, electronic device and storage medium for three-dimensional scene
US11321583B2 (en) Image annotating method and electronic device
CN109902659B (en) Method and apparatus for processing human body image
EP4030381A1 (en) Artificial-intelligence-based image processing method and apparatus, and device and storage medium
CN112562019A (en) Image color adjusting method and device, computer readable medium and electronic equipment
KR101548160B1 (en) Interacting system and method for wargame model
CN109460482B (en) Courseware display method and device, computer equipment and computer readable storage medium
CN107908641A (en) A kind of method and system for obtaining picture labeled data
CN109271153B (en) Method for acquiring programming language based on programming education system and electronic equipment
CN110516749A (en) Model training method, method for processing video frequency, device, medium and calculating equipment
CN113391992B (en) Test data generation method and device, storage medium and electronic equipment
CN111124902A (en) Object operating method and device, computer-readable storage medium and electronic device
CN109784185A (en) Client's food and drink evaluation automatic obtaining method and device based on micro- Expression Recognition
CN113867532A (en) Evaluation system and evaluation method based on virtual reality skill training
CN110516153B (en) Intelligent video pushing method and device, storage medium and electronic device
CN115690592B (en) Image processing method and model training method
CN115272667B (en) Farmland image segmentation model training method and device, electronic equipment and medium
CN110118603A (en) Localization method, device, terminal and the storage medium of target object
CN114926665A (en) Method and terminal for off-line single training by utilizing AR technology
CN113568735A (en) Data processing method and system
CN114764930A (en) Image processing method, image processing device, storage medium and computer equipment
CN108932088B (en) Virtual object collection method and portable electronic device
Alvarez et al. Ariane: a web-based and mobile tool to guide the design of augmented reality learning activities
CN105702109B (en) Internet of Things net operation teaching method and system

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant