CN110753218B - Digital twinning system and method and computer equipment - Google Patents

Digital twinning system and method and computer equipment

Info

Publication number
CN110753218B
Authority
CN
China
Prior art keywords
virtual
source data
real
dimensional
dimensional scene
Prior art date
Legal status
Active
Application number
CN201911145077.7A
Other languages
Chinese (zh)
Other versions
CN110753218A (en)
Inventor
石立阳
程远初
徐建明
陈奇毅
高星
朱文辉
赵康嘉
秦伟
Current Assignee
PCI Technology Group Co Ltd
Original Assignee
PCI Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by PCI Technology Group Co Ltd
Publication of CN110753218A
Application granted
Publication of CN110753218B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/10 Architectures or entities
    • H04L65/1045 Proxies, e.g. for session initiation protocol [SIP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/161 Encoding, multiplexing or demultiplexing different image signal components
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/167 Synchronising or controlling image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/261 Image signal generators with monoscopic-to-stereoscopic image conversion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/296 Synchronisation thereof; Control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a digital twinning system, a digital twinning method and computer equipment. In this technical scheme, the multi-channel images acquired by a multi-channel image acquisition system are returned in real time, through a multi-channel image real-time backhaul control system and a data synchronization system, to a video real-time calculation system for real-time computation; the computation result, the three-dimensional scene and the multi-source data are then mapped, fused and visually displayed by a virtual three-dimensional rendering system. Interactive operations can be performed on the virtual-real interaction middleware in the interactive interface to drive the entity control system, thereby achieving control of the field equipment.

Description

Digital twinning system and method and computer equipment
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a digital twinning system, a digital twinning method and computer equipment.
Background
At present, a digital wave represented by new technologies such as the Internet of Things, big data and artificial intelligence is sweeping the globe, and the physical world and its corresponding digital world are forming two parallel, interacting systems. The digital world exists to serve the physical world, and the physical world becomes more efficient and orderly because of it; digital twin technology has emerged in response, gradually extending from manufacturing into urban space and deeply influencing urban planning, construction and development.
In a digital twin city, a city information model based on multi-source data fusion is the core, the intelligent facilities and perception systems deployed across the whole city are the premise, and an intelligent private network supporting efficient operation of the twin city is the guarantee. A digital twin city can provide support in such directions as a relatively high level of digitization, modeling of operating mechanisms, collaborative optimization of virtual and real spaces, and multi-dimensional intelligent decision support.
At present, a digital twin system only displays a three-dimensional model in a three-dimensional display interface, and cannot control field equipment according to real-time detection conditions.
Disclosure of Invention
The embodiment of the application provides a digital twin system, a digital twin method and computer equipment, so that when a digital scene is constructed and displayed, field equipment can be controlled according to real-time detection conditions, making the scene elements truly knowable, measurable and controllable.
In a first aspect, an embodiment of the present application provides a digital twinning system, which includes a scene superposition and analysis system, a video real-time calculation system, a multi-source data acquisition, processing and analysis system, a virtual three-dimensional rendering system, and virtual-real interaction middleware, where:
the scene superposition and analysis system stores a three-dimensional scene of the site and uses the three-dimensional scene as a base map;
the video real-time calculation system is used for decoding the received video stream in real time to obtain video frames;
the multi-source data acquisition, processing and analysis system receives and stores multi-source data, performs basic analysis and processing, and converts the multi-source data into a three-dimensional representation format;
the virtual three-dimensional rendering system is used for mapping and fusing the video frames into the three-dimensional scene with the three-dimensional scene as a base map, performing position matching and fusion of the multi-source data in the three-dimensional scene, mapping the virtual-real interaction middleware into the three-dimensional scene, rendering and interacting with the fused three-dimensional scene, generating an interaction instruction in response to an interactive operation on the virtual-real interaction middleware, and sending the interaction instruction to the virtual-real interaction middleware;
and the virtual-real interaction middleware is used for forwarding the interaction instruction sent by the virtual three-dimensional rendering system to external systems.
Furthermore, the video stream is generated by a multi-channel image acquisition system capturing images at a plurality of positions on the site, and the video stream generated by the multi-channel image acquisition system is returned by a multi-channel image real-time backhaul control system.
Furthermore, the system further comprises a data synchronization system for performing data synchronization on the video streams returned by the multi-channel image real-time backhaul control system, where the data synchronization is specifically time synchronization, so that returned video streams of the same batch fall within the same time slice.
Further, the video real-time calculation system comprises a video frame extraction module and a hardware decoder, wherein:
the video frame extraction module extracts frame data from the video stream using the FFMPEG library;
and the hardware decoder is used for decoding the frame data to obtain video frames.
Further, the multi-source data acquisition, processing and analysis system comprises a multi-source data acquisition system and a multi-source data analysis system, wherein:
the multi-source data acquisition system is used for receiving and storing multi-source data returned by the multi-source sensor;
and the multi-source data analysis system is used for carrying out basic analysis processing on the multi-source data and converting the multi-source data into a three-dimensional representation format.
Further, the virtual-real interaction middleware comprises an instruction receiving module and an instruction transmission module, wherein:
the instruction receiving module is used for receiving an interactive instruction sent by the virtual three-dimensional rendering system;
and the instruction transmission module is used for transmitting the interactive instruction to the entity control system pointed by the interactive instruction.
Furthermore, according to the positional correspondence between the multi-source sensor returning the multi-source data and the virtual-real interaction middleware, the rendering position of the multi-source data in the three-dimensional scene is made to correspond to the position of the virtual-real interaction middleware.
Further, the virtual three-dimensional rendering system also responds to the change of the multi-source data to change the rendering state of the virtual-real interaction middleware in the three-dimensional scene.
In a second aspect, embodiments of the present application provide a digital twinning method, including:
the scene superposition and analysis system stores a three-dimensional scene of the site and uses the three-dimensional scene as a base map;
the video real-time calculation system decodes the received video stream in real time to obtain video frames;
the multi-source data acquisition, processing and analysis system receives and stores multi-source data, performs basic analysis and processing, and converts the multi-source data into a three-dimensional representation format;
the virtual three-dimensional rendering system takes a three-dimensional scene as a base map, maps and fuses the video frames in the three-dimensional scene, performs position matching and fusion on the multi-source data in the three-dimensional scene, maps virtual-real interaction middleware in the three-dimensional scene, and renders and interacts the fused three-dimensional scene;
the virtual three-dimensional rendering system responds to the interactive operation on the virtual-real interactive middleware to generate an interactive instruction and send the interactive instruction to the virtual-real interactive middleware;
and the virtual-real interaction middleware forwards the interaction instruction sent by the virtual three-dimensional rendering system to external systems.
In a third aspect, an embodiment of the present application provides a computer device, including: a display screen, an input device, a memory, and one or more processors;
the display screen is used for displaying a virtual-real interactive interface;
the input device is used for receiving interactive operation;
the memory is used for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the digital twinning method of the second aspect.
In a fourth aspect, the present application provides a storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are configured to perform the digital twinning method according to the second aspect.
In this scheme, a multi-channel image acquisition system acquires multi-channel images of the site, a multi-channel image real-time backhaul control system returns the video streams in real time and time-synchronizes them, and the synchronized video streams are decoded in real time by the video real-time calculation system to obtain video frames; multi-source sensors detect the field environment and generate corresponding multi-source data, which is transmitted to the multi-source data acquisition, processing and analysis system for analysis and processing and converted into a three-dimensional representation format that can be displayed in a three-dimensional scene; then, with the three-dimensional scene of the site as a base map, the virtual three-dimensional rendering system matches, maps and fuses the video frames and the multi-source data into the three-dimensional scene, renders the fused three-dimensional scene and displays it visually. The fused three-dimensional scene can be interacted with through the virtual three-dimensional rendering system: an interaction instruction generated by an interactive operation is sent to the entity control system through the virtual-real interaction middleware, and the entity control system responds to the interaction instruction to control the field equipment. According to the embodiment of the application, the digital twin system maps, fuses and visually displays the three-dimensional scene, the real-time video frames and the on-site multi-source data, so the three-dimensional display interface is more realistic and comprehensive; at the same time, the fused three-dimensional scene is interacted with through the virtual three-dimensional rendering system, and when field equipment needs to be controlled, an interaction instruction is sent through the virtual-real interaction middleware to the entity control system that controls the field equipment, so that the field equipment is controlled and the scene elements become truly knowable, measurable and controllable.
Drawings
FIG. 1 is a schematic structural diagram of a digital twinning system provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of another digital twinning system provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of another digital twinning system provided in an embodiment of the present application;
FIG. 4 is a flow chart of a digital twinning method provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, specific embodiments of the present application will be described in detail with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some but not all of the relevant portions of the present application are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 shows a schematic structural diagram of a digital twinning system provided by an embodiment of the present application. Referring to fig. 1, the digital twin system includes a scene superposition and analysis system 110, a video real-time solution system 140, a multi-source data acquisition and processing analysis system 150, a virtual three-dimensional rendering system 160, and a virtual-real interaction middleware 170. Wherein:
The scene overlaying and analyzing system 110 stores a three-dimensional scene of the site and uses the three-dimensional scene as a base map. Specifically, the three-dimensional scene may be obtained from an external server or built by local three-dimensional modeling; once obtained, it is stored locally and used as a base map, meaning that other data of interest is fused onto the three-dimensional scene and the scene serves as the starting point of basic analysis.
The video real-time calculation system 140 is used for decoding the received video stream in real time to obtain video frames.
The multi-source data acquisition, processing and analysis system 150 receives and stores multi-source data, performs basic analysis processing, and converts the multi-source data into a three-dimensional representation format.
The virtual three-dimensional rendering system 160 takes a three-dimensional scene as a base map, maps and fuses video frames in the three-dimensional scene, performs position matching and fusion on multi-source data in the three-dimensional scene, maps the virtual-real interaction middleware 170 in the three-dimensional scene, renders and interacts with the fused three-dimensional scene, generates an interaction instruction in response to an interaction operation on the virtual-real interaction middleware 170, and sends the interaction instruction to the virtual-real interaction middleware 170.
Specifically, when the video frame is mapped and fused in the three-dimensional scene, the virtual three-dimensional rendering system 160 determines a mapping relationship between a pixel in the video frame and a three-dimensional point in the three-dimensional scene, performs texture mapping on the video frame in the three-dimensional scene according to the mapping relationship, and performs smooth transition processing on an overlapped region of the texture mapping, thereby fusing the video frame in the three-dimensional scene.
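By way of illustration only (the embodiment does not prescribe a particular algorithm), the sketch below computes texture coordinates for scene points by projecting them through a calibrated pinhole camera; the intrinsic matrix K and the extrinsics R, t are assumed to come from a prior calibration step and are not specified in the text:

```python
# Sketch: derive the pixel-to-3D-point mapping by projecting scene points
# through a calibrated camera; the resulting UVs drive the texture mapping
# of the video frame onto the three-dimensional scene. K, R, t are assumed
# inputs from camera calibration.
import numpy as np

def project_to_uv(points_3d, K, R, t, width, height):
    """points_3d: (N, 3) world points. Returns (N, 2) UVs in [0, 1] and a
    mask of points that actually fall inside the video frame."""
    cam = points_3d @ R.T + t               # world -> camera coordinates
    pix = cam @ K.T                         # camera -> homogeneous pixels
    pix = pix[:, :2] / pix[:, 2:3]          # perspective divide
    uv = pix / np.array([width, height])    # normalize into texture space
    visible = (cam[:, 2] > 0) & (uv >= 0).all(axis=1) & (uv <= 1).all(axis=1)
    return uv, visible
```

Where the mappings of adjacent cameras overlap, their contributions could be blended, for example weighted by the distance to each frame's border, which is one possible way to realize the smooth transition processing mentioned above.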
Further, when the virtual three-dimensional rendering system 160 performs position matching and fusion on the multi-source data in the three-dimensional scene, the position in the three-dimensional scene corresponding to the multi-source data is determined according to the position corresponding relationship between the multi-source sensor and the virtual-real interaction middleware 170, that is, according to the position information or the device identification number carried in the multi-source data, and the multi-source data is mapped in the three-dimensional scene according to the target representation form, so that the rendering position of the multi-source data in the three-dimensional scene corresponds to the position of the virtual-real interaction middleware 170, thereby completing the position matching and fusion of the multi-source data in the three-dimensional scene.
The virtual-real interaction middleware 170 is used for forwarding the interaction instructions sent by the virtual three-dimensional rendering system 160, where each interaction instruction is directed at a device to be controlled and instructs the field device to perform a corresponding action.
In summary, the video real-time calculation system 140 decodes the received video stream in real time to obtain video frames, and the multi-source data acquisition, processing and analysis system 150 processes the received multi-source data and converts it into a three-dimensional representation format; the video frames, the multi-source data and the three-dimensional scene are then mapped, fused and visually displayed through the virtual three-dimensional rendering system 160. The rendered three-dimensional scene supports interactive operation: an interaction instruction generated by interaction is sent through the virtual-real interaction middleware 170 to the control module of the field device, and the field device responds to the interaction instruction by performing the corresponding action, so that the field device can be controlled.
Fig. 2 shows a schematic structural diagram of another digital twinning system provided by an embodiment of the present application. Referring to fig. 2, the digital twin system includes a scene superposition and analysis system 110, a video real-time solution system 140, a multi-source data acquisition and processing analysis system 150, a virtual three-dimensional rendering system 160, and a virtual-real interaction middleware 170. Wherein:
the scene overlaying and analyzing system 110 stores a three-dimensional scene of a scene, and uses the three-dimensional scene as a base map.
The video real-time calculation system 140 is used for decoding the received video stream in real time to obtain video frames.
Specifically, the video real-time calculation system 140 includes a video frame extraction module 141 and a hardware decoder 142, wherein:
the video frame extracting module 141 extracts frame data from the video stream using the FFMPEG library. The FFMPEG library is a set of open source computer programs that can be used to record, convert digital audio, video, and convert them into streams, and can fulfill the requirement of extracting frame data in this embodiment.
The hardware decoder 142 decodes the frame data to obtain video frames. In this embodiment, the hardware decoder 142 is the independent video decoding module built into an NVIDIA graphics card.
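As a minimal, hypothetical sketch of this extract-and-decode step (not the patented implementation), the FFMPEG tool can be driven from Python to pull a camera stream, decode it on NVIDIA hardware, and hand raw frames onward; the RTSP URL, codec and resolution are assumptions:

```python
# Sketch: pull one camera stream with FFmpeg, decode it with the NVIDIA
# hardware decoder (NVDEC via h264_cuvid), and read raw BGR frames from
# the pipe. URL, codec and resolution are illustrative assumptions.
import subprocess
import numpy as np

WIDTH, HEIGHT = 1920, 1080  # matches the 1920x1080 capture devices above

def frames(rtsp_url):
    """Yield raw BGR frames decoded by FFmpeg on NVIDIA hardware."""
    cmd = [
        "ffmpeg",
        "-c:v", "h264_cuvid",          # NVIDIA hardware H.264 decoder
        "-i", rtsp_url,                # hypothetical camera stream URL
        "-f", "rawvideo",              # emit raw frames on stdout
        "-pix_fmt", "bgr24",
        "pipe:1",
    ]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    size = WIDTH * HEIGHT * 3
    try:
        while True:
            buf = proc.stdout.read(size)
            if len(buf) < size:
                break                  # stream ended or was cut
            yield np.frombuffer(buf, np.uint8).reshape(HEIGHT, WIDTH, 3)
    finally:
        proc.terminate()
```

Using the h264_cuvid decoder assumes an FFmpeg build with NVDEC support; a software decoder could be substituted by dropping the -c:v option.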
Further, the video stream is generated by a multi-channel image acquisition system 190 (composed of multiple video capture devices) capturing images at multiple locations on the site.
The multi-source data acquisition, processing and analysis system 150 receives and stores multi-source data, performs basic analysis processing, and converts the multi-source data into a three-dimensional representation format.
Specifically, the multi-source data acquisition, processing and analysis system 150 includes a multi-source data acquisition system 151 and a multi-source data analysis system 152, wherein:
and the multi-source data acquisition system 151 is used for receiving and storing multi-source data returned by the multi-source sensor.
The multi-source sensors are selected according to the targets, equipment and actual conditions of interest on the site and arranged at the corresponding positions. A multi-source data access switch is installed on site to access and aggregate the monitoring data of the multi-source sensors and transmit it, in a wired or wireless manner, to the multi-source data access switch arranged on the multi-source data acquisition system 151 side; that switch sends the received multi-source data to the multi-source data acquisition system 151, which in turn sends it to the multi-source data analysis system 152.
And the multi-source data analysis system 152 is used for performing basic analysis processing on the multi-source data and converting the multi-source data into a three-dimensional representation format.
Illustratively, after the multi-source data acquisition system 151 outputs multi-source data reflecting the condition of the monitored equipment, the multi-source data analysis system 152 receives the multi-source data as required and performs basic analysis processing on it, such as AD conversion, threshold analysis, trend analysis, early-warning analysis, value-range analysis and working-state analysis. The three-dimensional representation format is understood as a format corresponding to a target representation form in the virtual three-dimensional rendering system 160, where the target representation form may be one or a combination of real-time values, real-time states, data tables, colors and the like of the monitored data.
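A minimal sketch of such basic analysis and conversion, with hypothetical field names and thresholds (the embodiment does not fix a concrete format), might be:

```python
# Sketch: turn a raw sensor reading into a record the rendering side can
# consume, one possible "three-dimensional representation format". The
# field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RenderRecord:
    device_id: str
    value: float
    state: str   # e.g. "normal" / "warning" (working state)
    color: str   # color used when rendering the middleware in the scene

def analyze(device_id, raw_value, warn_threshold):
    """Basic threshold analysis, then conversion to a renderable record."""
    state = "warning" if raw_value >= warn_threshold else "normal"
    color = "#ff4040" if state == "warning" else "#40c040"
    return RenderRecord(device_id, raw_value, state, color)
```

The resulting record carries exactly the kinds of target representation forms listed above: a real-time value, a working state and a color.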
The virtual three-dimensional rendering system 160 takes a three-dimensional scene as a base map, maps and fuses video frames in the three-dimensional scene, performs position matching and fusion on multi-source data in the three-dimensional scene, maps the virtual-real interaction middleware 170 in the three-dimensional scene, renders and interacts with the fused three-dimensional scene, generates an interaction instruction in response to an interaction operation on the virtual-real interaction middleware 170, and sends the interaction instruction to the virtual-real interaction middleware 170.
The virtual-real interaction middleware 170 is used for forwarding the interaction instructions sent by the virtual three-dimensional rendering system 160.
Specifically, the virtual-real interaction middleware 170 includes an instruction receiving module 171 and an instruction transmitting module 172, where the instruction receiving module 171 is configured to receive an interaction instruction sent by the virtual three-dimensional rendering system 160; and the instruction transmission module 172 is used for transmitting the interactive instruction to the equipment pointed by the interactive instruction.
In summary, the received video stream is decoded in real time by the video frame extraction module 141 and the hardware decoder 142 to obtain video frames; meanwhile, the multi-source data acquisition system 151 receives the multi-source data, which is processed by the multi-source data analysis system 152 and converted into a three-dimensional representation format. The video frames, the multi-source data and the three-dimensional scene are then mapped, fused and visually displayed through the virtual three-dimensional rendering system 160. The rendered three-dimensional scene supports interactive operation: an interaction instruction generated by interaction is sent to the instruction receiving module 171 and forwarded by the instruction transmission module 172 to the control module of the field device, and the field device responds to the interaction instruction by performing the corresponding action, so that the field device is controlled.
Fig. 3 is a schematic structural diagram of another digital twinning system provided by an embodiment of the present application. Referring to fig. 3, the digital twin system includes a scene superposition and analysis system 110, a data synchronization system 120, a video real-time calculation system 140, a multi-source data acquisition, processing and analysis system 150, a virtual three-dimensional rendering system 160, and virtual-real interaction middleware 170, where the data synchronization system 120 is connected to a multi-channel image real-time backhaul control system 130, the multi-channel image real-time backhaul control system 130 is connected to a multi-channel image acquisition system 190, and the virtual-real interaction middleware 170 is connected to an entity control system 180.
Specifically, the scene overlaying and analyzing system 110 stores a three-dimensional scene of a scene, and uses the three-dimensional scene as a base map. The source of the three-dimensional scene can be obtained by adding from an external server or by performing local three-dimensional modeling, the three-dimensional scene is stored locally after being obtained, the three-dimensional scene is used as a base map, namely, other concerned data is fused on the three-dimensional scene, and the three-dimensional scene is used as a starting point of basic analysis.
Further, the scene superposition and analysis system 110 divides the three-dimensional data of the three-dimensional scene into blocks. When the on-site three-dimensional scene is updated, the scene superposition and analysis system 110 receives a three-dimensional update data packet for the corresponding blocks; the packet should identify the blocks whose three-dimensional data it updates, and the scene superposition and analysis system 110 replaces the three-dimensional data of those blocks with the three-dimensional data in the packet, thereby ensuring the timeliness of the three-dimensional scene.
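A sketch of this block-wise replacement, with an assumed packet layout, could look like:

```python
# Sketch: apply a block-wise update packet to a tiled three-dimensional
# scene, as described above. Block IDs and payload layout are assumptions.
class SceneStore:
    """Holds the three-dimensional scene as separately replaceable blocks."""

    def __init__(self):
        self.blocks = {}  # block_id -> three-dimensional data for that block

    def apply_update(self, packet):
        """packet: {"block_id": ..., "data": ...} (assumed layout).
        Only the addressed block is replaced; the rest of the base map
        stays untouched, keeping the scene up to date cheaply."""
        self.blocks[packet["block_id"]] = packet["data"]
```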
Specifically, the multi-channel image capturing system 190 includes a multi-channel video capturing device for capturing images at a plurality of locations on a site and generating a video stream.
In this embodiment, the multi-channel image acquisition system should support no fewer than 100 video capture devices (e.g., cameras) at maximum. Each video capture device has no fewer than 2 million pixels and a resolution of 1920x1080, and the following functions can be selected according to actual needs: built-in ICR dual-filter day/night switching, fog penetration, electronic image stabilization, multiple white-balance modes, automatic iris, support for H.264 encoding, and the like.
Each video capture device monitors a different area of the site, and together the multi-channel video capture devices cover the site range corresponding to the three-dimensional scene; that is, the whole range of interest on the site is monitored.
Further, the multi-channel image real-time backhaul control system 130 is configured to return the video streams generated by the multi-channel image acquisition system 190.
In this embodiment, the effective transmission distance of the multi-channel image real-time backhaul control system 130 should be no less than 3 km, the video bitrate no less than 8 Mbps, and the delay no more than 80 ms, so as to ensure the timeliness of the display effect.
Illustratively, an access switch is arranged on the multi-channel image acquisition system 190 side to collect the video streams generated by the multi-channel image acquisition system 190 and converge them into an aggregation switch or middle station; the aggregation switch or middle station preprocesses the video streams and sends them to the multi-channel image real-time backhaul control system 130, which transmits them to the data synchronization system 120 for synchronization processing.
Optionally, the aggregation switch and the access switches on either side may communicate in a wired and/or wireless manner. For wired connections, modes such as RS232, RS485, RJ45 or a bus may be used; for wireless connections over short distances, near-field communication modules such as WiFi, ZigBee or Bluetooth may be used, while over longer distances, long-range wireless communication such as a wireless bridge, 4G module or 5G module may be used.
The data synchronization system 120 receives the video streams returned by the multi-channel image real-time backhaul control system 130 and performs data synchronization on the returned video streams; the synchronized video streams are then sent to the video real-time calculation system 140 for decoding. The data synchronization is specifically time synchronization, so that returned video streams of the same batch fall within the same time slice. In this embodiment, the data synchronization system 120 should support data synchronization of video streams returned by no fewer than 100 video capture devices at maximum. A time slice space can be understood as an abstraction of real time into a number of intervals of fixed size.
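As an illustration of the time-slice idea, the sketch below buckets frames from many cameras into fixed-size slices so that each batch shares one slice; the 40 ms slice width is an assumption, not a value taken from this embodiment:

```python
# Sketch: group returned frames into fixed-size time slices so that one
# batch of multi-channel frames shares the same slice, per the text above.
SLICE_MS = 40  # assumed slice width; one slice = one "batch" of frames

def slice_index(timestamp_ms):
    """Map a frame timestamp onto a fixed-size time slice."""
    return timestamp_ms // SLICE_MS

def synchronize(frames):
    """frames: iterable of (camera_id, timestamp_ms, frame).
    Returns {slice_index: {camera_id: frame}}, keeping the newest frame
    per camera within each slice."""
    batches = {}
    for camera_id, ts, frame in frames:
        batches.setdefault(slice_index(ts), {})[camera_id] = frame
    return batches
```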
Specifically, the video real-time calculation system 140 is configured to decode the video stream in real time to obtain video frames.
Further, the video real-time calculation system 140 includes a video frame extraction module 141 and a hardware decoder 142, wherein:
the video frame extracting module 141 extracts frame data from the video stream using the FFMPEG library. The FFMPEG library is a set of open source computer programs that can be used to record, convert digital audio, video, and convert them into streams, and can fulfill the requirement of extracting frame data in this embodiment.
The hardware decoder 142 decodes the frame data to obtain video frames. In this embodiment, the hardware decoder 142 is the independent video decoding module built into an NVIDIA graphics card, and supports H.264 and H.265 decoding with a maximum resolution of 8K.
Specifically, the multi-source data acquisition, processing and analysis system 150 receives and stores the multi-source data returned by the multi-source sensors, performs basic analysis and processing, and converts the multi-source data into a three-dimensional representation format.
Further, the multi-source data acquisition, processing and analysis system 150 includes a multi-source data acquisition system 151 and a multi-source data analysis system 152, wherein:
and the multi-source data acquisition system 151 is used for receiving and storing multi-source data returned by the multi-source sensor.
Illustratively, the multi-source sensors include at least one or more of resistive sensors, capacitive sensors, inductive sensors, voltage sensors, pyroelectric sensors, impedance sensors, magnetoelectric sensors, photoelectric sensors, resonant sensors, hall sensors, ultrasonic sensors, isotopic sensors, electrochemical sensors, microwave sensors, and the like.
The multi-source sensors are selected according to the targets, equipment and actual conditions of interest on the site and arranged at the corresponding positions. A multi-source data access switch is installed on site to access and aggregate the monitoring data of the multi-source sensors and transmit it, in a wired or wireless manner, to the multi-source data access switch arranged on the multi-source data acquisition system 151 side; that switch sends the received multi-source data to the multi-source data acquisition system 151, which in turn sends it to the multi-source data analysis system 152.
Specifically, the multi-source data analysis system 152 is configured to perform basic analysis processing on the multi-source data and convert the multi-source data into a three-dimensional representation format.
Illustratively, after the multi-source data acquisition system 151 outputs multi-source data reflecting the condition of the monitored equipment, the multi-source data analysis system 152 receives the multi-source data as required and performs basic analysis processing on it, such as AD conversion, threshold analysis, trend analysis, early-warning analysis, value-range analysis and working-state analysis. The three-dimensional representation format is understood as a format corresponding to a target representation form in the virtual three-dimensional rendering system 160, where the target representation form may be one or a combination of real-time values, real-time states, data tables, colors and the like of the monitored data.
Specifically, the virtual three-dimensional rendering system 160 maps and fuses video frames in a three-dimensional scene with the three-dimensional scene as a base map, performs position matching and fusion on multi-source data in the three-dimensional scene, maps the virtual-real interaction middleware 170 in the three-dimensional scene, renders and interacts with the fused three-dimensional scene, generates an interaction instruction in response to an interaction operation on the virtual-real interaction middleware 170, and sends the interaction instruction to the virtual-real interaction middleware 170.
Specifically, when mapping and fusing a video frame in a three-dimensional scene, determining a mapping relationship between pixels in the video frame and three-dimensional points in the three-dimensional scene, performing texture mapping on the video frame in the three-dimensional scene according to the mapping relationship, and performing smooth transition processing on a superposition region of the texture mapping, so as to fuse the video frame in the three-dimensional scene.
Further, when the multi-source data is subjected to position matching and fusion in the three-dimensional scene, the position of the multi-source data in the three-dimensional scene is determined according to the position corresponding relation between the multi-source sensor and the virtual-real interaction middleware 170, namely according to position information or equipment identification number carried in the multi-source data, and the multi-source data is mapped in the three-dimensional scene according to the target expression form, so that the position of the multi-source data rendered in the three-dimensional scene corresponds to the position of the virtual-real interaction middleware 170, and the position matching and fusion of the multi-source data in the three-dimensional scene is completed.
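A hedged sketch of this lookup, assuming a simple in-memory registry keyed by device identification number (the registry structure is an illustration, not part of the embodiment), might be:

```python
# Sketch: resolve where a multi-source reading is rendered by looking up
# the virtual-real middleware registered under the same device ID.
# device_id -> (x, y, z) position of the middleware model in the scene
middleware_positions = {
    "sensor-042": (12.5, 3.0, 7.8),  # hypothetical entry
}

def render_position(device_id):
    """Multi-source data carrying this device identification number is
    rendered at the registered middleware position, which completes the
    position matching described above."""
    return middleware_positions[device_id]
```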
Further, the virtual three-dimensional rendering system 160 also changes the rendering state of the virtual-real interaction middleware 170 in the three-dimensional scene in response to the change of the multi-source data. For example, the color or the representation state of the virtual-real interaction middleware 170 in the three-dimensional scene may be correspondingly changed according to the numerical range or the working state of the multi-source data of the corresponding device, such as distinguishing different numerical ranges with different colors, and representing different working states with the form of on-off states.
Specifically, the virtual-real interaction middleware 170 is configured to implement transmission of an interaction instruction between the virtual three-dimensional rendering system 160 and the entity control system 180.
Specifically, the entity control system 180 receives the interactive command from the virtual-real interactive middleware 170, and performs corresponding control on the field device in response to the interactive command.
Specifically, the virtual-real interaction middleware 170 includes an instruction receiving module 171 and an instruction transmitting module 172, where the instruction receiving module 171 is configured to receive an interaction instruction sent by the virtual three-dimensional rendering system 160; the command transmission module 172 is configured to transmit the interactive command to the entity control system 180 to which the interactive command is directed.
Illustratively, according to the requirement of multi-source data interaction in the three-dimensional scene, the instruction receiving module 171 of the virtual-real interaction middleware 170 may be displayed at a position corresponding to the multi-source data in the three-dimensional scene, and the representation form of the instruction receiving module 171 may be in the form of a physical button three-dimensional model or a three-dimensional model of a corresponding device.
Further, the entity control system 180 includes controllers arranged in the field for controlling the devices, and a controller can control a device in response to an interactive instruction. The instruction transmission module 172 and the controllers may be connected in a wired and/or wireless manner: for wired connections, modes such as RS232, RS485, RJ45 or a bus may be used; for wireless connections, wireless communication such as WiFi, ZigBee, Bluetooth, a wireless bridge, a 4G module or a 5G module may be used; and when the number of controllers is large, data can be aggregated and distributed through a switch.
When a user selects the instruction receiving module 171 in the three-dimensional scene for an interactive operation, the virtual three-dimensional rendering system 160 generates a corresponding interaction instruction according to the multi-source data of the corresponding device and a preset interaction response mode, and sends the interaction instruction to the instruction receiving module 171. The interaction instruction includes a control instruction together with the position information or device identification number of the target device. The instruction transmission module 172 transmits the interaction instruction to the corresponding controller according to the position information or device identification number, and the controller controls the field device in response to the control instruction.
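The receive-then-forward flow of the two modules can be sketched as follows; the instruction fields and the controller client interface are assumptions for illustration:

```python
# Sketch: the instruction receiving module accepts an instruction from the
# rendering side; the instruction transmission module forwards it to the
# controller addressed by the device identification number.
class InstructionReceiver:
    """Receives interaction instructions from the rendering system."""

    def __init__(self, transmitter):
        self.transmitter = transmitter

    def on_instruction(self, instruction):
        # instruction: {"device_id": ..., "command": ...} (assumed fields)
        self.transmitter.forward(instruction)

class InstructionTransmitter:
    """Forwards each instruction to the controller it is directed at."""

    def __init__(self, controllers):
        self.controllers = controllers  # device_id -> controller client

    def forward(self, instruction):
        controller = self.controllers[instruction["device_id"]]
        controller.execute(instruction["command"])  # field device acts
```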
As described above, the multi-channel images collected by the multi-channel image acquisition system 190 are returned in real time through the multi-channel image real-time backhaul control system 130 to the data synchronization system 120 for time synchronization, and the video real-time calculation system 140 decodes the synchronized video streams in real time to obtain video frames. Meanwhile, the multi-source data acquisition system 151 receives the multi-source data, which is processed by the multi-source data analysis system 152 and converted into a three-dimensional representation format. The video frames, the multi-source data and the three-dimensional scene are then mapped, fused and visually displayed by the virtual three-dimensional rendering system 160; an interaction instruction generated by interaction is sent to the instruction receiving module 171 in the virtual-real interaction middleware and forwarded by the instruction transmission module 172 to the entity control system 180, where a controller controls the device according to the position information and control instruction carried by the interaction instruction, and the device performs the corresponding action, thereby enabling control of the field equipment.
Fig. 4 is a schematic flow chart of a digital twinning method provided by an embodiment of the present application; the method may be performed by a digital twinning system, which may be implemented in hardware and/or software and integrated in a computer. Referring to fig. 4, the digital twinning method includes:
s201: the scene superposition and analysis system stores the three-dimensional scene of the scene and takes the three-dimensional scene as a base map.
Specifically, the three-dimensional scene may be obtained from an external server or built by local three-dimensional modeling; once obtained, it is stored locally and used as a base map, meaning that other data of interest is fused onto the three-dimensional scene and the scene serves as the starting point of basic analysis.
Furthermore, the scene superposition and analysis system divides the three-dimensional data of the three-dimensional scene into blocks. When the on-site three-dimensional scene is updated, the scene superposition and analysis system receives a three-dimensional update data packet for the corresponding blocks; the packet should identify the blocks whose three-dimensional data it updates, and the scene superposition and analysis system replaces the three-dimensional data of those blocks with the three-dimensional data in the packet, so that the timeliness of the three-dimensional scene is guaranteed.
S202: the video real-time calculation system decodes the received video stream in real time to obtain video frames.
Specifically, the video real-time calculation system includes a video frame extraction module and a hardware decoder, wherein:
and the video frame extraction module is used for extracting frame data from the video stream by using the FFMPEG library. The FFMPEG library is a set of open source computer programs that can be used to record, convert digital audio, video, and convert them into streams, and can fulfill the requirement of extracting frame data in this embodiment.
The hardware decoder is used for decoding the frame data to obtain video frames. In this embodiment, the hardware decoder is the independent video decoding module built into an NVIDIA graphics card.
S203: the multi-source data acquisition, processing and analysis system receives and stores multi-source data, performs basic analysis and processing, and converts the multi-source data into a three-dimensional representation format.
Specifically, the multi-source data acquisition, processing and analysis system comprises a multi-source data acquisition system and a multi-source data analysis system, wherein:
and the multi-source data acquisition system is used for receiving and storing multi-source data returned by the multi-source sensor.
The multi-source sensors are selected according to the targets, equipment and actual conditions of interest on the site and arranged at the corresponding positions. A multi-source data access switch is installed on site to access and aggregate the monitoring data of the multi-source sensors and transmit it, in a wired or wireless manner, to the multi-source data access switch arranged on the multi-source data acquisition system side, its communication modes being similar to those of the data backhaul system; that switch sends the received multi-source data to the multi-source data acquisition system, which in turn sends it to the multi-source data analysis system.
The multi-source data analysis system is used for carrying out basic analysis processing on multi-source data and converting the multi-source data into a three-dimensional representation format.
Illustratively, after the multi-source data acquisition system outputs multi-source data reflecting the condition of the monitored equipment, the multi-source data analysis system receives the multi-source data as required and performs basic analysis processing on it, such as AD conversion, threshold analysis, trend analysis, early-warning analysis, value-range analysis and working-state analysis. The three-dimensional representation format is understood as a format corresponding to a target representation form in the virtual three-dimensional rendering system, where the target representation form may be one or a combination of real-time values, real-time states, data tables, colors and the like of the monitored data.
S204: the virtual three-dimensional rendering system takes a three-dimensional scene as a base map, maps and fuses the video frames in the three-dimensional scene, performs position matching and fusion on the multi-source data in the three-dimensional scene, maps virtual-real interaction middleware in the three-dimensional scene, and renders and interacts the fused three-dimensional scene.
S205: and the virtual three-dimensional rendering system responds to the interactive operation on the virtual-real interactive middleware to generate an interactive instruction and send the interactive instruction to the virtual-real interactive middleware.
Specifically, when mapping and fusing a video frame in a three-dimensional scene, determining a mapping relationship between pixels in the video frame and three-dimensional points in the three-dimensional scene, performing texture mapping on the video frame in the three-dimensional scene according to the mapping relationship, and performing smooth transition processing on a superposition region of the texture mapping, so as to fuse the video frame in the three-dimensional scene.
Specifically, when the position matching and fusion of the multi-source data are carried out in the three-dimensional scene, the position of the multi-source data in the three-dimensional scene is determined according to the position corresponding relation between the multi-source sensor and the virtual-real interaction middleware, namely according to position information or equipment identification number carried in the multi-source data, and the multi-source data are mapped in the three-dimensional scene according to the target expression form, so that the rendering position of the multi-source data in the three-dimensional scene corresponds to the position of the virtual-real interaction middleware, and the position matching and fusion of the multi-source data in the three-dimensional scene are completed.
Further, the virtual three-dimensional rendering system also responds to the change of the multi-source data to change the rendering state of the virtual-real interaction middleware in the three-dimensional scene. For example, the color or the expression state of the virtual-real interaction middleware in the three-dimensional scene may be correspondingly changed according to the numerical range or the working state of the multi-source data of the corresponding device, for example, different color is used for distinguishing different numerical ranges, and different working states are represented in the form of on-off states.
S206: the virtual-real interaction middleware forwards the interaction instruction sent by the virtual three-dimensional rendering system to external systems.
Specifically, the virtual-real interaction middleware is used for realizing transmission of an interaction instruction between the virtual three-dimensional rendering system and the entity control system, and the entity control system receives the interaction instruction from the virtual-real interaction middleware and correspondingly controls the field device in response to the interaction instruction.
Further, the virtual-real interaction middleware comprises an instruction receiving module and an instruction transmission module, wherein the instruction receiving module is used for receiving an interaction instruction sent by the virtual three-dimensional rendering system; and the instruction transmission module is used for transmitting the interactive instruction to the entity control system pointed by the interactive instruction.
The video real-time calculation system decodes the received video stream in real time to obtain video frames; meanwhile, the multi-source data acquisition, processing and analysis system processes the received multi-source data and converts it into a three-dimensional representation format. The video frames, the multi-source data and the three-dimensional scene are then mapped, fused and visually displayed through the virtual three-dimensional rendering system. The rendered three-dimensional scene supports interactive operation: an interaction instruction generated by interaction is sent through the virtual-real interaction middleware to the control module of the field equipment, and the field equipment responds to the interaction instruction by performing the corresponding action, so that the field equipment is controlled.
On the basis of the foregoing embodiments, fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application. Referring to fig. 5, the present embodiment provides a computer device including: a display screen 24, an input device 25, a memory 22, a communication module 23, and one or more processors 21. The communication module 23 is configured to communicate with the outside; the display screen 24 is used for displaying the virtual-real interactive interface; the input device 25 is used for receiving interactive operations; and the memory 22 is used for storing one or more programs. When the one or more programs are executed by the one or more processors 21, the one or more processors 21 implement the digital twinning method and system functions provided by the embodiments of the present application.
The memory 22 is provided as a computer readable storage medium that may be used to store software programs, computer executable programs, and modules, such as the digital twinning method and system functions described in any of the embodiments of the present application. The memory 22 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the device, and the like. Further, the memory 22 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 22 may further include memory located remotely from the processor, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Further, the computer device further includes a communication module 23, and the communication module 23 is configured to establish a wired and/or wireless connection with other devices and perform data transmission.
The processor 21 executes various functional applications of the device and data processing, i.e. implements the above-described digital twin method and system functions, by running software programs, instructions and modules stored in the memory 22.
The digital twinning system and the computer device provided above can be used to execute the digital twinning method provided by the foregoing embodiments, and have the corresponding functions and beneficial effects.
Embodiments of the present application also provide a storage medium containing computer executable instructions, which when executed by a computer processor, are used to perform the digital twinning method provided by the embodiments of the present application, and implement the functions of the digital twinning system provided by the embodiments of the present application.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: mounting media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network (such as the internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems that are connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors 21.
Of course, the computer-executable instructions contained in the storage medium provided by the embodiments of the present application are not limited to the digital twinning method described above, and may also perform related operations in the digital twinning method provided by any embodiment of the present application, so as to implement the functions of the digital twinning system provided by any embodiment of the present application.
The digital twinning system and the computer device provided in the above embodiments can perform the digital twinning method provided in any embodiment of the present application; for technical details not described in detail above, reference may be made to the digital twinning system and method provided in any embodiment of the present application.
The foregoing is merely a description of the preferred embodiments of the present application and of the technical principles employed. The present application is not limited to the particular embodiments described herein; those skilled in the art may make various obvious changes, rearrangements, and substitutions without departing from the scope of protection of the present application. Therefore, although the present application has been described in some detail through the above embodiments, it is not limited to them and may encompass other equivalent embodiments without departing from its concept; the scope of the present application is determined by the scope of the appended claims.

Claims (7)

1. A digital twin system, characterized by comprising a scene superposition and analysis system, a video real-time resolving system, a multi-source data acquisition, processing and analysis system, a virtual three-dimensional rendering system, and a virtual-real interaction middleware, wherein:
the scene superposition and analysis system stores a three-dimensional scene of the site and uses the three-dimensional scene as a base map;
the video real-time resolving system is used for resolving the received video stream in real time to obtain video frames;
the multi-source data acquisition, processing and analysis system receives and stores multi-source data, performs basic analysis and processing on it, and converts the multi-source data into a three-dimensional representation format;
the virtual three-dimensional rendering system is used for, taking the three-dimensional scene as a base map, mapping and fusing the video frames into the three-dimensional scene, position-matching and fusing the multi-source data into the three-dimensional scene, mapping the virtual-real interaction middleware into the three-dimensional scene, and rendering and interacting with the fused three-dimensional scene; it generates an interaction instruction in response to an interactive operation on the virtual-real interaction middleware and sends the interaction instruction to the virtual-real interaction middleware; according to the position correspondence between the multi-source sensors that return the multi-source data and the virtual-real interaction middleware, it makes the rendering position of the multi-source data in the three-dimensional scene correspond to the position of the virtual-real interaction middleware; and, in response to a change in the multi-source data, the virtual three-dimensional rendering system changes the rendering state of the virtual-real interaction middleware in the three-dimensional scene;
the virtual-real interaction middleware is used for sending outward the interaction instruction sent by the virtual three-dimensional rendering system; the virtual-real interaction middleware comprises an instruction receiving module and an instruction transmission module, wherein:
the instruction receiving module is used for receiving the interaction instruction sent by the virtual three-dimensional rendering system;
and the instruction transmission module is used for transmitting the interaction instruction to the entity control system pointed to by the interaction instruction.
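For readers approaching claim 1 from an implementation angle, the following is a minimal, non-authoritative Python sketch of the virtual-real interaction middleware: an instruction receiving module accepts instructions from the rendering system, and an instruction transmission module forwards each instruction to the entity control system it points to. The class, the instruction shape, and the registered handlers are all hypothetical:

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class InteractionInstruction:
    # hypothetical instruction shape: the entity control system it
    # points to, plus the command payload
    target_system: str
    command: str

class VirtualRealMiddleware:
    def __init__(self) -> None:
        # entity control systems registered by name; the callables
        # stand in for real actuator/device endpoints
        self.entity_systems: Dict[str, Callable[[str], None]] = {}

    def register(self, name: str, handler: Callable[[str], None]) -> None:
        self.entity_systems[name] = handler

    def receive(self, instruction: InteractionInstruction) -> None:
        # instruction receiving module: accepts an instruction sent by
        # the virtual three-dimensional rendering system
        self.transmit(instruction)

    def transmit(self, instruction: InteractionInstruction) -> None:
        # instruction transmission module: forwards the instruction to
        # the entity control system pointed to by the instruction
        handler = self.entity_systems.get(instruction.target_system)
        if handler is None:
            raise KeyError(f"unknown entity control system: {instruction.target_system}")
        handler(instruction.command)

# usage: the rendering system reacts to a click on a virtual switch
middleware = VirtualRealMiddleware()
middleware.register("lighting", lambda cmd: print(f"lighting control executes: {cmd}"))
middleware.receive(InteractionInstruction(target_system="lighting", command="turn_on"))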
2. The digital twin system of claim 1, wherein the video stream is generated by a multi-channel image capturing system that captures images of a plurality of locations on site, and the video stream generated by the multi-channel image capturing system is transmitted back by a multi-channel image real-time return control system.
3. The digital twin system of claim 2, further comprising a data synchronization system for performing data synchronization, specifically time synchronization, on the video streams returned by the multi-channel image real-time return control system, so that returned video streams of the same batch fall within the same time slice.
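As a hedged illustration of the time synchronization recited in claim 3, the sketch below bins frames from several returned streams into discrete time slices so that frames of the same batch share a slice. The 40 ms slice length and the frame tuples are assumptions for the example, not part of the claim:

from collections import defaultdict
from typing import Dict, List, Tuple

def group_into_time_slices(
    frames: List[Tuple[str, float]],   # (channel id, timestamp in seconds)
    slice_length: float = 0.04,        # assumed 40 ms slice, i.e. 25 fps
) -> Dict[int, List[Tuple[str, float]]]:
    # bin each frame by integer slice index so that frames of the
    # same batch end up in the same time slice
    slices: Dict[int, List[Tuple[str, float]]] = defaultdict(list)
    for channel_id, timestamp in frames:
        slices[int(timestamp // slice_length)].append((channel_id, timestamp))
    return dict(slices)

batch = [("cam-1", 12.031), ("cam-2", 12.038), ("cam-1", 12.072)]
for index, members in sorted(group_into_time_slices(batch).items()):
    print(index, members)   # cam-1 and cam-2 share slice 300; 12.072 falls in 301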
4. The digital twin system of claim 1, wherein the video real-time resolving system includes a video frame extraction module and a hardware decoder, wherein:
the video frame extraction module extracts frame data from the video stream using the FFmpeg library;
and the hardware decoder is used for decoding the frame data to obtain the video frames.
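Claim 4 names the FFmpeg library for frame extraction followed by a decoder. A minimal sketch using PyAV (Python bindings over the FFmpeg libraries) is given below; it substitutes a software decoder where the claim recites a hardware decoder, and the stream URL is a placeholder:

import av  # PyAV: Python bindings over the FFmpeg libraries

def extract_frames(stream_url: str):
    # frame extraction module: FFmpeg demuxes the stream into packets
    container = av.open(stream_url)          # e.g. an RTSP camera stream
    video_stream = container.streams.video[0]
    for packet in container.demux(video_stream):
        # decoding step: resolves the packet (frame data) into video frames;
        # a production system would hand this step to a hardware decoder
        for frame in packet.decode():
            yield frame.to_ndarray(format="rgb24")

# usage (the URL is a placeholder, not part of the claim):
# for rgb_frame in extract_frames("rtsp://camera.example/stream"):
#     ...  # hand the frame to the virtual three-dimensional rendering system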
5. The digital twin system of claim 1, wherein the multi-source data acquisition, processing and analysis system includes a multi-source data acquisition system and a multi-source data analysis system, wherein:
the multi-source data acquisition system is used for receiving and storing the multi-source data returned by the multi-source sensors;
and the multi-source data analysis system is used for performing basic analysis and processing on the multi-source data and converting the multi-source data into a three-dimensional representation format.
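To make the conversion recited in claim 5 concrete, the following hypothetical sketch turns a raw sensor reading into a three-dimensional representation: a labelled marker placed at the sensor's known position in the scene. The data shapes and the position table are assumptions for illustration only:

from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class SensorReading:
    sensor_id: str
    value: float   # e.g. a temperature in degrees Celsius
    unit: str

@dataclass
class SceneMarker:
    # a simple three-dimensional representation: a labelled marker
    # placed at the sensor's position in the scene
    position: Tuple[float, float, float]
    label: str

# assumed lookup from sensor to its position in the three-dimensional
# scene, mirroring the sensor/middleware position correspondence of claim 1
SENSOR_POSITIONS: Dict[str, Tuple[float, float, float]] = {
    "temp-01": (12.5, 3.0, 1.8),
}

def to_three_dimensional(reading: SensorReading) -> SceneMarker:
    # "basic analysis and processing" kept deliberately trivial here:
    # format the value and attach it to the sensor's scene position
    position = SENSOR_POSITIONS[reading.sensor_id]
    return SceneMarker(position=position, label=f"{reading.value:.1f} {reading.unit}")

print(to_three_dimensional(SensorReading("temp-01", 23.4, "C")))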
6. A digital twinning method, comprising:
the scene superposition and analysis system stores a three-dimensional scene of the site and uses the three-dimensional scene as a base map;
the video real-time resolving system resolves the received video stream in real time to obtain video frames;
the multi-source data acquisition, processing and analysis system receives and stores multi-source data, performs basic analysis and processing on it, and converts the multi-source data into a three-dimensional representation format;
the virtual three-dimensional rendering system, taking the three-dimensional scene as a base map, maps and fuses the video frames into the three-dimensional scene, position-matches and fuses the multi-source data into the three-dimensional scene, maps the virtual-real interaction middleware into the three-dimensional scene, and renders and interacts with the fused three-dimensional scene;
the virtual three-dimensional rendering system generates an interaction instruction in response to an interactive operation on the virtual-real interaction middleware and sends the interaction instruction to the virtual-real interaction middleware; according to the position correspondence between the multi-source sensors that return the multi-source data and the virtual-real interaction middleware, it makes the rendering position of the multi-source data in the three-dimensional scene correspond to the position of the virtual-real interaction middleware; the virtual three-dimensional rendering system also changes the rendering state of the virtual-real interaction middleware in the three-dimensional scene in response to a change in the multi-source data;
the virtual-real interaction middleware sends outward the interaction instruction sent by the virtual three-dimensional rendering system; the virtual-real interaction middleware comprises an instruction receiving module and an instruction transmission module, wherein:
the instruction receiving module is used for receiving the interaction instruction sent by the virtual three-dimensional rendering system;
and the instruction transmission module is used for transmitting the interaction instruction to the entity control system pointed to by the interaction instruction.
7. A computer device, comprising: a display screen, an input device, a memory, and one or more processors;
the display screen is used for displaying a virtual-real interactive interface;
the input device is used for receiving interactive operation;
the memory is used for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the digital twinning method of claim 6.
CN201911145077.7A 2019-08-21 2019-11-21 Digital twinning system and method and computer equipment Active CN110753218B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019107751288 2019-08-21
CN201910775128.8A 2019-08-21 2019-08-21 Digital twin system and method, and computer device

Publications (2)

Publication Number Publication Date
CN110753218A CN110753218A (en) 2020-02-04
CN110753218B true CN110753218B (en) 2021-12-10

Family

ID=68588763

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910775128.8A Pending CN110505464A (en) 2019-08-21 2019-08-21 Digital twin system and method, and computer device
CN201911145077.7A Active CN110753218B (en) 2019-08-21 2019-11-21 Digital twinning system and method and computer equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910775128.8A Pending CN110505464A (en) 2019-08-21 2019-08-21 Digital twin system and method, and computer device

Country Status (2)

Country Link
CN (2) CN110505464A (en)
WO (1) WO2021031454A1 (en)

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110505464A (en) * 2019-08-21 2019-11-26 佳都新太科技股份有限公司 Digital twin system and method, and computer device
CN111401154B (en) * 2020-02-29 2023-07-18 同济大学 AR-based logistics accurate auxiliary operation device for transparent distribution
CN111708919B (en) * 2020-05-28 2021-07-30 北京赛博云睿智能科技有限公司 Big data processing method and system
CN111738571A (en) * 2020-06-06 2020-10-02 北京王川景观设计有限公司 Information system for development and application of geothermal energy complex based on digital twin
CN111857520A (en) * 2020-06-16 2020-10-30 广东希睿数字科技有限公司 3D visual interactive display method and system based on digital twins
CN111754754A (en) * 2020-06-19 2020-10-09 上海奇梦网络科技有限公司 Real-time equipment monitoring method based on digital twinning technology
CN116569545A (en) * 2020-08-14 2023-08-08 西门子股份公司 Remote assistance method and device
US20220067229A1 (en) * 2020-09-03 2022-03-03 International Business Machines Corporation Digital twin multi-dimensional model record using photogrammetry
CN112037543A (en) * 2020-09-14 2020-12-04 中德(珠海)人工智能研究院有限公司 Urban traffic light control method, device, equipment and medium based on three-dimensional modeling
CN112346572A (en) * 2020-11-11 2021-02-09 南京梦宇三维技术有限公司 Method, system and electronic device for realizing virtual-real fusion
CN112509148A (en) * 2020-12-04 2021-03-16 全球能源互联网研究院有限公司 Interaction method and device based on multi-feature recognition and computer equipment
CN112950758B (en) * 2021-01-26 2023-07-21 长威信息科技发展股份有限公司 Space-time twin visualization construction method and system
CN112991552B (en) * 2021-03-10 2024-03-22 中国商用飞机有限责任公司北京民用飞机技术研究中心 Human body virtual-real matching method, device, equipment and storage medium
CN112925496A (en) * 2021-03-30 2021-06-08 四川虹微技术有限公司 Three-dimensional visual design method and system based on digital twinning
CN113538863B (en) * 2021-04-13 2022-12-16 交通运输部科学研究院 Tunnel digital twin scene construction method and computer equipment
CN112991742B (en) * 2021-04-21 2021-08-20 四川见山科技有限责任公司 Visual simulation method and system for real-time traffic data
CN113406968B (en) * 2021-06-17 2023-08-08 广东工业大学 Unmanned aerial vehicle autonomous take-off and landing cruising method based on digital twin
CN113554063B (en) * 2021-06-25 2024-04-23 西安电子科技大学 Industrial digital twin virtual-real data fusion method, system, equipment and terminal
CN113919106B (en) * 2021-09-29 2024-05-14 大连理工大学 Underground pipeline structure safety evaluation method based on augmented reality and digital twinning
CN113963100B (en) * 2021-10-25 2022-04-29 广东工业大学 Three-dimensional model rendering method and system for digital twin simulation scene
CN114359475B (en) * 2021-12-03 2024-04-16 广东电网有限责任公司电力科学研究院 Digital twin three-dimensional model display method, device and equipment for GIS equipment
CN114217555A (en) * 2021-12-09 2022-03-22 浙江大学 Low-delay remote control method and system based on digital twin scene
CN114212609B (en) * 2021-12-15 2023-11-14 北自所(北京)科技发展股份有限公司 Digital twin spinning complete equipment package operation method
CN114584571B (en) * 2021-12-24 2024-02-27 北京中电飞华通信有限公司 Space calculation technology-based digital twin synchronous communication method for power grid station
CN113987850B (en) * 2021-12-28 2022-03-22 湖南视觉伟业智能科技有限公司 Digital twin model updating and maintaining method and system based on multi-source multi-modal data
CN114095549A (en) * 2022-01-11 2022-02-25 长沙理工大学 Web3D virtual single-chip microcomputer digital twin system based on ZigBee
CN114332741B (en) * 2022-03-08 2022-05-10 盈嘉互联(北京)科技有限公司 Video detection method and system for building digital twins
CN114827144B (en) * 2022-04-12 2024-03-01 中煤科工开采研究院有限公司 Three-dimensional virtual simulation decision-making distributed system for fully-mechanized coal mining face
CN114710495B (en) * 2022-04-29 2023-08-01 深圳市瑞云科技有限公司 Cloud rendering-based houdini distributed online resolving method
CN114966695B (en) * 2022-05-11 2023-11-14 南京慧尔视软件科技有限公司 Digital twin image processing method, device, equipment and medium for radar
CN114972599A (en) * 2022-05-31 2022-08-30 京东方科技集团股份有限公司 Method for virtualizing scene
CN115409944B (en) * 2022-09-01 2023-06-02 浙江巨点光线智慧科技有限公司 Three-dimensional scene rendering and data correction system based on low-code digital twin
CN115550687A (en) * 2022-09-23 2022-12-30 中国电信股份有限公司 Three-dimensional model scene interaction method, system, equipment, device and storage medium
CN116307740B (en) * 2023-05-16 2023-08-01 苏州和歌信息科技有限公司 Fire point analysis method, system, equipment and medium based on digital twin city
CN117152324A (en) * 2023-09-04 2023-12-01 艾迪普科技股份有限公司 Data driving method and device based on three-dimensional player
CN117095135B (en) * 2023-10-19 2024-01-02 云南三耳科技有限公司 Industrial three-dimensional scene modeling arrangement method and device capable of being edited online
CN117421940B (en) * 2023-12-19 2024-03-19 山东交通学院 Global mapping method and device between digital twin lightweight model and physical entity
CN117593498B (en) * 2024-01-19 2024-04-26 北京云庐科技有限公司 Digital twin scene configuration method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789348A (en) * 2011-05-18 2012-11-21 北京东方艾迪普科技发展有限公司 Interactive three dimensional graphic video visualization system
US9877016B2 (en) * 2015-05-27 2018-01-23 Google Llc Omnistereo capture and render of panoramic virtual reality content
US20190088015A1 (en) * 2016-03-31 2019-03-21 Umbra Software Oy Virtual reality streaming
CN106131536A (en) * 2016-08-15 2016-11-16 万象三维视觉科技(北京)有限公司 Naked-eye 3D augmented reality interactive exhibition system and exhibition method thereof
CN106131530B (en) * 2016-08-26 2017-10-31 万象三维视觉科技(北京)有限公司 Naked-eye 3D virtual reality display system and display method thereof
CN109359507B (en) * 2018-08-24 2021-10-08 南京理工大学 Method for quickly constructing workshop personnel digital twin model
CN110505464A (en) * 2019-08-21 2019-11-26 佳都新太科技股份有限公司 Digital twin system and method, and computer device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908232A (en) * 2010-07-30 2010-12-08 重庆埃默科技有限责任公司 Interactive scene simulation system and scene virtual simulation method
CN108040081A (en) * 2017-11-02 2018-05-15 同济大学 Digital twin monitoring and operation system for a subway station
CN109819233A (en) * 2019-01-21 2019-05-28 哈工大机器人(合肥)国际创新研究院 Digital twin system based on virtual image technology

Also Published As

Publication number Publication date
WO2021031454A1 (en) 2021-02-25
CN110753218A (en) 2020-02-04
CN110505464A (en) 2019-11-26

Similar Documents

Publication Publication Date Title
CN110753218B (en) Digital twinning system and method and computer equipment
TWI691197B (en) Preprocessor for full parallax light field compression
CN110675506B (en) System, method and equipment for realizing three-dimensional augmented reality of multi-channel video fusion
CN103389699B (en) Robot supervisory control and autonomous system based on distributed intelligent monitoring and control nodes, and operation method thereof
CN112053446A (en) Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
CN105204347A (en) Method, device and system for smart home interaction based on augmented reality technologies
CN115187742A (en) Method, system and related device for generating automatic driving simulation test scene
CN111696216A (en) Three-dimensional augmented reality panorama fusion method and system
CN111885643A (en) Multi-source heterogeneous data fusion method applied to smart city
KR20160007473A (en) Method, system and recording medium for providing augmented reality service and file distribution system
CN112379815A (en) Image capturing method and device, storage medium and electronic equipment
CN111710032B (en) Method, device, equipment and medium for constructing three-dimensional model of transformer substation
CN109788359B (en) Video data processing method and related device
CN111935124A (en) Multi-source heterogeneous data compression method applied to smart city
CN113115015A (en) Multi-source information fusion visualization method and system
CN115131484A (en) Image rendering method, computer-readable storage medium, and image rendering apparatus
CN113378605A (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN112182286B (en) Intelligent video management and control method based on three-dimensional live-action map
CN111045586B (en) Interface switching method based on three-dimensional scene, vehicle-mounted equipment and vehicle
KR102049773B1 (en) System and method for creating personalized tourism and cultural 3D mixed-reality content using digital signage
CN110944140A (en) Remote display method, remote display system, electronic device and storage medium
CN105208372A (en) 3D landscape generation system and method with interactive measurement and realistic rendering
CN115562217A (en) Digital twin monitoring system, method and device
CN115524990A (en) Intelligent household control method, device, system and medium based on digital twins
CN101344834B (en) Method for tracking map hotspot resources on a tiled display wall

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Room 306, zone 2, building 1, Fanshan entrepreneurship center, Panyu energy saving technology park, No. 832 Yingbin Road, Donghuan street, Panyu District, Guangzhou City, Guangdong Province
Applicant after: Jiadu Technology Group Co.,Ltd.
Address before: Room 306, zone 2, building 1, Fanshan entrepreneurship center, Panyu energy saving technology park, No. 832 Yingbin Road, Donghuan street, Panyu District, Guangzhou City, Guangdong Province
Applicant before: PCI-SUNTEKTECH Co.,Ltd.
GR01 Patent grant