CN113179420B - City-level wide-area high-precision CIM scene server dynamic stream rendering technical method - Google Patents

City-level wide-area high-precision CIM scene server dynamic stream rendering technical method

Info

Publication number
CN113179420B
Authority
CN
China
Prior art keywords
cim
scene
precision
signaling
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110455274.XA
Other languages
Chinese (zh)
Other versions
CN113179420A (en)
Inventor
熊灿 (Xiong Can)
邓万书 (Deng Wanshu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Umbra (Shanghai) Network Technology Co., Ltd.
Original Assignee
Umbra (Shanghai) Network Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Umbra (Shanghai) Network Technology Co., Ltd.
Priority to CN202110455274.XA
Publication of CN113179420A
Application granted
Publication of CN113179420B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23106 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/23418 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/2343 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/234309 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4331 Caching operations, e.g. of an advertisement for later insertion during playback
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440218 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/643 Communication protocols
    • H04N 21/6437 Real-time Transport Protocol [RTP]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 30/00 Adapting or protecting infrastructure or their operation
    • Y02A 30/60 Planning or developing urban green infrastructure

Abstract

The invention discloses a dynamic stream rendering technical method for a city-level wide-area high-precision CIM scene server, comprising the following steps. Step one, scene visualization information data acquisition: S1, analyzing recognition objects; S2, slice audio and video coding. Step two, high-definition rendering dynamic stream output: S3, stream pushing; S4, stream pulling. Step three, signaling service transfer: S5, user signaling; S6, transfer signaling. The method is reasonably designed: for wide-area high-precision CIM scene rendering, visual elements and characteristic information are processed, spatial vector topology information is decomposed and extracted, and a spatial vector database is established to support topological spatial analysis, so that users are provided with distribution of city-level wide-area high-precision CIM scene rendering and with the technical capability of operating and browsing CIM scenes in real time with high precision, on multiple devices and across platforms.

Description

City-level wide-area high-precision CIM scene server dynamic stream rendering technical method
Technical Field
The invention relates to the technical field of CIM application platforms, and in particular to a dynamic stream rendering technical method for a city-level wide-area high-precision CIM scene server.
Background
A high-precision CIM scene is an organic complex of a three-dimensional city space model and city information built on massive city information data. Based on the fusion of BIM and GIS technologies, CIM refines the data granularity down to individual components of a city building, and cool special effects are added to create a real-time dynamic, virtual-real interactive, intelligent city CIM scene. The communication signaling server controls the program-controlled exchanges, network databases and other "intelligent" nodes in the network to exchange the following related information: call setup, supervision, teardown, information required by distributed application processes (queries/responses between processes or user-to-user data), and network management information. Signaling transmission establishes a message channel between the CIM scene and the visual stream server and guarantees the control signals required for normal communication; the visual stream service establishes a stable and reliable data transmission service with the signaling service through stream sockets, which ensures that data are received in order, without errors and without duplicate transmission, and streams and distributes the collected signaling messages (see fig. 1 of the accompanying drawings).
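The following minimal sketch (in Python, not part of the claimed method) illustrates the kind of stream-socket channel described above between the signaling service and the visual stream service; the listening address, port and the 4-byte length-prefix framing are assumptions made for illustration only.

```python
# Minimal sketch: a TCP stream-socket channel between the signaling service and
# the visual stream service. Host/port and the 4-byte length-prefix framing are
# illustrative assumptions, not taken from the patent.
import socket
import struct

def send_signaling(sock: socket.socket, payload: bytes) -> None:
    """Frame a signaling message with a 4-byte big-endian length prefix."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_signaling(sock: socket.socket) -> bytes:
    """Read one length-prefixed message; TCP keeps it ordered and error-checked."""
    header = sock.recv(4, socket.MSG_WAITALL)
    (length,) = struct.unpack(">I", header)
    return sock.recv(length, socket.MSG_WAITALL)

if __name__ == "__main__":
    # Signaling service side: listen for the visual stream service to connect.
    server = socket.create_server(("0.0.0.0", 9000))
    conn, _addr = server.accept()
    message = recv_signaling(conn)           # e.g. a serialized user-input command
    send_signaling(conn, b"ACK:" + message)  # relay an acknowledgement back
```

Because the channel is a stream socket, ordering and retransmission are handled by TCP itself, which matches the "in sequence, without errors, without duplicate sending" guarantee mentioned above.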
In terms of spatial scope and technical logic, the construction of CIM (City Information Modeling) is an organic combination of large-scene GIS data, small-scene BIM data and Internet of Things (IoT) data. Existing BIM technology can realize digital twinning at the component level for every building in a city, digitizing the building's information; GIS technology can store, in a structured and time-sequenced way, the intangible social and economic activity information of the city, such as macro spatial environmental features like terrain and land use at the city scale, as well as crowd characteristics and flows of information and capital. IoT technology can supplement the operating data of buildings in BIM and, more importantly, through widely deployed urban sensors, sense and collect in real time the changes of the micro environment in urban open spaces, such as traffic flow and atmospheric and hydrological conditions. BIM and GIS are complementary in spatial scope and share a common data structure: each building can be regarded as a feature in GIS, and each item of city infrastructure, such as pipelines and street lamps, can also be regarded as a BIM component. On this basis, embedding IoT data greatly refines the spatial and temporal granularity of the data and realizes fine, comprehensive, dynamic, real-time digitization of urban space. On the basis of comprehensively collected data, CIM structures and standardizes the data of different dimensions in each field through a unified data platform. On the one hand, the city data become computable: spatial indicators such as building area and plot ratio within any spatial range, and social and economic indicators such as population density, vehicle density and even water and electricity consumption, can be statistically analyzed, and rules can be mined through machine learning and simulation for prediction. On the other hand, by means of spatial information visualization technology, the city data can be mapped to their spatial positions in real time, making them clear at a glance and convenient for operators and managers to perceive quickly and make decisions.
At present, once CIM technology is applied to a smart city, massive model data that is wide-ranging, dynamic and real-time must be handled, and common application modes face problems and challenges in many respects, such as dynamic loading of large volumes of data, browsing performance, object materialization and spatial topology analysis:
Loading massive model data: the file of a complete, full-element model of a single conventional building generally reaches 2 GB, 10 million triangular faces and 10,000 components, and the model is several times larger for a super high-rise complex building. In addition, a CIM platform is a system oriented to the urban-area level, covering hundreds to tens of thousands of building groups, so the overall model data is massive, and traditional loading and application methods inevitably run into bottlenecks and difficulties;
Loading high-precision model data: in a conventional urban CIM scene, because of the limitations of traditional scene rendering, large-range scenes adopt the LOD100 stage of the engineering BIM model structure. A model at this stage usually only represents the building massing, and analysis covers geometric information such as volume, building orientation, wall surfaces, solid dimensions, shape and position; the scene is therefore insufficiently refined, and professional structural analysis cannot be reflected in the CIM scene;
Browsing performance: because of the massive model data faced by the CIM platform, dynamic loading has bottlenecks, and the browsing performance of the scene is also affected. Unlike a client application, a Web front end must first solve network data transmission and consider the browser's resource limits; for example, Chrome's V8 engine limits rendering memory to roughly 1 GB on a 32-bit system and 4 GB on a 64-bit system, and transmission depends on network conditions. Most current large-scale Web3D applications struggle to achieve smooth access over the Internet. This becomes something of a pseudo-promise: in theory the system can be used anywhere with Internet access, but in practice there are preconditions, namely sufficient bandwidth and good enough hardware support, conditions that are clearly no easier to satisfy than installing a plug-in or downloading an application. The front end therefore has limits and cannot handle the loading of a high-volume, high-precision CIM scene;
Spatial topology analysis: in a city-level wide-area CIM scene, because refined BIM model data cannot be loaded due to the capacity bottleneck, spatial topology analyses such as collision analysis and connectivity analysis of engineering construction pipelines cannot be carried out;
Stuttering of the rendered video stream: during ordinary stream transmission, video is encoded and decoded, and high-definition video often puts decoding pressure on the hardware, making stutter caused by decoding particularly noticeable. If the hardware configuration of the phone or computer is low, or the playback software version is too old, encoding and decoding slow down and video playback may stutter;
Video streams the user cannot interact with: in the traditional streaming mode, multimedia files such as animation, video and audio are divided into compressed packets by a special compression method; after a start-up delay of several seconds to tens of seconds, the compressed streaming multimedia files are decompressed on the user's computer by a corresponding player or other hardware and software for playback and viewing. Traditional streaming does not allow the user any participatory control, whereas a BIM scene is, in part, embodied in human-computer interaction;
therefore, a technical method for rendering dynamic streams of a city-level wide-area high-precision CIM scene server is provided.
Disclosure of Invention
The invention aims to overcome the problems in the prior art and provides a dynamic stream rendering technical method for a city-level wide-area high-precision CIM scene server. For wide-area high-precision CIM scene rendering, visual elements and characteristic information are processed, spatial vector topology information is decomposed and extracted, and a spatial vector database is established to support topological spatial analysis, thereby providing users with distribution of city-level wide-area high-precision CIM scene rendering and the technical capability of operating and browsing CIM scenes in real time with high precision, on multiple devices and across platforms.
In order to achieve the above technical purpose and effect, the invention is realized by the following technical scheme:
a dynamic stream rendering technical method for an urban wide-area high-precision CIM scene server comprises the following steps:
the method comprises the following steps: scene visualization information data acquisition:
s1: analyzing the recognition object;
s2: slicing audio and video coding;
step two: and (3) outputting the high-definition rendering dynamic stream:
s3: pushing flow;
s4: drawing the flow;
step two: transfer of signaling service:
s5: user signaling;
s6: and transferring signaling.
Preferably, in S1, the points, lines and planes within the camera's field of view are collected by the relevant functions under VC (Visual C++) and the resolution is set. The whole image can be thought of as a large chessboard, with resolution expressed as the number of intersections of all the longitude and latitude lines; at a given display resolution, the smaller the display screen, the clearer the image, and conversely, when the display screen size is fixed, the higher the display resolution, the clearer the image. Visual information is identified through the scene lens, the resolution is set, and the image is loaded into a temporary cache in slices.
Preferably, in S2, compression techniques such as motion-compensated inter-frame prediction, DCT transform, adaptive quantization and entropy coding are applied to the analysis information data obtained in step one.
Preferably, in S3, the collected audio and video data are encapsulated with a transmission protocol and turned into streaming data; common streaming protocols include RTSP, RTMP, HLS and the like. The delay of RTMP transmission is usually 1-3 seconds, and for scenes with very high real-time requirements, such as mobile live streaming, RTMP has become the most common streaming protocol. Finally, the audio and video stream data are pushed out to the network through a QoS algorithm and distributed through a CDN.
Preferably, in S4, the video stream is read from the visual stream server using the corresponding protocol and a discrete cosine transform (DCT) is performed; after the image is DCT-transformed, the main components of its frequency coefficients are concentrated in a relatively small range, mainly in the low-frequency part.
Preferably, in S5, the user signaling, i.e. the signaling transmitted between the user and the exchange, collects the signal commands of the user's keyboard, mouse, touch screen and other receiving devices, and transmits the signaling information over the D channel (16 kbit/s) in a digital coding format.
Preferably, in S6, the transfer signaling is the signaling transferred by the signaling transfer point between the switching nodes in the communication network, i.e. network-node interface (NNI) signaling; it is carried for users on the trunks between signaling services, mainly controls the establishment and release of call-related commands and the corresponding signaling communication connections in the communication network, and conveys communication-related information.
Compared with the prior art, the invention has the following beneficial effects:
the invention has reasonable design, the first: the technical goals of lightweight loading and efficient rendering of the city information model data are achieved by carrying out file format analysis on the CIM city information model data and carrying out decomposition and extraction on visual elements and characteristic information, cleaning the model data, hierarchical layering of the model, geometric expression optimization of the elements and reasonable combination of space elements;
secondly, the method comprises the following steps: the method comprises the steps of analyzing file formats of CIM city information model data, decomposing and extracting space vector topology information, vectorizing and mapping three-dimensional space elements, establishing a massive space vector information base, unitizing entity objects, establishing a space database space index, and providing quick retrieval and space topology analysis capabilities of city information model objects;
thirdly, the method comprises the following steps: the method comprises the steps of integrating BIM city information model data, geographic information GIS data and Internet of things IoT data information, integrating city underground and ground, indoor and outdoor and historical present situation future multi-dimensional information model data and city perception data, and constructing a high-precision CIM scene digital bottom board, so that CIM scene projection data information can be conveniently and fluidically pushed to signaling service through an encryption data packet.
Of course, it is not necessary for any product to practice the invention to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a signal transmission flow chart among a CIM scenario, a visual stream server and a communication signaling server according to the present invention;
FIG. 2 is the first flowchart of the dynamic stream rendering technical method of the city-level wide-area high-precision CIM scene server according to the present invention;
FIG. 3 is the second flowchart of the dynamic stream rendering technical method of the city-level wide-area high-precision CIM scene server according to the present invention;
FIG. 4 is a signal transmission flowchart of the scene streaming service according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to figs. 1-4, a dynamic stream rendering technical method for a city-level wide-area high-precision CIM scene server comprises the following steps:
Step one: scene visualization information data acquisition:
s1: analyzing recognition objects
Collecting points, lines and planes in the visual field range of the camera by using a correlation function under VC, and changing the resolution, wherein the whole image can be imagined as a large chessboard, and the expression mode of the resolution is the number of intersections of all longitude lines and latitude lines; under the condition of certain display resolution, the smaller the display screen is, the clearer the image is, otherwise, when the size of the display screen is fixed, the clearer the image is, the higher the display resolution is, the visual information is identified through a scene lens, the resolution is set, and the image is loaded into a temporary cache in a slicing mode;
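As an illustrative sketch of this slicing step (not the patented implementation), the following Python snippet splits a captured frame into fixed-size tiles and keeps them in a temporary in-memory cache; the 256-pixel tile size and the dictionary-based cache are assumptions for illustration only.

```python
# Illustrative sketch of S1: slice a captured frame into fixed-size tiles and keep
# them in a temporary in-memory cache keyed by grid position.
import numpy as np

def slice_into_cache(frame: np.ndarray, tile: int = 256) -> dict[tuple[int, int], np.ndarray]:
    """Split an H x W x 3 frame into tile x tile blocks (edge tiles may be smaller)."""
    cache: dict[tuple[int, int], np.ndarray] = {}
    h, w = frame.shape[:2]
    for row in range(0, h, tile):
        for col in range(0, w, tile):
            cache[(row // tile, col // tile)] = frame[row:row + tile, col:col + tile].copy()
    return cache

# Example: a synthetic 1080p frame sliced into a 5 x 8 grid of 256-pixel tiles.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
tiles = slice_into_cache(frame)
print(len(tiles))  # 40 tiles; edge tiles are cropped to the frame boundary
```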
s2: sliced audio/video coding
Performing compression technologies such as interframe prediction, DCT (discrete cosine transform) transformation, adaptive quantization and entropy coding of motion compensation on the analysis information data obtained in the first step;
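The following is a minimal sketch of the transform and quantization portion of this stage, written in Python under stated assumptions: the uniform quantization step q=16 is an arbitrary example value, and motion compensation and entropy coding are omitted for brevity.

```python
# Illustrative sketch of the S2 transform/quantization stage: an 8x8 block DCT
# followed by uniform quantization, then the inverse path for reconstruction.
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block: np.ndarray, q: int = 16) -> np.ndarray:
    """Forward 8x8 DCT and uniform quantization of one residual block."""
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    return np.round(coeffs / q).astype(np.int32)

def decode_block(quantized: np.ndarray, q: int = 16) -> np.ndarray:
    """Dequantize and inverse-transform back to the pixel domain."""
    return idctn(quantized.astype(np.float64) * q, norm="ortho")

block = np.random.default_rng(0).integers(0, 256, (8, 8))
restored = decode_block(encode_block(block))
print(np.abs(block - restored).max())  # small reconstruction error caused by quantization
```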
Step two: high-definition rendering dynamic stream output:
S3: Stream pushing
The collected audio and video data are encapsulated with a transmission protocol and turned into streaming data; common streaming protocols include RTSP, RTMP, HLS and the like. The delay of RTMP transmission is usually 1-3 seconds; for scenes with high real-time requirements, such as mobile live streaming, RTMP has become the most common streaming protocol. Finally, the audio and video stream data are pushed out to the network through a QoS algorithm and distributed through a CDN;
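As one possible illustration of the push step (not the patent's implementation), the sketch below pipes rendered frames into FFmpeg for H.264 encoding and RTMP delivery; the RTMP URL is a placeholder, and the resolution, frame rate and encoder settings are illustrative assumptions.

```python
# Minimal sketch of S3: push rendered frames to an RTMP ingest point by piping
# raw video into FFmpeg, which muxes the encoded stream as FLV over RTMP.
import subprocess

def start_rtmp_push(width: int, height: int, fps: int, rtmp_url: str) -> subprocess.Popen:
    cmd = [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "bgr24",     # raw frames arrive on stdin
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",
        "-c:v", "libx264", "-preset", "veryfast",  # low-latency H.264 encoding
        "-f", "flv", rtmp_url,                     # RTMP carries FLV-muxed streams
    ]
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)

pusher = start_rtmp_push(1920, 1080, 30, "rtmp://example-cdn/live/cim-scene")
# Each rendered frame (H x W x 3, uint8, BGR) is then written to pusher.stdin.
```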
s4: pulling flow
Reading a video stream from a visual stream server by using a corresponding protocol, and performing Discrete Cosine Transform (DCT), wherein after the image is subjected to DCT, the main components of the frequency coefficient are concentrated in a smaller range and are mainly positioned in a low-frequency part;
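The short sketch below illustrates the energy-concentration property stated above; the synthetic gradient block is an assumption standing in for a smooth region of a decoded frame.

```python
# Sketch: after a 2-D DCT, most of a smooth image block's energy sits in the
# low-frequency (top-left) coefficients.
import numpy as np
from scipy.fft import dctn

block = np.add.outer(np.arange(64), np.arange(64)).astype(np.float64)  # smooth 64x64 gradient
coeffs = dctn(block, norm="ortho")

total_energy = np.sum(coeffs ** 2)
low_freq_energy = np.sum(coeffs[:8, :8] ** 2)   # top-left 8x8 corner only
print(low_freq_energy / total_energy)           # close to 1.0 for smooth content
```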
step three: transfer of signaling service:
s5: user signaling
The signaling transmitted from the user to the exchange office collects the signal instructions of the receiving equipment such as a user keyboard, a mouse, a touch screen and the like, and adopts a D channel (16kbit/s) to transmit the signaling information by using a digital coding format;
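A minimal sketch of how such user-input commands might be packaged into compact signaling messages follows; the field names and the JSON encoding are assumptions made for illustration and are not the patent's message format.

```python
# Illustrative sketch of S5: package user input events (keyboard / mouse / touch)
# into compact signaling messages for the low-bandwidth control path.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class UserSignal:
    device: str          # "keyboard", "mouse" or "touch"
    action: str          # e.g. "keydown", "move", "tap"
    payload: dict        # device-specific data such as key code or coordinates
    timestamp_ms: int

def encode_signal(signal: UserSignal) -> bytes:
    """Serialize one user signaling message for transmission to the exchange side."""
    return json.dumps(asdict(signal), separators=(",", ":")).encode("utf-8")

msg = UserSignal("mouse", "move", {"x": 412, "y": 96}, int(time.time() * 1000))
print(encode_signal(msg))  # a few dozen bytes, well within a 16 kbit/s channel
```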
s6: transfer signaling
Signalling that the signalling transfer station transfers between individual switching nodes in the communication network, i.e. network interface (NNI) signalling, it transfers on the trunk between signalling services for subscribers, and it mainly controls the set-up and release of call-related commands and corresponding signalling communication connections in the communication network, and transfers communication-related information.
In the signaling service of the invention, the signaling point that sends a message is its source signaling point, and the signaling point that receives the message is its destination signaling point;
there are three types of signaling points:
(1) A Service Switching Point (SSP) is the origination or termination point of signaling messages; it is essentially a local switching system (or switching center, CO) that originates calls or receives incoming calls.
(2) A Signal Transfer Point (STP) performs the function of a router: it looks at messages sent by SSPs and then switches each message to the appropriate place over the network. STPs connect other signaling points and networks together into larger networks.
(3) A Service Control Point (SCP) is typically a database access server and the control center of intelligent network services; it is responsible for executing service logic, providing call processing functions, receiving query messages from the SSP and querying the database, sending call processing instructions to the SSP after verification, and receiving and processing the call records generated by the SSP.
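The routing role of the STP described above can be illustrated by the following sketch; the point codes and the routing table are assumptions for illustration, not an SS7 implementation.

```python
# Sketch of the STP role: forward each signaling message to the signaling point
# that serves its destination code.
from typing import Callable

class SignalTransferPoint:
    def __init__(self) -> None:
        # destination point code -> delivery callable (e.g. a socket send)
        self.routes: dict[str, Callable[[bytes], None]] = {}

    def register(self, point_code: str, deliver: Callable[[bytes], None]) -> None:
        self.routes[point_code] = deliver

    def transfer(self, destination: str, message: bytes) -> None:
        """Look at the destination code and switch the message to the right link."""
        if destination not in self.routes:
            raise KeyError(f"no route to signaling point {destination}")
        self.routes[destination](message)

stp = SignalTransferPoint()
stp.register("SCP-01", lambda m: print("to SCP:", m))
stp.transfer("SCP-01", b"query: subscriber profile")
```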
One specific application of this embodiment is as follows: the method is reasonably designed and achieves the technical goals of lightweight loading and efficient rendering of city information model data by parsing the file format of the CIM city information model data, decomposing and extracting visual elements and characteristic information, cleaning the model data, grading and layering the model, optimizing the geometric expression of elements and reasonably combining spatial elements;
By parsing the file format of the CIM city information model data, decomposing and extracting spatial vector topology information, vectorizing and mapping three-dimensional spatial elements, establishing a massive spatial vector information base, unitizing entity objects and building a spatial index for the spatial database, fast retrieval and spatial topology analysis of city information model objects are provided;
By fusing BIM city information model data, geographic information (GIS) data and Internet of Things (IoT) data, and integrating the city's above-ground and underground, indoor and outdoor, and historical, present and future multi-dimensional, multi-scale information model data and city perception data, a high-precision CIM scene digital base is constructed, so that CIM scene projection data can conveniently be pushed as a stream to the signaling service in encrypted data packets;
Thus, for wide-area high-precision CIM scene rendering, visual elements and characteristic information are processed, spatial vector topology information is decomposed and extracted, and a spatial vector database is established to support topological spatial analysis. Through the three parallel technical routes of scene visualization information data acquisition, high-definition rendering dynamic stream output and signaling service transfer, the original wide-area high-precision CIM scene is turned into a technical solution that supports fast and efficient rendering with streaming-media playback and operation, thereby providing users with distribution of city-level wide-area high-precision CIM scene rendering and the technical capability of operating and browsing the CIM scene in real time with high precision, on multiple devices and across platforms.
"HTTP" as referred to in the drawings of the present document is Hypertext transfer protocol (HTTP), a simple request-response protocol that typically operates on TCP;
"TCP" is Transmission control protocol (Transmission control protocol), a connection-oriented, reliable, byte-stream-based transport-layer communication protocol, defined by RFC793 of the IETF;
"Socket" is an abstraction of an endpoint for two-way communication between application processes on different hosts in a network. One socket is one end of process communication on the network, and provides a mechanism for the application layer process to exchange data by using a network protocol;
"SessionDescription": a message session description;
"offer sdp": a protocol for sending a session;
"AnswerSDP": a protocol of a response session;
"ICECandidate": candidate address events thrown by service messages.
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (1)

1. A dynamic stream rendering technical method for a city-level wide-area high-precision CIM scene server, characterized by comprising the following steps:
Step one: scene visualization information data acquisition:
S1: Analyzing recognition objects
The points, lines and planes within the camera's field of view are collected by the relevant functions under VC and the resolution is set; the whole image is imagined as a large chessboard, with resolution expressed as the number of intersections of all the longitude and latitude lines. At a given display resolution, the smaller the display screen, the clearer the image; conversely, when the display screen size is fixed, the higher the display resolution, the clearer the image. Visual information is identified through the scene lens, the resolution is set, and the image is loaded into a temporary cache in slices;
S2: Slice audio and video coding
Motion-compensated inter-frame prediction, DCT transform, adaptive quantization and entropy coding compression techniques are applied to the analysis information data obtained in step one;
Step two: high-definition rendering dynamic stream output:
S3: Stream pushing
The collected audio and video data are encapsulated with a transmission protocol to obtain stream data; common streaming protocols include RTSP, RTMP and HLS. The transmission delay of RTMP is 1-3 seconds; it is used for mobile live streaming in scenes with high real-time requirements and has become the most common streaming protocol in mobile live streaming. The audio and video stream data are pushed out to the network through a QoS algorithm and distributed through a CDN;
S4: Stream pulling
The video stream is read from the visual stream server using the corresponding protocol and a discrete cosine transform is performed, i.e. the image is DCT-transformed;
Step three: signaling service transfer:
S5: User signaling
The signaling transmitted from the user to the exchange collects the signal commands of the user's keyboard, mouse, touch screen and other receiving devices, and transmits the signaling information over the D channel (16 kbit/s) in a digital coding format;
S6: Transfer signaling
The signaling transferred by the signaling transfer point between the switching nodes in the communication network, i.e. network-node interface (NNI) signaling, is carried for users on the trunks between signaling services; it controls the establishment and release of call-related commands and the corresponding signaling communication connections in the communication network, and conveys communication-related information.
CN202110455274.XA 2021-04-26 2021-04-26 City-level wide-area high-precision CIM scene server dynamic stream rendering technical method Active CN113179420B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110455274.XA CN113179420B (en) 2021-04-26 2021-04-26 City-level wide-area high-precision CIM scene server dynamic stream rendering technical method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110455274.XA CN113179420B (en) 2021-04-26 2021-04-26 City-level wide-area high-precision CIM scene server dynamic stream rendering technical method

Publications (2)

Publication Number Publication Date
CN113179420A CN113179420A (en) 2021-07-27
CN113179420B (en) 2022-08-30

Family

ID=76926319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110455274.XA Active CN113179420B (en) 2021-04-26 2021-04-26 City-level wide-area high-precision CIM scene server dynamic stream rendering technical method

Country Status (1)

Country Link
CN (1) CN113179420B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8655756B2 (en) * 2004-06-04 2014-02-18 Sap Ag Consistent set of interfaces derived from a business object model
US8893026B2 (en) * 2008-11-05 2014-11-18 Pierre-Alain Lindemann System and method for creating and broadcasting interactive panoramic walk-through applications
CN109977182B (en) * 2019-03-06 2021-06-01 广州市城市规划勘测设计研究院 City information system based on CIM
CN110399446A (en) * 2019-07-26 2019-11-01 广州市城市规划勘测设计研究院 Method for visualizing, device, equipment and the storage medium of extensive space-time data
US20210105451A1 (en) * 2019-12-23 2021-04-08 Intel Corporation Scene construction using object-based immersive media
CN112287138B (en) * 2020-10-15 2022-11-15 广州市城市规划勘测设计研究院 Organization scheduling method, device and equipment of city information model
CN112560137A (en) * 2020-12-04 2021-03-26 武汉光谷信息技术股份有限公司 Multi-model fusion method and system based on smart city

Also Published As

Publication number Publication date
CN113179420A (en) 2021-07-27

Similar Documents

Publication Publication Date Title
CN108881147B (en) A kind of data processing method and device of view networking
CN105045820A (en) Method for processing video image information of mass data and database system
CN103813213A (en) Real-time video sharing platform and method based on mobile cloud computing
CN107360443A (en) A kind of cloud desktop picture processing method, cloud desktop server and client
CN108964963A (en) A method of warning system and realization alarm based on view networking
CN109302451A (en) A kind of methods of exhibiting and system of picture file
CN113902866B (en) Double-engine driven digital twin system
CN113163162A (en) Service providing method based on video cloud and video cloud system
CN108804486A (en) A kind of image-recognizing method and device
CN110087041A (en) Video data processing and transmission method and system based on the base station 5G
CN108632679B (en) A kind of method that multi-medium data transmits and a kind of view networked terminals
CN106454388A (en) Method and device for determining live broadcast setting information
Huang et al. Toward holographic video communications: A promising AI-driven solution
CN104363511A (en) Method and system for online playing F4v videos on mobile device
CN113179420B (en) City-level wide-area high-precision CIM scene server dynamic stream rendering technical method
CN205647835U (en) Video transcoding system under cloud environment
CN101959061A (en) Video monitoring system and method for traffic road conditions
CN102802042A (en) 3G Modem card multi-track coding transport system and method based on ARMl1 core microprocessor
CN201360312Y (en) Monitoring system based on embedded Web video server
CN109785432A (en) A kind of three-dimensional map mapping system
CN112995134B (en) Three-dimensional video streaming media transmission method and visualization method
CN201846438U (en) Video monitoring system for traffic road conditions
Kumar et al. Efficient compression and network adaptive video coding for distributed video surveillance
Aurangzeb et al. Analysis of binary image coding methods for outdoor applications of wireless vision sensor networks
CN102306402A (en) Three-dimensional graph processing system of mobile visual media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: Room 3330, 1st Floor, Building 8, No. 33 Guangshun Road, Changning District, Shanghai, 200335

Patentee after: Umbra (Shanghai) Network Technology Co.,Ltd.

Address before: 200333 room 200-311, 2nd floor, No.7 Lane 1130, Tongpu Road, Putuo District, Shanghai

Patentee before: Umbra (Shanghai) Network Technology Co.,Ltd.