CN116630543A - Three-dimensional live-action one-stop processing platform based on BS architecture - Google Patents

Three-dimensional live-action one-stop processing platform based on BS architecture

Info

Publication number
CN116630543A
Authority
CN
China
Prior art keywords
action
live
dimensional
model
construction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310601247.8A
Other languages
Chinese (zh)
Other versions
CN116630543B (en)
Inventor
顾长成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Suxin New Energy Technology Co ltd
Original Assignee
Shanghai Suxin New Energy Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Suxin New Energy Technology Co ltd filed Critical Shanghai Suxin New Energy Technology Co ltd
Priority to CN202310601247.8A priority Critical patent/CN116630543B/en
Publication of CN116630543A publication Critical patent/CN116630543A/en
Application granted granted Critical
Publication of CN116630543B publication Critical patent/CN116630543B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/02: Knowledge representation; Symbolic representation

Abstract

The invention discloses a three-dimensional live-action one-stop processing platform based on a BS (browser/server) architecture, and relates to the technical field of data fusion. It addresses a technical problem of the prior art: during three-dimensional live-action modeling, existing methods emphasize evaluating the acquired image data but do not consider the precision loss incurred when stitching that data, so the accuracy of the three-dimensional live-action cannot be comprehensively ensured. The platform derives live-action prediction requirements from a number of live-action construction requirements together with a knowledge graph model and generates the corresponding three-dimensional live-action models in advance; by predicting the three-dimensional live-action requirements a user is likely to raise, it improves the efficiency with which users obtain three-dimensional live-action models as well as the working efficiency among the modules. The platform also combines historical experience data with the target live-action requirement to generate corresponding live-action verification rules, and verifies and evaluates the constructed three-dimensional live-action model against those rules, thereby guaranteeing the quality of the model and, in particular, preventing anomalies in broken targets within it.

Description

Three-dimensional live-action one-stop processing platform based on BS architecture
Technical Field
The invention belongs to the field of data fusion, relates to a three-dimensional live-action one-stop processing technology based on a BS (browser/server) framework, and particularly relates to a three-dimensional live-action one-stop processing platform based on the BS framework.
Background
The three-dimensional live-action is a three-dimensional virtual display technology that photographs an existing scene from multiple angles with a digital camera and then fuses the captured data. Through a display terminal, the user can use functions such as hot-spot links within a scene and virtual roaming among multiple scenes.
The prior art (the invention patent application with publication number CN112258624A) discloses a three-dimensional live-action fusion modeling method in which the acquired image data are efficiently evaluated through a multi-level fuzzy comprehensive evaluation model so as to ensure the accuracy of the three-dimensional live-action model. However, during three-dimensional live-action modeling this prior art emphasizes evaluating the acquired image data without considering the precision loss in the image-data stitching process, so the accuracy of the three-dimensional live-action cannot be comprehensively ensured; therefore, a three-dimensional live-action one-stop processing platform based on a BS architecture is needed.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art; therefore, the invention provides a three-dimensional live-action one-stop processing platform based on a BS architecture, which solves the technical problem that, in the prior-art three-dimensional live-action modeling process, evaluation of the acquired image data is emphasized while the precision loss in the image-data stitching process is not considered, so that the accuracy of the three-dimensional live-action cannot be comprehensively ensured.
According to the present application, live-action prediction requirements are obtained from a number of live-action construction requirements and the knowledge graph model, and the corresponding three-dimensional live-action models are generated in advance; by predicting the three-dimensional live-action requirements a user may raise, the efficiency with which users obtain three-dimensional live-action models is improved. Meanwhile, the constructed three-dimensional live-action model is evaluated in a targeted manner, ensuring its quality and further improving user satisfaction.
In order to achieve the above object, a first aspect of the present invention provides a BS architecture-based three-dimensional live-action one-stop processing platform, which includes a live-action construction module, and a demand assessment module and a data storage module connected thereto;
the demand assessment module: acquiring a live-action construction requirement based on a BS framework, and sending the live-action construction requirement to the live-action construction module; and
carrying out statistical analysis on a plurality of stored live-action construction requirements, determining live-action prediction requirements based on a knowledge graph model, and sending the live-action prediction requirements to the live-action construction module;
and a live-action construction module: acquiring image data from the data storage module based on the live-action construction requirement or the live-action prediction requirement, and constructing a three-dimensional live-action model according to the image data; and
evaluating the three-dimensional live-action model according to a live-action verification rule, and forwarding or storing the three-dimensional live-action model passing the evaluation; the live-action verification rule is constructed based on combination of historical experience data and live-action construction requirements or live-action prediction requirements.
Preferably, the live-action construction module is respectively in communication and/or electric connection with the demand assessment module and the data storage module;
the demand assessment module acquires the live-action construction requirement from a WEB server, and the three-dimensional live-action model that passes the evaluation is forwarded to a live-action display terminal through the WEB server; the live-action display terminal comprises a smart phone and a computer.
Preferably, the data storage module is used for storing the constructed three-dimensional live-action model and the image data of the constructed three-dimensional live-action model;
the data storage module calls stored image data according to the data request signal or acquires the image data in real time; wherein the data request signal is generated by the live-action construction module.
Preferably, the demand assessment module combines a plurality of stored live-action construction requirements with a knowledge graph model to obtain the live-action prediction requirements, including:
acquiring a plurality of live-action construction requirements; the live-action construction requirements are collected and stored from a WEB server through the demand assessment module;
analyzing the live-action construction requirements to obtain high-frequency keywords;
expanding search in the knowledge graph model based on the high-frequency keywords to obtain predicted keywords;
and generating the live-action prediction requirement based on the prediction keywords.
Preferably, the live-action construction module generates the knowledge graph model based on three-dimensional live-action application data, including:
acquiring the three-dimensional live-action application data; the three-dimensional live-action application data are acquired through the Internet;
extracting a plurality of entities in the three-dimensional live-action application data and association relations among the entities, and constructing and acquiring the knowledge graph model by combining a knowledge graph construction method;
and storing the knowledge graph model in the data storage module.
Preferably, the live-action construction module generates the three-dimensional live-action model according to live-action construction requirements or live-action prediction requirements, including:
receiving a target live-action requirement; the target live-action requirement comprises a live-action construction requirement or a live-action prediction requirement;
analyzing the target live-action requirement, and acquiring corresponding image data from the data storage module; and generating a three-dimensional live-action model by using the image data after the quality evaluation passes.
Preferably, the evaluating, by the live-action construction module, the constructed three-dimensional live-action model includes:
combining the historical experience data with the target live-action requirement to generate a corresponding live-action verification rule; the live-action verification rule comprises a verification target and corresponding verification content;
and carrying out retrieval verification on the three-dimensional live-action model through the live-action verification rule, and judging that the three-dimensional live-action model passes the evaluation after the verification is passed.
Preferably, the data storage module performs periodic detection on the three-dimensional live-action model, performs backup storage according to a detection result, and includes:
counting the calling frequency of the three-dimensional live-action model in a set time;
when the calling frequency does not exceed the frequency threshold, the corresponding three-dimensional live-action model is transferred out for backup; wherein the frequency threshold is set according to practical experience.
Compared with the prior art, the invention has the beneficial effects that:
1. According to the present application, live-action prediction requirements are obtained from the live-action construction requirements and the knowledge graph model, and the corresponding three-dimensional live-action models are generated; by predicting, on the basis of the live-action construction requirements, the three-dimensional live-action requirements a user is likely to raise, the efficiency with which users obtain three-dimensional live-action models is improved, as is the working efficiency among the modules.
2. According to the present application, historical experience data and the target live-action requirement are combined to generate corresponding live-action verification rules, and the corresponding three-dimensional live-action model is verified and evaluated against these rules, so that the quality of the three-dimensional live-action model is guaranteed and, in particular, anomalies in broken targets within the model are prevented.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the working steps of the present invention;
fig. 2 is a schematic diagram of the system principle of the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described in connection with the embodiments, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the prior art, the three-dimensional live-action modeling process emphasizes evaluating the acquired image data, on the assumption that if the image data meet the requirements, the corresponding three-dimensional live-action model will also meet the quality requirements; the precision loss in the image-data stitching process is not considered, so the accuracy of the three-dimensional live-action cannot be comprehensively ensured.
According to the present application, corresponding live-action verification rules are generated from historical experience data and the target live-action requirement, the constructed three-dimensional live-action model is verified against these rules, and only a model that passes verification may be forwarded to the user, avoiding flaws in the three-dimensional live-action model and improving its quality.
Referring to fig. 1-2, an embodiment of a first aspect of the present invention provides a BS architecture-based three-dimensional live-action one-stop processing platform, which includes a live-action construction module, and a demand evaluation module and a data storage module connected with the live-action construction module;
the demand assessment module: acquiring a live-action construction requirement based on a BS framework, and sending the live-action construction requirement to a live-action construction module; carrying out statistical analysis on a plurality of stored live-action construction requirements, determining live-action prediction requirements based on a knowledge graph model, and sending the live-action prediction requirements to a live-action construction module;
and a live-action construction module: acquiring image data from a data storage module based on a live-action construction requirement or a live-action prediction requirement, and constructing a three-dimensional live-action model according to the image data; and evaluating the three-dimensional live-action model according to the live-action verification rule, and forwarding or storing the three-dimensional live-action model passing the evaluation.
The three-dimensional live-action one-stop processing platform is built on a BS architecture, i.e. a browser-and-server architecture: a user generates a live-action construction requirement through the browser, and the requirement is sent to the live-action construction module via the demand assessment module. The browser can be installed in the live-action display terminal, i.e. the live-action construction requirement is generated through the live-action display terminal, and the generated three-dimensional live-action model is viewed there.
The live-action construction module is in communication and/or electrical connection with the demand assessment module and the data storage module respectively; the demand assessment module acquires the live-action construction requirement from the WEB server, and the three-dimensional live-action model that passes the evaluation is forwarded to the live-action display terminal through the WEB server.
The live-action construction module obtains live-action construction requirements through the demand assessment module, then retrieves image data from the data storage module according to those requirements and stitches the image data together to generate the corresponding three-dimensional live-action model. The demand assessment module obtains the live-action construction requirement from the browser, and the corresponding three-dimensional live-action model can be displayed through the browser in the live-action display terminal.
The data storage module is used for storing the constructed three-dimensional live-action model and the image data of the constructed three-dimensional live-action model; the data storage module calls the stored image data according to the data request signal or acquires the image data in real time.
After receiving the target live-action requirement, the live-action construction module analyzes it and generates a corresponding data request signal according to the analysis result, and the data storage module provides image data according to that signal. The data storage module can call the image data it already stores; when the stored image data do not meet the requirements, image data can be acquired in real time through an image acquisition device. The image acquisition device is mainly a camera or another resource platform; the camera can acquire image data handheld or carried by an unmanned aerial vehicle.
One of the key points of the present application is acquiring live-action prediction requirements from a number of live-action construction requirements; this point is explained in detail below.
The live-action prediction requirement is obtained by analyzing the historical live-action construction requirements to judge which three-dimensional live-action construction requirements future users are likely to raise.
The live-action construction module first acquires three-dimensional live-action application data through the Internet, extracts from these data the entities required to construct a knowledge graph model and the association relations among the entities, and then constructs the knowledge graph model.
The three-dimensional live-action application data can be understood as data generated by applying the three-dimensional live-action in particular fields or scenes, including the entities in those fields or scenes and the association relations among the entities. Referring to the invention patent with application number 201310036817X: when applied to the monitoring of facility equipment, the state of the equipment needs to be monitored, so each piece of facility equipment can be understood as an entity and the connection relationships between the equipment as association relations, and a knowledge graph model of the facility-equipment monitoring field can be established accordingly. It should be noted that the main purpose of the knowledge graph model is to obtain predicted keywords highly associated with the high-frequency keywords, where the predicted keywords correspond to the live-action prediction requirements.
A number of knowledge graph models are constructed from the three-dimensional live-action application data corresponding to different fields or scenes, and the constructed knowledge graph models are stored in the live-action construction module or the data storage module. It can be appreciated that the built knowledge graph models require periodic updating and extension.
After the knowledge graph model is constructed, a requirement assessment module in the application combines a plurality of stored real scene construction requirements with the knowledge graph model to obtain real scene prediction requirements, and the method comprises the following steps:
acquiring a plurality of real scene construction requirements; analyzing a plurality of real scene construction requirements to obtain high-frequency keywords; expanding search in a knowledge graph model based on the high-frequency keywords to obtain predicted keywords; and generating a live-action prediction requirement based on the prediction keywords.
The live-action construction requirements are collected and stored from the WEB server through the demand assessment module; they can be sent directly by a user or obtained from other platforms. High-frequency keywords are parsed from the live-action construction requirements, and an expansion search is then performed in the corresponding knowledge graph model according to these keywords to obtain the predicted keywords and thereby generate the live-action prediction requirement.
The method for obtaining the high-frequency keywords is illustrated as follows: parse a number of live-action construction requirements and extract the content words in them; count the proportion of each content word, and select the one or more words with the highest proportion as the high-frequency keywords.
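The keyword-statistics step above can be sketched as follows; the whitespace tokenization, the sample requirement texts, and the function name are illustrative assumptions, since the patent fixes no concrete algorithm:

```python
from collections import Counter

def high_frequency_keywords(requirements, top_n=1):
    """Count words across the requirement texts and return the
    most frequent one(s) as high-frequency keywords."""
    words = []
    for req in requirements:
        words.extend(req.lower().split())  # naive tokenization (assumption)
    counts = Counter(words)
    return [word for word, _ in counts.most_common(top_n)]

# Hypothetical live-action construction requirements
reqs = [
    "transformer monitoring model",
    "transformer inspection scene",
    "substation transformer model",
]
print(high_frequency_keywords(reqs))  # → ['transformer']
```

A real implementation would additionally filter out function words so that only content words are counted, as the description requires.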
This core point is illustrated based on the invention patent with application number 201310036817X:
assuming that the power equipment comprises a transformer, a power transmission line and a smart meter, and the transformer is connected with the smart meter through the power transmission line; the transformer, the power transmission line and the intelligent ammeter are taken as entities, and the relationship between the transformer and the power transmission line and the relationship between the power transmission line and the intelligent ammeter are taken as association relationships; and establishing a knowledge graph model according to the power equipment and the connection relation of the power equipment.
Suppose the high-frequency keyword obtained through statistical analysis is "transformer"; searching for it in the knowledge graph model shows that "power transmission line" is closely related to it and can serve as a target related word, so the relevant image data can be obtained to generate the three-dimensional live-action model corresponding to the power transmission line.
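The transformer example above can be sketched as follows; representing the knowledge graph as an adjacency dictionary and limiting the expansion search to directly associated entities are illustrative assumptions:

```python
# Minimal knowledge graph for the power-equipment example: entities are
# keys, association relations are adjacency lists (an assumed encoding).
knowledge_graph = {
    "transformer": ["power transmission line"],
    "power transmission line": ["transformer", "smart meter"],
    "smart meter": ["power transmission line"],
}

def expand_search(graph, keyword):
    """Return the entities associated with the high-frequency keyword;
    these serve as the predicted keywords."""
    return graph.get(keyword, [])

print(expand_search(knowledge_graph, "transformer"))
# → ['power transmission line']
```

A production knowledge graph would also carry typed relations and could expand over multiple hops; the one-hop lookup here is the simplest form of the expansion search described above.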
The live-action construction module in the application generates a three-dimensional live-action model according to live-action construction requirements or live-action prediction requirements, and comprises the following steps:
receiving a target live-action requirement; analyzing the target live-action requirement, and acquiring corresponding image data from a data storage module; and generating a three-dimensional live-action model by using the image data after the quality evaluation passes.
For the quality evaluation of the image data, refer to the patent application CN112258624A, which evaluates image data through a multi-level fuzzy comprehensive evaluation model; image data that meet the requirement can then be used to construct the three-dimensional live-action model.
The target live-action requirement comprises either a live-action construction requirement or a live-action prediction requirement, i.e. receiving either of the two should trigger the generation of the corresponding three-dimensional live-action model. It should be noted that if a live-action construction requirement and a live-action prediction requirement are received simultaneously, the live-action construction requirement should take precedence.
The second key point of the application of the invention is as follows: how to evaluate the constructed three-dimensional live-action model to ensure the quality thereof.
When the fineness and attractiveness of the three-dimensional live-action model are considered, broken targets are the key: broken targets are target objects prone to phenomena such as holes, missing textures and distortion during three-dimensional live-action construction, i.e. thin or fragmented objects such as traffic signs, billboards, fences, chemical cylinders and street lamps.
The method for evaluating the constructed three-dimensional live-action model through the live-action construction module comprises the following steps:
combining the historical experience data with the target live-action requirements to generate corresponding live-action verification rules; and carrying out retrieval verification on the three-dimensional live-action model through a live-action verification rule, and judging that the three-dimensional live-action model passes the evaluation after the verification is passed.
The live-action verification rule comprises a verification target and the corresponding verification content, where the verification target is a broken target and the verification content includes whether holes occur, whether textures are missing, and so on. Acquiring the live-action verification rule requires combining the historical experience data with the corresponding target live-action requirement; for example, if the target live-action requirement is to construct a three-dimensional live-action model of a certain administrative area, the detection targets can be determined from historical experience data to include guideboards, billboards and the like. It can be understood that the live-action verification rules may differ for different target live-action requirements, and may also grow or shrink as the historical experience data expand.
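A minimal sketch of applying such a verification rule, assuming (hypothetically) that a rule maps each broken-target class to the checks it must pass and that an inspection of the model reports which checks each detected target satisfied:

```python
# Hypothetical rule: verification target -> required verification content
verification_rules = {
    "guideboard": ["no holes", "no missing texture", "no distortion"],
    "billboard": ["no holes", "no missing texture"],
}

def evaluate_model(model_findings, rules):
    """model_findings maps each detected target to the checks it passed;
    the model passes evaluation only if every rule is fully satisfied."""
    for target, required_checks in rules.items():
        passed = set(model_findings.get(target, []))
        if not set(required_checks) <= passed:
            return False
    return True

findings = {
    "guideboard": ["no holes", "no missing texture", "no distortion"],
    "billboard": ["no holes", "no missing texture"],
}
print(evaluate_model(findings, verification_rules))  # → True
```

In practice the checks themselves would be geometric and texture analyses of the mesh; the dictionary form here only illustrates how a rule pairs a verification target with its verification content.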
In order to reduce the storage pressure of the data storage module, the data storage module in the application of the invention periodically detects the three-dimensional live-action model, and performs backup storage according to the detection result, and the method comprises the following steps:
counting the calling frequency of the three-dimensional live-action model within a set time; when the calling frequency does not exceed the frequency threshold, the corresponding three-dimensional live-action model is transferred out for backup. The set time may be, for example, one day or one month.
A three-dimensional live-action model constructed according to a live-action construction requirement is forwarded to the user or the live-action display terminal immediately after the evaluation is completed, and is backed up when its calling frequency remains low over a period of time, relieving the pressure on the data storage module. A three-dimensional live-action model constructed according to a live-action prediction requirement is temporarily stored in the data storage module after the evaluation is completed; if its calling frequency remains low over a period of time, it is likewise backed up.
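The call-frequency backup policy can be sketched as follows; the function name, the sample counts, and the threshold value are illustrative assumptions (the patent only states that the threshold is set according to practical experience):

```python
def models_to_back_up(call_counts, frequency_threshold):
    """Return the models whose calling frequency within the set time
    did not exceed the threshold; these are moved to backup storage."""
    return [model for model, calls in call_counts.items()
            if calls <= frequency_threshold]

# Hypothetical call counts for a one-day window
counts = {"district_A": 42, "district_B": 1, "district_C": 0}
print(models_to_back_up(counts, frequency_threshold=2))
# → ['district_B', 'district_C']
```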
In the present application, if the qualified image data are insufficient to construct the three-dimensional live-action model, qualified image data are acquired again through the data storage module. If the three-dimensional live-action model fails the evaluation, AI techniques can be combined to supplement the image data.
The working principle of the invention is as follows:
the demand assessment module acquires a live-action construction demand based on the BS framework and sends the live-action construction demand to the live-action construction module; and simultaneously, carrying out statistical analysis on a plurality of stored live-action construction requirements, and determining live-action prediction requirements based on a knowledge graph model.
The live-action construction module acquires image data from the data storage module based on live-action construction requirements or live-action prediction requirements, and constructs a three-dimensional live-action model according to the image data; and evaluating the three-dimensional live-action model according to the live-action verification rule, and forwarding or storing the three-dimensional live-action model passing the evaluation.
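The working principle above can be sketched as a wiring of the three modules; all class and method names, and the string stand-in for an actual 3D model, are illustrative assumptions rather than the patent's API:

```python
class DataStorageModule:
    """Stores image data and constructed three-dimensional live-action models."""
    def __init__(self, images):
        self.images = images   # requirement keyword -> list of image data
        self.models = {}       # requirement keyword -> constructed model
    def get_images(self, keyword):
        return self.images.get(keyword, [])

class LiveActionConstructionModule:
    """Builds a model from stored image data for a given requirement."""
    def __init__(self, storage):
        self.storage = storage
    def build(self, requirement):
        images = self.storage.get_images(requirement)
        model = f"3D model of {requirement} from {len(images)} images"
        self.storage.models[requirement] = model   # store the evaluated model
        return model

class DemandAssessmentModule:
    """Receives a construction requirement and forwards it to the builder."""
    def __init__(self, builder):
        self.builder = builder
    def handle(self, requirement):
        return self.builder.build(requirement)

storage = DataStorageModule({"park": ["img1", "img2"]})
platform = DemandAssessmentModule(LiveActionConstructionModule(storage))
print(platform.handle("park"))  # → 3D model of park from 2 images
```

The sketch mirrors the data flow of the platform: the demand assessment module receives the requirement, the construction module pulls image data from storage and builds the model, and the result is stored for later forwarding.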
The above embodiments are only for illustrating the technical method of the present invention and not for limiting the same, and it should be understood by those skilled in the art that the technical method of the present invention may be modified or substituted without departing from the spirit and scope of the technical method of the present invention.

Claims (8)

1. The three-dimensional live-action one-stop processing platform based on the BS framework comprises a live-action construction module, a demand evaluation module and a data storage module, wherein the demand evaluation module and the data storage module are connected with the live-action construction module, and the three-dimensional live-action one-stop processing platform is characterized in that:
the demand assessment module: acquiring a live-action construction requirement based on a BS framework, and sending the live-action construction requirement to the live-action construction module; and
carrying out statistical analysis on a plurality of stored live-action construction requirements, determining live-action prediction requirements based on a knowledge graph model, and sending the live-action prediction requirements to the live-action construction module;
and a live-action construction module: acquiring image data from the data storage module based on the live-action construction requirement or the live-action prediction requirement, and constructing a three-dimensional live-action model according to the image data; and
evaluating the three-dimensional live-action model according to a live-action verification rule, and forwarding or storing the three-dimensional live-action model passing the evaluation; the live-action verification rule is constructed based on combination of historical experience data and live-action construction requirements or live-action prediction requirements.
2. The BS-architecture-based three-dimensional live-action one-stop processing platform of claim 1, wherein the demand evaluation module combines a plurality of stored live-action construction requirements with the knowledge graph model to obtain the live-action prediction requirement by:
acquiring a plurality of live-action construction requirements, the live-action construction requirements being collected from a WEB server and stored by the demand evaluation module;
analyzing the plurality of live-action construction requirements to obtain high-frequency keywords;
performing an expanded search in the knowledge graph model based on the high-frequency keywords to obtain predicted keywords;
and generating the live-action prediction requirement based on the predicted keywords.
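The two analysis steps of claim 2 can be sketched as follows. This is an illustrative reading, not the patented implementation: the keyword counting, the adjacency-dict representation of the knowledge graph model, and the `min_count` threshold are all assumptions.

```python
from collections import Counter

def high_frequency_keywords(requirements, min_count=2):
    """Count keyword occurrences across stored construction requirements
    and keep those that appear at least min_count times (assumed threshold)."""
    counts = Counter(word for req in requirements for word in req.split())
    return [w for w, c in counts.items() if c >= min_count]

def expand_keywords(graph, keywords):
    """Expand each high-frequency keyword to its neighbours in the knowledge
    graph (modelled here as an adjacency dict) to obtain predicted keywords."""
    predicted = set()
    for kw in keywords:
        predicted.update(graph.get(kw, []))
    return sorted(predicted - set(keywords))

requirements = ["campus building model", "campus road model", "bridge survey"]
graph = {"campus": ["library", "dormitory"], "model": ["texture", "mesh"]}
hf = high_frequency_keywords(requirements)  # ["campus", "model"]
print(expand_keywords(graph, hf))           # ['dormitory', 'library', 'mesh', 'texture']
```

A prediction requirement would then be assembled from the expanded keyword set rather than only from the terms users have already requested.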
3. The BS-architecture-based three-dimensional live-action one-stop processing platform of claim 2, wherein the live-action construction module generates the knowledge graph model based on three-dimensional live-action application data by:
acquiring three-dimensional live-action application data, the three-dimensional live-action application data being acquired over the Internet;
extracting a plurality of entities and the association relations among the entities from the three-dimensional live-action application data, and constructing the knowledge graph model by means of a knowledge graph construction method;
and storing the knowledge graph model in the data storage module.
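A minimal sketch of the entity-and-relation step above, under the common assumption that extraction yields (head entity, relation, tail entity) triples; the class name and triple format are illustrative, not taken from the patent.

```python
class KnowledgeGraph:
    """Toy knowledge graph built from (head, relation, tail) triples."""

    def __init__(self):
        self.triples = set()

    def add(self, head, relation, tail):
        self.triples.add((head, relation, tail))

    def neighbours(self, entity):
        """Entities directly reachable from the given entity."""
        return {t for h, _, t in self.triples if h == entity}

kg = KnowledgeGraph()
for h, r, t in [("city", "contains", "district"),
                ("district", "contains", "building"),
                ("building", "has_model", "3d_mesh")]:
    kg.add(h, r, t)
print(kg.neighbours("district"))  # {'building'}
```

In the claimed platform the resulting graph would be serialized into the data storage module so that the demand evaluation module can traverse it during expanded search.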
4. The BS-architecture-based three-dimensional live-action one-stop processing platform of claim 1, wherein the live-action construction module generates the three-dimensional live-action model according to the live-action construction requirement or the live-action prediction requirement by:
receiving a target live-action requirement, the target live-action requirement being a live-action construction requirement or a live-action prediction requirement;
analyzing the target live-action requirement and acquiring corresponding image data from the data storage module; and generating the three-dimensional live-action model from the image data after the image data passes quality evaluation.
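The pipeline of claim 4 (requirement in, images fetched, quality-checked, model built) can be outlined as below. The field names, the quality score, and the 0.8 threshold are assumed for illustration; the patent does not specify a quality metric.

```python
def build_model(requirement, storage):
    """Fetch image data matching the target requirement, keep only images
    that pass the (assumed) quality check, then build a model from them."""
    images = storage.get(requirement["area"], [])
    passed = [img for img in images if img["quality"] >= 0.8]  # assumed threshold
    if not passed:
        return None  # no usable image data: no model is generated
    return {"area": requirement["area"], "images": len(passed)}

storage = {"park": [{"quality": 0.9}, {"quality": 0.5}, {"quality": 0.85}]}
print(build_model({"area": "park"}, storage))  # {'area': 'park', 'images': 2}
```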
5. The BS-architecture-based three-dimensional live-action one-stop processing platform of claim 4, wherein the live-action construction module evaluates the constructed three-dimensional live-action model by:
combining the historical experience data with the target live-action requirement to generate a corresponding live-action verification rule, the live-action verification rule comprising a verification target and corresponding verification content;
and performing retrieval verification on the three-dimensional live-action model using the live-action verification rule, the three-dimensional live-action model being judged to pass the evaluation once the verification passes.
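One way to read "verification target plus verification content" is as a (target, check) pair derived from historical experience. The sketch below is a hedged interpretation: the dict fields, the face-count criterion, and the default of 1000 are invented for illustration only.

```python
def build_rule(history, requirement):
    """Combine historical experience data with the target requirement:
    look up the minimum face count (assumed criterion) for this model type."""
    min_faces = history.get(requirement["type"], 1000)  # assumed default
    return {"target": requirement["type"],
            "check": lambda model: model["faces"] >= min_faces}

def evaluate(model, rule):
    """A model passes when it matches the verification target and the
    verification content (the check) succeeds."""
    return model["type"] == rule["target"] and rule["check"](model)

history = {"building": 5000}
rule = build_rule(history, {"type": "building"})
print(evaluate({"type": "building", "faces": 8000}, rule))  # True
print(evaluate({"type": "building", "faces": 1200}, rule))  # False
```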
6. The BS-architecture-based three-dimensional live-action one-stop processing platform of claim 1 or 4, wherein the data storage module periodically checks the three-dimensional live-action model and performs backup storage according to the check result by:
counting the call frequency of the three-dimensional live-action model within a set time;
and when the call frequency does not exceed a frequency threshold, transferring the corresponding three-dimensional live-action model out for backup; wherein the frequency threshold is set according to practical experience.
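The periodic check of claim 6 reduces to a threshold filter over per-model call counts. A minimal sketch, where the threshold value of 3 is an assumption (the claim only says it is set according to practical experience):

```python
FREQUENCY_THRESHOLD = 3  # assumed value; the patent leaves this to practical experience

def models_to_backup(call_counts, threshold=FREQUENCY_THRESHOLD):
    """Return IDs of models whose call frequency within the set time window
    does not exceed the threshold, i.e. candidates for backup storage."""
    return [mid for mid, count in call_counts.items() if count <= threshold]

calls = {"model_a": 12, "model_b": 2, "model_c": 0}
print(models_to_backup(calls))  # ['model_b', 'model_c']
```

Rarely called models are thus moved out of hot storage, while frequently requested models stay immediately available to the WEB server.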
7. The BS-architecture-based three-dimensional live-action one-stop processing platform of claim 1, wherein the live-action construction module is communicatively and/or electrically connected with the demand evaluation module and the data storage module, respectively;
the demand evaluation module acquires the live-action construction requirement from a WEB server, and the three-dimensional live-action model that passes the evaluation is forwarded through the WEB server to a live-action display terminal; the live-action display terminal comprises a smartphone and a computer.
8. The BS-architecture-based three-dimensional live-action one-stop processing platform of claim 7, wherein the data storage module is configured to store the constructed three-dimensional live-action model and the image data used to construct the three-dimensional live-action model;
the data storage module calls stored image data or acquires image data in real time according to a data request signal; wherein the data request signal is generated by the live-action construction module.
CN202310601247.8A 2023-05-25 2023-05-25 Three-dimensional live-action one-stop processing platform based on BS architecture Active CN116630543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310601247.8A CN116630543B (en) 2023-05-25 2023-05-25 Three-dimensional live-action one-stop processing platform based on BS architecture

Publications (2)

Publication Number Publication Date
CN116630543A true CN116630543A (en) 2023-08-22
CN116630543B CN116630543B (en) 2024-03-08

Family

ID=87609454


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110704743A (en) * 2019-09-30 2020-01-17 北京科技大学 Semantic search method and device based on knowledge graph
CN112052304A (en) * 2020-08-18 2020-12-08 中国建设银行股份有限公司 Course label determining method and device and electronic equipment
WO2021107445A1 (en) * 2019-11-25 2021-06-03 주식회사 데이터마케팅코리아 Method for providing newly-coined word information service based on knowledge graph and country-specific transliteration conversion, and apparatus therefor
CN113486136A (en) * 2021-08-04 2021-10-08 泰瑞数创科技(北京)有限公司 Method and system for assembling geographic entity service on demand
CN115269751A (en) * 2022-05-10 2022-11-01 泰瑞数创科技(北京)股份有限公司 Method for constructing geographic entity space-time knowledge map ontology base


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUNHAO ZHANG et al.: "The construction of personalized virtual landslide disaster environments based on knowledge graphs and deep neural networks", International Journal of Digital Earth, vol. 13, no. 12 *
GUO Jingjuan; TIAN Fang: "Analysis of the research status of BIM in the rail transit field based on knowledge graphs", Journal of Beijing Jiaotong University (Social Science Edition), no. 03 *


Similar Documents

Publication Publication Date Title
CN109543513A (en) Method, apparatus, equipment and the storage medium that intelligent monitoring is handled in real time
CN110807085B (en) Fault information query method and device, storage medium and electronic device
CN101207524A (en) Method and system for supervising broadcast of web advertisement
CN106646110A (en) Low-voltage distribution network fault positioning system based on GIS and Petri technologies
CN114897329A (en) Power transmission line inspection method, device and system and storage medium
CN110728548B (en) VR tourism product evaluation system
CN112233428A (en) Traffic flow prediction method, traffic flow prediction device, storage medium and equipment
CN111932200A (en) Remote bidding evaluation system
CN111951390A (en) Warning situation display method, system, device and storage medium
CN116630543B (en) Three-dimensional live-action one-stop processing platform based on BS architecture
CN111597361B (en) Multimedia data processing method, device, storage medium and equipment
CN111337133B (en) Infrared data generation method and device and infrared data analysis method and device
CN115438812A (en) Life-saving management method and device for power transmission equipment, computer equipment and storage medium
CN110766322B (en) Big data-based VR (virtual reality) tourism product evaluation method
CN112153464A (en) Smart city management system
CN113177883A (en) Data queue-based arrangement transmission system
CN113449015A (en) Power grid fault processing method and device and electronic equipment
CN111553497A (en) Equipment working state detection method and device of multimedia terminal
CN111143688A (en) Evaluation method and system based on mobile news client
Wang et al. DeepAdaIn-Net: Deep Adaptive Device-Edge Collaborative Inference for Augmented Reality
CN116054414B (en) Line defect hidden danger monitoring method, device, computer equipment and storage medium
CN109740858A (en) Automation aid decision-making system and method based on deep learning
CN114245070B (en) Method and system for centralized viewing of regional monitoring content
Zheng et al. Urban Image Segmentation in Media Integration Era Based on Improved Sparse Matrix Generation of Digital Image Processing.
CN114501163A (en) Video processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant