CN113205576B - Scene reproduction method and reproduction system - Google Patents


Info

Publication number
CN113205576B
Authority
CN
China
Prior art keywords
event
data
scene
model
reproduction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110591915.4A
Other languages
Chinese (zh)
Other versions
CN113205576A (en
Inventor
肖鸣仟 (Xiao Mingqian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xindong Digital Information Co ltd
Original Assignee
Shenzhen Xindong Digital Information Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xindong Digital Information Co ltd filed Critical Shenzhen Xindong Digital Information Co ltd
Priority to CN202110591915.4A priority Critical patent/CN113205576B/en
Publication of CN113205576A publication Critical patent/CN113205576A/en
Application granted granted Critical
Publication of CN113205576B publication Critical patent/CN113205576B/en
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/61Scene description

Abstract

The invention discloses a scene reproduction method and system, applied in the technical field of scene restoration. The method acquires an event data set; constructs an event reproduction model and a scene reproduction model from the data set; fuses at least one event output by the event reproduction model with the scene reproduction model; eliminates false events by adding artificially inferred data until a unique event is determined; and displays the result in animated form. Because the method includes a prediction function, the occurrence of a new event can be inferred simply by updating the data in the event reproduction model and the scene reproduction model. Rendered through VR technology, the result is highly visual and plays an important role in applications such as simulated surgery for doctors, soldier training, and police investigation.

Description

Scene reproduction method and reproduction system
Technical Field
The invention relates to the technical field of scene restoration, in particular to a scene reproduction method and a scene reproduction system.
Background
Although big data technology is advancing rapidly, the analysis of specific scenes still relies mostly on human experience, so the analysis results inevitably suffer from oversights. Simulation technology has therefore been developed to reproduce specific scenes, but prior-art scene simulation has the following drawbacks:
1. the data have non-uniform structures and poor interoperability, which makes building a simulation model difficult;
2. the data are incomplete, so the scene cannot be reproduced; and so on.
Therefore, a scene reproduction system and method that can truly restore a scene and solve the problem of poor data interoperability is an urgent need for those skilled in the art.
Disclosure of Invention
In view of the above, the present invention provides a scene reproduction method and reproduction system that not only achieve realistic scene restoration but also solve the problem of poor data interoperability, because the simulation models are constructed from the same data.
In order to achieve the above object, the present invention provides the following technical solutions:
in one aspect, the invention discloses a scene reproduction method, which comprises the following specific steps:
obtaining an event data set, the event data set comprising: event data and scene data;
respectively constructing an event reproduction model and a scene reproduction model by utilizing the event data and the scene data;
fusing at least one event output by the event reproduction model with the scene reproduction model;
eliminating false events by adding artificial inference data until a unique event is determined;
the display is performed in an animation form.
Preferably, in the above-mentioned scene reproduction method, the specific step of constructing the event reproduction model includes:
establishing a relation between event data and event results, and constructing an event result occurrence mathematical model;
performing feature selection by using the event data to construct a training set;
the training set is input into an RF model for training, and an event reproduction model is obtained.
Preferably, in the above-mentioned scene reproduction method, the expression of the event-result occurrence mathematical model is:
θ(t+1) = f(x₁(t), x₂(t), …, xₙ₋₁(t), xₙ(t)); where f(·) is a nonlinear function and xᵢ(t) is an influencing factor of event-result occurrence.
Preferably, in the above-described scene reproduction method, the artificial inference data is obtained through artificial experience.
Preferably, in the above method for reproducing a scene, the event reproduction model excludes events by adding artificially inferred data, where g(·) and f′(·) are nonlinear functions, xᵢ(t) is an influencing factor of event-result occurrence, and uᵢ(t) is the artificially inferred data about the event result.
On the other hand, the invention also discloses a scene reproduction system, which comprises:
an acquisition module for acquiring an event data set, the event data set comprising: event data and scene data;
the model construction module is used for constructing an event reproduction model and a scene reproduction model by utilizing the event data and the scene data respectively;
the fusion module is used for fusing at least one event output by the event reproduction model with the scene reproduction model;
the false elimination module eliminates false events by adding artificial inferred data until a unique event is determined;
and the display module displays in an animation mode.
Preferably, in the above-mentioned scene reproduction system, the model building module includes an event reproduction model building module and a scene reproduction model building module;
the event reproduction model construction module includes:
the association relation establishing unit establishes a relation between the event data and the event result and establishes an event result occurrence mathematical model;
the feature selection unit is used for carrying out feature selection by utilizing the event data to construct a training set;
and the training unit is used for inputting the training set into the RF model for training to obtain an event reproduction model.
The scene reproduction model construction module includes:
the recording unit, which acquires recorded data from the scene, including a frame-state data list for each of a plurality of logical frames, where a logical frame is a frame with a state update in the state synchronization algorithm, and a state update comprises: a scene state change and/or an object state change;
and the restoring unit, which generates a scene restoration video from the scene states and object states produced in the plurality of logical frames and plays the scene restoration video.
Preferably, in the above-mentioned scene reproduction system, the association relation establishing unit determines that the expression of the event-result occurrence mathematical model is:
θ(t+1) = f(x₁(t), x₂(t), …, xₙ₋₁(t), xₙ(t)); where f(·) is a nonlinear function and xᵢ(t) is an influencing factor of event-result occurrence.
Preferably, in the above-mentioned scene reproduction system, the false elimination module includes: the artificial inferred data acquisition unit is used for acquiring artificial inferred data;
the artificial inferred data discrimination unit discriminates the possibility of the acquired artificial inferred data and determines the authenticity probability of the artificial inferred data;
and the artificial inference data output unit adds the artificially inferred data to the event reproduction model in descending order of authenticity probability to exclude false events.
Preferably, in the above-mentioned scene reproduction system, the false event excluding module excludes events by adding artificially inferred data, where g(·) and f′(·) are nonlinear functions, xᵢ(t) is an influencing factor of event-result occurrence, and uᵢ(t) is the artificially inferred data about the event result.
Compared with the prior art, the scene reproduction method and system provided by the invention can infer the occurrence of a new event simply by updating the data in the event reproduction model and the scene reproduction model; through VR technology the result is highly visual, and the invention plays an important role in applications such as simulated surgery for doctors, soldier training, and police investigation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the overall method of the present invention;
FIG. 2 is a flow chart of an event reproduction model construction method of the present invention;
FIG. 3 is a flowchart of a scene reproduction model construction method of the present invention;
fig. 4 is a block diagram showing the overall structure of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It can be understood that, in the scene reproduction method provided by the embodiment of the invention, the event reproduction model yields the various possibilities that could have caused the event, and false possibilities are eliminated by adding constraint conditions until the unique event is finally obtained. Taking a murder case as an example, an embodiment of the present invention discloses a scene reproduction method, as shown in fig. 1, comprising the following specific steps:
s101 acquires an event data set including: event data and scene data;
event data and scene data are obtained by investigators examining the crime scene.
Specifically, the event data include physical evidence acquired at the crime scene, such as the crime tool, biological features of the victim, biological features of other persons, the death event, the cause of death, etc.;
the scene data include the environment, weather, temperature, humidity, etc. of the crime scene;
in addition to the above, the event data set may also include witness testimony, video footage of suspicious persons, and so on; in short, any feature associated with the murder case can serve as data in the event data set.
S102, respectively constructing an event reproduction model and a scene reproduction model by using event data and scene data;
specifically, as shown in fig. 2, the specific steps of constructing the event reproduction model include:
s1021 establishes a relation between event data and event results, and builds an event result occurrence mathematical model;
specifically, the event outcome occurs from an expression of the mathematical model:
θ(t+1)=f(x 1 (t),x 2 (t),......,x n-1 (t),x n (t)); wherein f (·) is a nonlinear function, x i And (t) is an influencing factor of event result occurrence.
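As an illustration only, f(·) can be any nonlinear map from the current influencing factors to the next event-result value; the logistic combination, the weights, and the factor values below are hypothetical stand-ins, not the function the patent actually constructs:

```python
import math

def event_outcome(factors, weights, bias=0.0):
    """theta(t+1) = f(x_1(t), ..., x_n(t)): an illustrative
    nonlinear f, here a logistic function of a weighted sum."""
    z = bias + sum(w * x for w, x in zip(weights, factors))
    return 1.0 / (1.0 + math.exp(-z))  # squashed into (0, 1)

# influencing factors x_i(t) observed at time t (hypothetical values)
theta_next = event_outcome([0.4, 1.2, -0.3], weights=[0.5, 0.8, 1.1])
```

Any other nonlinear f with the same signature would fit the formula equally well; the point is only that the next event-result value depends jointly on all current influencing factors.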
S1022, performing feature selection by using the event data, and constructing a training set;
s1023, inputting the training set into the RF model for training to obtain an event reproduction model.
Specifically, as shown in fig. 3, the specific steps of constructing the scene reproduction model are as follows:
S1024 obtains the recorded data from the scene, including a frame-state data list for each of a plurality of logical frames, where a logical frame is a frame with a state update in the state synchronization algorithm, and a state update comprises: a scene state change and/or an object state change;
S1025 generates a scene restoration video from the scene states and object states produced in the plurality of logical frames and plays the scene restoration video.
Further, generating the scene restoration video comprises acquiring scene basic information and object basic information;
for example, the scene basic information includes the size of the room at the scene, environmental factors, and the like, and the object basic information includes the indoor furnishings and the like.
S103, fusing at least one event output by the event reproduction model with the scene reproduction model;
the event reproduction model can obtain at least one event possibility according to the event data and the event occurrence result, outputs all event possibilities, and can preliminarily exclude the event which is not applicable to the current scene through the scene basic information and the object basic information in the scene reproduction model when the event reproduction model is placed in the scene reproduction model.
S104, eliminating false events by adding artificial inference data until a unique event is determined;
the event reproduction model excludes events by adding artificial inferred data, and the specific mathematical expression is:
wherein g (·), f' (·) is a nonlinear function, x i (t) is the influencing factor of event outcome occurrence; u (u) i (t) human inferred data for the occurrence of event results.
Further, the artificially inferred data can exclude some events: for example, if traces of a certain person are found at the scene and that person previously had a conflict with the deceased, the likelihood that this person is a suspect increases; inputting this person's presence as a feature into the event reproduction model then excludes part of the events.
S105 is presented in an animated form.
Further, the determined event data, scene basic information, and object basic information of the unique event can be processed into three-dimensional data, a three-dimensional presentation model can be built from the three-dimensional data according to preset model construction rules, and the historical event can be restored using VR technology.
Through steps S101-S105, the event reproduction model and the scene reproduction model are determined from the event data set, the two models are fused, the events output by the event reproduction model are constrained, and the first batch of false events is eliminated;
further, artificially inferred data are added: a probability generation model determines the probability that each piece of artificially inferred data affects the occurrence of the event, the data are added in descending order of that probability, and further false events are eliminated until only the unique event remains.
In another embodiment of the present invention, a scene reproduction system is disclosed, as shown in fig. 4, comprising:
the acquisition module is used for acquiring an event data set, wherein the event data set comprises: event data and scene data;
the model construction module is used for constructing an event reproduction model and a scene reproduction model by utilizing the event data and the scene data respectively;
the fusion module fuses at least one event output by the event reproduction model with the scene reproduction model;
the false elimination module eliminates false events by adding artificial inferred data until a unique event is determined;
and the display module displays in an animation mode.
In order to further optimize the technical scheme, the model building module comprises an event reproduction model building module and a scene reproduction model building module;
the event reproduction model construction module includes:
the association relation establishing unit establishes a relation between the event data and the event result and establishes an event result occurrence mathematical model;
the feature selection unit is used for performing feature selection by using the event data to construct a training set;
and the training unit is used for inputting the training set into the RF model for training to obtain an event reproduction model.
In order to further optimize the above technical solution, the association relation establishing unit determines that the expression of the event-result occurrence mathematical model is:
θ(t+1) = f(x₁(t), x₂(t), …, xₙ₋₁(t), xₙ(t)); where f(·) is a nonlinear function and xᵢ(t) is an influencing factor of event-result occurrence.
In order to further optimize the technical scheme, the training unit trains each event data and each event result independently, and finally, the event reproduction model is obtained after feature fusion.
In order to further optimize the above technical solution, the false elimination module includes: the artificial inferred data acquisition unit is used for acquiring artificial inferred data;
the artificial inferred data discrimination unit discriminates the possibility of the acquired artificial inferred data and determines the authenticity probability of the artificial inferred data;
and the artificial inference data output unit adds the artificially inferred data to the event reproduction model in descending order of authenticity probability to eliminate false events.
In order to further optimize the above technical solution, the false event elimination module eliminates events by adding artificially inferred data, where g(·) and f′(·) are nonlinear functions, xᵢ(t) is an influencing factor of event-result occurrence, and uᵢ(t) is the artificially inferred data about the event result.
Owing to the prediction function of the invention, the occurrence of a new event can be inferred simply by updating the data in the event reproduction model and the scene reproduction model; through VR technology the visualization is excellent, which plays an important role in applications such as simulated surgery for doctors, soldier training, and police investigation.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and the identical or similar parts of the embodiments may be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (4)

1. A scene reproduction method is characterized by comprising the following specific steps:
obtaining an event data set, the event data set comprising: event data and scene data;
respectively constructing an event reproduction model and a scene reproduction model by utilizing the event data and the scene data; the specific steps of constructing the event reproduction model include: establishing a relation between the event data and the event results, and constructing an event-result occurrence mathematical model; the expression of the event-result occurrence mathematical model is: θ(t+1) = f(x₁(t), x₂(t), …, xₙ₋₁(t), xₙ(t)), where f(·) is a nonlinear function and xᵢ(t) is an influencing factor of event-result occurrence;
performing feature selection by using the event data to construct a training set;
the training set is input into an RF model for training to obtain an event reproduction model;
fusing at least one event output by the event reproduction model with the scene reproduction model;
eliminating false events by adding artificially inferred data until a unique event is determined; the event reproduction model excludes events by adding the artificially inferred data, where g(·) and f′(·) are nonlinear functions, xᵢ(t) is an influencing factor of event-result occurrence, and uᵢ(t) is the artificially inferred data about the event result;
the display is performed in an animation form.
2. A scene rendering method according to claim 1, wherein the artificial extrapolated data is obtained through artificial experience.
3. A scene reproduction system, comprising:
an acquisition module for acquiring an event data set, the event data set comprising: event data and scene data;
the model construction module is used for constructing an event reproduction model and a scene reproduction model by utilizing the event data and the scene data respectively; the model construction module comprises an event reproduction model construction module and a scene reproduction model construction module;
the event reproduction model construction module includes: the association relation establishing unit establishes a relation between the event data and the event result and establishes an event result occurrence mathematical model; the association relation establishing unit determines that the event result occurrence mathematical model expression is:
θ(t+1)=f(x 1 (t),x 2 (t),......,x n-1 (t),x n (t)); wherein f (·) is a nonlinear function, x i (t) is the influencing factor of event outcome occurrence;
the feature selection unit is used for carrying out feature selection by utilizing the event data to construct a training set;
the training unit is used for inputting the training set into the RF model for training to obtain an event reproduction model fusion module, and fusing at least one event output by the event reproduction model with the scene reproduction model;
the false elimination module eliminates false events by adding artificial inferred data until a unique event is determined; the false event elimination module eliminates the event by adding artificial inferred data, and the specific mathematical expression is as follows:
wherein g (·), f' (·) is a nonlinear function, x i (t) is the influencing factor of event outcome occurrence; u (u) i (t) human inferred data for the occurrence of event results;
and the display module displays in an animation mode.
4. A scene rendering system according to claim 3, wherein said spurious elimination module comprises: the artificial inferred data acquisition unit is used for acquiring artificial inferred data;
the artificial inferred data discrimination unit discriminates the possibility of the acquired artificial inferred data and determines the authenticity probability of the artificial inferred data;
and the artificial inference data output unit adds the artificially inferred data to the event reproduction model in descending order of authenticity probability to exclude false events.
CN202110591915.4A 2021-05-28 2021-05-28 Scene reproduction method and reproduction system Active CN113205576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110591915.4A CN113205576B (en) 2021-05-28 2021-05-28 Scene reproduction method and reproduction system


Publications (2)

Publication Number — Publication Date
CN113205576A (en) — 2021-08-03
CN113205576B (en) — 2024-02-27

Family

ID=77023477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110591915.4A Active CN113205576B (en) 2021-05-28 2021-05-28 Scene reproduction method and reproduction system

Country Status (1)

Country Link
CN (1) CN113205576B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080097403A (en) * 2008-07-14 2008-11-05 Playdata Systems, Inc. Method and system for creating event data and making same available to be served
KR102120780B1 (en) * 2019-11-13 2020-06-09 Korea Institute of Ocean Science and Technology System and method for simulating and analyzing marine accident
CN111563313A (en) * 2020-03-18 2020-08-21 Research Institute of Highway, Ministry of Transport Driving event simulation reproduction method, system, equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005069253A1 (en) * 2004-01-16 2005-07-28 R3 Consulting Pty Ltd Real-time training simulation system and method
PL2870567T3 (en) * 2012-07-04 2017-05-31 Virtually Live (Switzerland) Gmbh Method and system for real-time virtual 3d reconstruction of a live scene, and computer-readable media
US10453172B2 (en) * 2017-04-04 2019-10-22 International Business Machines Corporation Sparse-data generative model for pseudo-puppet memory recast
US11392733B2 (en) * 2018-08-03 2022-07-19 EMC IP Holding Company LLC Multi-dimensional event model generation
EP4234881A3 (en) * 2018-11-29 2023-10-18 BP Exploration Operating Company Limited Das data processing to identify fluid inflow locations and fluid type


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Konrad Kowalczyk et al., "Parametric Spatial Sound Processing: A flexible and efficient solution to sound scene acquisition, modification, and reproduction," IEEE Signal Processing Magazine, pp. 31-42. *
Li Yibing, "Research on a Simulation and Analysis System for Road Traffic Accident Reconstruction," Automotive Engineering, no. 4, pp. 226-229, 265. *

Also Published As

Publication number Publication date
CN113205576A (en) 2021-08-03


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20231102
Address after: 518000 Building B, Nanyuan Commercial Building, Nanyuan New Village, Hongshan Community, Minzhi Street, Longhua District, Shenzhen City, Guangdong Province 229
Applicant after: Shenzhen Xindong Digital Information Co.,Ltd.
Address before: 518131 406, block B, Nanyuan commercial building, Nanyuan new village, North Station community, Minzhi street, Longhua District, Shenzhen, Guangdong Province
Applicant before: SHENZHEN XINDONG INFORMATION TECHNOLOGY Co.,Ltd.
GR01 Patent grant