CN111078751A - Method and system for carrying out target statistics based on UNREAL4 - Google Patents


Info

Publication number
CN111078751A
CN111078751A
Authority
CN
China
Prior art keywords
target
data
motion
scene
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911291326.3A
Other languages
Chinese (zh)
Inventor
赵伟玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wanyi Digital Technology Co ltd
Original Assignee
Wanyi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wanyi Technology Co Ltd
Priority to CN201911291326.3A
Publication of CN111078751A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/245: Query processing
    • G06F16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2462: Approximate or statistical queries
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/08: Indexing scheme involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for carrying out target statistics based on UNREAL4, which is used for carrying out statistics on data related to a target to be measured in a target scene, and which comprises the following steps: step S1, building a virtual scene corresponding to the target scene in UNREAL4; step S2, collecting characteristic data of the target to be measured in the target scene, and establishing a target model in UNREAL4 according to the characteristic data; step S3, acquiring motion data of the target to be measured, and continuously updating the motion state of the corresponding target model in the virtual scene according to the motion data, so as to obtain the motion process of the target model; and step S4, according to the motion process of the target model, counting the data related to the target models in the virtual scene whose motion process matches a preset motion process. The invention also discloses a system for carrying out target statistics based on UNREAL4, which is used for counting the targets to be measured that follow a given motion process in a target scene and for carrying out data analysis according to the statistical result.

Description

Method and system for carrying out target statistics based on UNREAL4
Technical Field
The invention relates to the technical field of virtual reality, in particular to a method and a system for carrying out target statistics based on UNREAL 4.
Background
The existing big data analysis mainly collects data through hardware equipment, and the collected data are mainly characterized by reality (Veracity), Value, many types (Variety), high speed (Velocity), large data Volume, and the like. Post-processing of the data mainly consists in analyzing the characteristics of the data on a terminal with strong computing capability, classifying and summarizing them according to a specific statistical mode, and displaying the summarized data in chart form.
Because the electronic data analysis mode is fixed and the differences between data items are large, the statistical results are easily confused and may fall outside the expected range, so that secondary manual processing is sometimes required. In the process of statistical data analysis, on the one hand, the statistics themselves present certain difficulties: if the distinguishability between data items is low, misclassification easily occurs, producing large errors in the statistical result. On the other hand, the numerical values obtained after the statistics are completed are obscure and difficult to understand, data displayed in charts are not intuitive enough and cannot reflect the statistical process, so the statistical result is difficult to trust; in severe cases, the statistical results may even need to be verified manually. How to simplify the statistical process for target data, and make the information obtained by statistical analysis more intuitive and easier to accept, has therefore become a major problem.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a method and a system for carrying out target statistics based on UNREAL4, which are used for simplifying the target data statistics process and enabling information obtained through statistical analysis to be more visual and more acceptable to people.
In order to solve the technical problem, the invention provides a method for carrying out target statistics based on UNREAL4, which is used for carrying out statistics on relevant data of a target to be measured in a target scene, and the method comprises the following steps:
step S1, a virtual scene corresponding to the target scene is built in UNREAL 4;
step S2, collecting the characteristic data of the target to be detected in the target scene, and establishing a target model in UNREAL4 according to the characteristic data of the target to be detected;
step S3, acquiring motion data of the target to be detected, and continuously updating the motion state of the corresponding target model in the virtual scene according to the motion data of the target to be detected to obtain the motion process of the target model;
and step S4, according to the motion process of the target model, counting the data related to the target models in the virtual scene whose motion process matches a preset motion process.
The invention also provides a system for carrying out target statistics based on UNREAL4, wherein the system for carrying out target statistics based on UNREAL4 comprises:
the scene building module is used for building a virtual scene corresponding to the target scene in UNREAL 4;
the model establishing module is used for acquiring the characteristic data of the target to be detected in the target scene and establishing a target model in UNREAL4 according to the characteristic data of the target to be detected;
the model updating module is used for acquiring motion data of the target to be detected, and continuously updating the motion state of the corresponding target model in the virtual scene according to the motion data of the target to be detected to obtain the motion process of the target model;
and the model counting module is used for counting, according to the motion process of the target model, the data related to the target models in the virtual scene whose motion process matches a preset motion process.
According to the method and the system for carrying out target statistics based on UNREAL4, a virtual scene corresponding to a target scene is built in UNREAL4, a target model corresponding to a target to be measured is established in the virtual scene, the motion state of the target model is continuously updated according to the motion data of the target to be measured, and finally the motion data of the targets to be measured are counted by counting the target models that match a preset motion process. By establishing the virtual scene and the target model, the process of data statistics becomes more intuitive, the statistical principle is easy to understand, the process of data analysis is simplified, and the information obtained by statistics, displayed in concentrated form through the models, is easier to accept.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a flowchart of a method for performing target statistics based on UNREAL4 in an embodiment of the present invention.
Fig. 2 is a sub-flowchart of step S1 in fig. 1.
Fig. 3 is a sub-flowchart of step S2 in fig. 1.
Fig. 4 is a sub-flowchart of step S3 in fig. 1.
Fig. 5 is a sub-flowchart of step S4 in fig. 1.
Fig. 6 is a block diagram of a system for performing target statistics based on UNREAL4 according to an embodiment of the present invention.
Fig. 7 is a block diagram of the structure of the scene building module 10 in fig. 6.
Fig. 8 is a block diagram of the structure of the model building block 20 in fig. 6.
Fig. 9 is a block diagram of the structure of the model update module 30 in fig. 6.
Fig. 10 is a block diagram of the structure of the model statistics module 40 in fig. 6.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
In the description of the embodiments of the present invention, it should be understood that the terms "first" and "second" are only used for convenience in describing the present invention and simplifying the description, and thus, should not be construed as limiting the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a method for performing target statistics based on UNREAL4 according to an embodiment of the present invention.
As shown in fig. 1, the present invention provides a method for performing target statistics based on UNREAL4, which is used for performing statistics on data related to a target to be measured in a target scene, and which includes the following steps: step S1, building a virtual scene corresponding to the target scene in UNREAL4; step S2, collecting characteristic data of the target to be detected in the target scene, and establishing a target model in UNREAL4 according to the characteristic data; step S3, acquiring motion data of the target to be detected, and continuously updating the motion state of the corresponding target model in the virtual scene according to the motion data, so as to obtain the motion process of the target model; and step S4, according to the motion process of the target model, counting the data related to the target models in the virtual scene whose motion process matches a preset motion process.
Therefore, according to the method for performing target statistics based on UNREAL4, a virtual scene corresponding to the target scene is built in UNREAL4, a target model corresponding to the target to be measured is established in the virtual scene, the motion state of the target model is continuously updated according to the motion data of the target to be measured, and finally the motion data of the targets to be measured are counted by counting the target models that match the preset motion process. By establishing the virtual scene and the target model, the process of data statistics becomes more intuitive, the statistical principle is easy to understand, the process of data analysis is simplified, and the information obtained by statistics, displayed in concentrated form through the models, is easier to accept.
Referring to fig. 2, fig. 2 is a sub-flowchart of step S1 in fig. 1.
As shown in fig. 2, in some embodiments, the step S1 includes: step S11, acquiring environmental data of a target scene; step S12, inputting the environment data of the target scene into UNREAL 4; and step S13, building a virtual scene corresponding to the target scene in UNREAL4 according to the environment data.
UNREAL4, that is, Unreal Engine 4, is the fourth generation of the Unreal engine; it is a game engine, developed in particular for mobile terminals.
The game engine is the core component of certain edited terminal game systems or interactive real-time image applications; it provides the various tools that game designers need to write games, with the aim of allowing game designers to make game programs on mobile terminals easily and quickly.
The mobile terminal includes a Central Processing Unit (CPU), which may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and so on. The general-purpose processor may be a microprocessor or any conventional processor; the processing unit is the data processing center of the terminal, and the various modules of the terminal are connected by wired or wireless links.
The target to be measured is a real object of statistical analysis; the target to be detected moves in a specific area, and the target to be detected needs to be associated with the statistical content; for example, when the flow of people in a certain shop in a mall is counted, the target to be measured is all the customers in the mall.
The target scene is a real scene of the activity of the target to be detected; the target scene, namely the specific area, is also used for carrying out activities related to the statistical content of the target to be detected in the target scene; for example, when the traffic of a certain shop in a mall is counted, a certain floor of the mall in which the shop is located is set as a target scene.
The related data is data which is obtained by carrying out detection statistics on the target to be detected in the target scene and is associated with the target to be detected; the related data can be the statistical content; the related data may be the number of the target to be measured, the staying time of the target to be measured in a certain area, the motion track of the target to be measured, and the like.
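The "staying time" statistic mentioned above can be made concrete with a short sketch. The patent gives no code, so everything below is a hypothetical stand-in: `TrackPoint`, `dwell_time` and the doorway zone are invented names, and a real implementation would read positions from the target models in UNREAL4 rather than from a list.

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    t: float   # timestamp in seconds
    x: float   # position in the target scene (metres)
    y: float

def dwell_time(track, in_area):
    """Total time the target spends inside an area of interest.

    `track` is a time-ordered list of TrackPoint; `in_area` is a
    predicate on (x, y). Each sample is credited with the interval
    up to the next sample while the target is inside the area.
    """
    total = 0.0
    for a, b in zip(track, track[1:]):
        if in_area(a.x, a.y):
            total += b.t - a.t
    return total

# Example: a customer lingers in a 2 m x 2 m doorway zone of a shop.
track = [TrackPoint(0.0, 5.0, 0.0), TrackPoint(1.0, 1.0, 1.0),
         TrackPoint(2.0, 1.5, 1.5), TrackPoint(3.0, 6.0, 0.0)]
in_doorway = lambda x, y: 0.0 <= x <= 2.0 and 0.0 <= y <= 2.0
print(dwell_time(track, in_doorway))  # 2.0
```

The same track, fed to a different `in_area` predicate, yields the staying time for any other region, and counting distinct tracks gives the "number of targets" statistic.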
The virtual scene is a virtualized three-dimensional environment scene built in the UNREAL 4; the virtual scene is an environmental scene highly restored in the UNREAL4 using modeling techniques according to the target scene. In addition, the environmental change in the virtual scene is consistent with the environmental change of the target scene.
The environment data refers to data of environment elements such as buildings, terrains, vegetation, layout and the like in a target scene; the data of the environment elements includes size data and position data of the environment elements.
Specifically, the environment data of the target scene is obtained through manual measurement and calculation, and is input into the terminal installed with the UNREAL 4.
After the terminal acquires the environmental data of a target scene, starting UNREAL4, and inputting the environmental data of the target scene into UNREAL 4; a virtual scene identical to the target scene is created in UNREAL4 according to the environment data.
In other embodiments, images containing the environmental elements of the target scene are acquired by cameras from various angles, and the size data and position data of the environmental elements are calculated by the terminal from these images; this reduces the cost of manual measurement and calculation, reduces measurement error, and improves the fidelity of the virtual scene.
Therefore, by acquiring the environmental data in the target scene and building the virtual scene in the UNREAL4 according to the environmental data, the target scene is restored to the maximum extent, conditions are provided for relevant data statistics, and the accuracy of the data statistics is improved.
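The flow of steps S11 to S13 can be sketched as follows. This only illustrates the shape of the data handed to the engine, not the UNREAL4 API itself: `EnvironmentElement` and `build_virtual_scene` are hypothetical names standing in for spawning the corresponding meshes in the virtual scene.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentElement:
    name: str        # e.g. a wall, shelf or doorway in the target scene
    size: tuple      # (length, width, height) in metres
    position: tuple  # (x, y, z) in scene coordinates

def build_virtual_scene(elements):
    """Index the measured environment data by element name; a stand-in
    for placing the corresponding geometry in the UNREAL4 scene."""
    scene = {}
    for e in elements:
        scene[e.name] = {"size": e.size, "position": e.position}
    return scene

# Environment data as obtained by manual measurement (step S11).
elements = [
    EnvironmentElement("shop_front", (10.0, 0.3, 4.0), (0.0, 0.0, 0.0)),
    EnvironmentElement("escalator", (6.0, 1.2, 5.0), (15.0, 4.0, 0.0)),
]
scene = build_virtual_scene(elements)
print(sorted(scene))  # ['escalator', 'shop_front']
```

Each element carries exactly the size and position data the description names, so the virtual scene can mirror the target scene one-to-one.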
Referring to fig. 3, fig. 3 is a sub-flowchart of step S2 in fig. 1.
As shown in fig. 3, in some embodiments, a camera and a sensor are disposed in the target scene, and the step S2 includes: step S21, collecting characteristic data of a target to be detected in a target scene through a camera and a sensor in the target scene; step S22, acquiring characteristic data acquired by a camera and a sensor, and inputting the characteristic data into UNREAL 4; and step S23, establishing a target model corresponding to the target to be detected in UNREAL4 according to the characteristic data.
The characteristic data refers to identifying information about the target to be detected; the identifying information can be distinguishing characteristics such as gender, facial features, height, weight and body shape. The characteristic data is determined according to the relevant data that needs to be counted; for example, when the relevant data to be counted is the gender of the customers visiting a shop, the characteristic data is data on external characteristics, such as a customer's appearance and hair length, from which gender can be distinguished.
The target model is a model which is established in the UNREAL4 according to the collected characteristic data of the target to be detected and corresponds to the target to be detected. The external characteristics of the target model and the external characteristics of the target to be detected are kept consistent, and the behavior of the target model and the behavior of the target to be detected are kept synchronous in real time, so that the relevant data of the target to be detected can be obtained by counting the relevant data of the target model.
The camera is arranged in advance at a high position in the target scene and is used to acquire image data of the target to be detected; for example, a camera hung at the door of a mall store and facing out of the store can conveniently capture image data of customers arriving at and entering the store. The camera is connected to the mobile terminal in a wired or wireless manner and transmits the acquired image data to the mobile terminal for analysis, from which the characteristic data of the target to be detected is obtained.
The sensors include an infrared sensor, a weight sensor and the like. The weight sensor is arranged in advance at a low position in the target scene so that the weight data of the target to be detected can conveniently be collected; for example, the weight sensor is placed under a carpet at the entrance of the mall or store, and when a customer enters through the entrance and the camera detects that the customer has stepped on the carpet, the terminal activates the weight sensor corresponding to that carpet and collects the weight data of each customer entering the store in turn. Alternatively, when the camera detects that a user is staying somewhere in the mall or store, the terminal activates the weight sensor under the floor at the user's current position to collect the customer's weight data. The infrared sensor is arranged in the target scene in advance to detect the position movement of the target to be detected; for example, an infrared sensor placed beside the entrance of the mall or store can count the number of customers entering. The sensors are connected to the terminal in a wired or wireless manner and transmit the collected weight or quantity data to the terminal.
Specifically, a camera is arranged in the target scene to acquire image data of a target to be detected in the target scene; acquiring weight data or quantity data of a target to be detected in the target scene by arranging a sensor in the target scene, transmitting the acquired image data, weight data or quantity data to a terminal for analysis and processing, and processing by the terminal to obtain characteristic data of the target to be detected in the target scene;
respectively inputting the feature data of the target to be detected in the target scene into UNREAL4, establishing a target model in proportion to the target to be detected in the target scene in UNREAL4 according to the feature data, guiding the target model into the corresponding position in the virtual scene according to the position of the target to be detected in the target scene, and displaying the target model in the virtual scene.
In other embodiments, a plurality of three-dimensional scanners are arranged at the positions in the target scene where the target to be detected needs to be detected. The three-dimensional scanners are connected to the terminal in a wired or wireless manner and perform omni-directional scanning of any target to be detected that enters the target scene, so as to acquire lattice model data of the target; after receiving the lattice model data, the terminal inputs it into UNREAL4. UNREAL4 rapidly generates a target model in the same proportion as the target to be detected according to the lattice model data, imports the target model into the virtual scene, and displays it at the position in the virtual scene corresponding to the target to be detected. Rapid modeling is thereby achieved and modeling efficiency improved.
In other embodiments, the plurality of cameras may be arranged, and are respectively arranged at different positions of the target scene, so as to respectively obtain the characteristic data of the target to be detected from different angles, and input the collected characteristic data of the target to be detected to the terminal for integration, thereby obtaining more characteristic data of the target to be detected, perfecting the details of the target model, and facilitating the statistics and analysis of later-stage related data.
Therefore, the characteristic data of the target to be measured is collected in the target scene through the camera and the sensor, the target model corresponding to the target to be measured is established in the UNREAL4, the established target model is displayed in the virtual scene, the real scene is converted into the virtual scene, the data statistics process is more visual through the 3D model, and the collected data are more accurate.
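A minimal sketch of turning collected characteristic data into a proportional model record, as steps S21 to S23 describe. `build_target_model` and its field names are hypothetical; the real system generates a 3D mesh in UNREAL4 rather than a dictionary.

```python
def build_target_model(characteristic_data, scale=1.0):
    """Build a simple model record in proportion to the target to be
    detected; stands in for generating and importing a mesh in UNREAL4."""
    model = dict(characteristic_data)  # keep the identifying features
    model["model_height"] = characteristic_data["height_cm"] * scale
    # Place the model at the position of the target in the target scene.
    model["position"] = characteristic_data.get("position", (0.0, 0.0))
    return model

# Characteristic data as collected by camera and sensors (step S21/S22).
customer = {"id": 7, "height_cm": 172.0, "gender": "F", "position": (3.0, 1.0)}
model = build_target_model(customer)
print(model["model_height"], model["position"])  # 172.0 (3.0, 1.0)
```

Because the record copies the identifying features through, later statistics (for example, counting female customers) can filter directly on the models.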
Referring to fig. 4, fig. 4 is a sub-flowchart of step S3 in fig. 1.
As shown in fig. 4, in some embodiments, the motion data includes action data and position data, and the motion state includes an action and a position. The step S3 includes: step S31, acquiring action data and position data of the target to be detected in the target scene at a preset time interval through a camera in the target scene; step S32, comparing the action data collected at adjacent time intervals to obtain the action variable of the target to be detected, and comparing the position data collected at adjacent time intervals to obtain the position variable of the target to be detected; and step S33, inputting the action variable and the position variable into UNREAL4, and updating the action and position of the target model.
The motion data comprises action data and position data of the target to be detected. The action data refers to collected data containing action information of the target to be detected in the target scene, including data on the posture, orientation and the like of the target; for example, the walking posture and orientation of a customer obtained from collected image data of the customer walking in the mall. The position data refers to collected data containing position information of the target to be detected in the target scene; for example, the position of a customer in the mall is obtained by analyzing collected image data containing the customer and a certain landmark in the mall.
The motion state comprises the action and the position of the target model, the action refers to the action of the target model corresponding to the collected action data of the target to be detected, and the position refers to the position of the target model corresponding to the collected position data of the target to be detected in the target scene.
The preset time interval refers to a preset interval at which the camera captures images of the target to be detected; the smaller the preset time interval, the higher the action synchronization rate between the target model and the target to be detected. The preset time interval can be adjusted according to the image acquisition frequency of the camera. After the terminal generates the target model according to the collected characteristic data of the target to be detected, the camera begins to capture image data of the target to be detected at the preset time interval; for example, the preset time interval may be 0.2 s.
The action variable is the change in the action of the target to be detected, calculated from images of the target taken at adjacent time intervals; it includes an action change amount and an action change direction. For example, from images taken by a camera at adjacent time intervals, the action variable of a certain customer's leg may be calculated as being lifted by 10 cm.
The position variable refers to the change in the position of the target to be detected, calculated from images of the target taken at adjacent time intervals. The position variable can be calculated by comparing reference objects in the captured images of the target to be detected; it includes a position movement amount and a position movement direction. For example, from two store images taken by a camera at adjacent time intervals, the position variable of a certain customer may be calculated as having moved 20 cm forward relative to the store gate.
Specifically, the camera shoots an image of a target to be detected in a target scene when a preset time interval expires, the image is transmitted to the terminal in a wired or wireless mode, the terminal receives the image collected by the camera, obtains action data of the target to be detected according to the posture and the orientation of the target to be detected in the image, and calculates position data of the target to be detected according to an environment reference object around the target to be detected in the image of the target to be detected.
The terminal compares the image with an image shot when a last preset time interval expires, compares the action data of the target to be detected with the action data in the image shot last time, and calculates to obtain an action variable of the target to be detected; and comparing the position data of the target to be detected with the position data in the last shot image, and calculating to obtain the position variable of the target to be detected.
The calculated action variable and position variable are input into UNREAL4, the target model corresponding to the target to be detected captured in the two consecutive shots is looked up in UNREAL4, and the action and position of that target model are updated according to the action variable and position variable. The updated target model is then displayed in the virtual scene in place of the previous target model.
In other embodiments, a preset mark position is set in a target scene area shot by each camera, the preset mark position is searched in an image shot by the camera, the position of a target to be detected can be quickly positioned according to the preset mark position in the image, and the action variable and the position variable of the target to be detected can be quickly determined by taking the preset mark position as a reference point, so that the updating speed of a target model is increased.
In other embodiments, images of the target to be detected acquired by the cameras at different positions at different times can be used, the motion state of the target model is correspondingly updated in the virtual scene by detecting the time sequence of the target to be detected appearing in the images, and the motion track of the target model is calculated, so that the motion track of the target to be detected is obtained according to the motion track of the target model, and the method can be applied to pursuing a target suspect and fighting crimes.
Therefore, the image data of the target to be measured in the target scene is collected in real time, the motion data of the target to be measured is calculated according to the image data, and the motion state of the corresponding target model in the virtual scene is updated according to the motion data, so that the motion synchronization of the target model and the target to be measured is realized, the statistical data is convenient to obtain, the motion process of the target model is dynamically displayed in the virtual scene, and the statistical process of the data is more visual.
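The per-interval update of steps S31 to S33 amounts to differencing adjacent samples and applying the delta to the model. The sketch below is illustrative only: the dictionaries stand in for UNREAL4 target models, and `update_model` for the in-engine update.

```python
DT = 0.2  # preset capture interval in seconds (the 0.2 s example above)

def position_variable(prev_pos, curr_pos):
    """Position change between two images taken at adjacent intervals:
    (dx, dy) movement amount relative to a fixed scene reference."""
    return (curr_pos[0] - prev_pos[0], curr_pos[1] - prev_pos[1])

def update_model(model, action, curr_pos):
    """Append the new motion state and move the model by the computed
    position variable, mirroring the in-scene update in UNREAL4."""
    dx, dy = position_variable(model["position"], curr_pos)
    model["position"] = (model["position"][0] + dx, model["position"][1] + dy)
    model["states"].append((action, model["position"]))
    return model

# Two capture intervals: the customer steps 20 cm forward each time.
model = {"position": (0.0, 0.0), "states": []}
update_model(model, "step", (0.2, 0.0))
update_model(model, "step", (0.4, 0.0))
print(model["position"])  # (0.4, 0.0)
```

The accumulated `states` list is exactly the time-ordered motion process that step S4 later compares against the preset motion process.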
Referring to fig. 5, fig. 5 is a sub-flowchart of step S4 in fig. 1.
As shown in fig. 5, in one embodiment, the virtual scene includes a plurality of target models, and step S4 includes: step S41, arranging all motion states of each target model in the virtual scene in time order to obtain the motion process of each target model; step S42, comparing the motion processes of all target models with the preset motion process, and screening out the target models that match the preset motion process as condition models; and step S43, counting the number of condition models, and taking this number as the data related to the targets to be measured that follow the preset motion process in the target scene.
The motion process refers to the sequence of continuous motion states of the same target model over a certain period of time, where each motion state comprises the action and the position of the target model.
The preset motion process refers to a preset sequence of changes in the motion state of a target model, and is determined by the relevant data to be counted. It generally comprises a plurality of preset motion states, and a target model is considered to meet the preset motion process as long as that process is contained in the model's own motion process. For example, when the flow of customers entering a store is to be counted, the preset motion process may be the store-entry process formed by combining the motion states of a customer model moving from outside the store to inside it.
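The containment test described above can be sketched as an in-order subsequence check. The state encoding as `(action, position)` pairs and the labels are illustrative assumptions:

```python
# A target model "meets" the preset motion process if that process appears,
# in order (not necessarily contiguously), within the model's own sequence
# of motion states.

def contains_process(motion_process, preset_process):
    """True if preset_process occurs as an in-order subsequence."""
    it = iter(motion_process)
    return all(state in it for state in preset_process)  # consumes `it` as it scans

# Motion states as (action, position) pairs; store-entry preset process.
customer = [("walk", "outside"), ("walk", "door"), ("walk", "inside"), ("stand", "inside")]
enter_store = [("walk", "outside"), ("walk", "inside")]
contains_process(customer, enter_store)
```

The iterator-consuming `in` test keeps the check linear in the length of the motion process.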
The condition model refers to a target model, among all target models, that meets the statistical conditions. The statistical conditions are the screening conditions used to select data of statistical value from the collected data; there can be one or several, and they can be preset according to the content to be counted. For example, when counting the number of women among the customers entering a store, the first statistical condition is that the customer entered the store, and the second is that the customer is female.
Specifically, motion data of all targets to be detected in the target scene is collected over a period of time, and the motion state of each corresponding target model in the virtual scene is updated according to that motion data.
The motion states of each target model in the virtual scene during that period are then sorted in time order to obtain the motion process of each target model over the period.
The motion process of each target model over the period is compared with the preset motion process, and the target models whose motion process contains the preset motion process are screened out as condition models meeting the statistical conditions.
Finally, the number of condition models is counted and taken as the number of targets to be detected that performed the preset motion process in the target scene during that period.
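Steps S41–S43 can be sketched end to end as follows, under assumed data shapes (per-model lists of `(timestamp, state)` records; all names are illustrative):

```python
# S41: time-order each model's states; S42: match against the preset process;
# S43: count the matching "condition models".

def count_condition_models(models, preset):
    """models: {model_id: [(timestamp, state), ...]}; preset: [state, ...]."""
    count = 0
    for records in models.values():
        # S41: arrange motion states in time order to get the motion process.
        process = [state for _, state in sorted(records)]
        # S42: keep the model only if the preset process occurs in order.
        it = iter(process)
        if all(state in it for state in preset):
            count += 1  # S43: tally the condition models
    return count

models = {
    "customer_1": [(2, "inside"), (1, "outside")],  # entered the store
    "customer_2": [(1, "inside"), (2, "inside")],   # was already inside
}
count_condition_models(models, ["outside", "inside"])
```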
In a specific application scenario, a virtual scene of a shopping mall is built with UNREAL4, a character model corresponding to each customer is created in that virtual scene from the feature data collected for each customer entering the mall, and the motion state of each character model is updated in real time according to the customer's motion data in the mall.
The motion states of the character models in the virtual mall over a certain period of time are then selected to count the number of customers entering a particular store: the position change from outside the store to inside it is set as the preset motion process, the motion process of each character model over the period is compared with this preset motion process, the character models whose motion process contains it are taken as condition models, and counting the condition models gives the number of customers who entered that store during the selected period.
In other embodiments, for a motion process with little change in motion state, the preset motion process may be set to include only an initial motion state and a result motion state. The initial motion state is compared with all motion states of each target model, then the result motion state is compared with all of them, and the target models matching both are screened out. This screens out the target models conforming to the preset motion states as completely as possible and avoids omissions during screening. For example, over a certain period of time, the character models whose position changes from outside the store to inside it are found among all character models in the virtual scene, i.e. those whose initial motion state is positioned outside the store and whose result motion state is positioned inside it.
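The two-state variant can be sketched as below. One reading of "matching both" is that the initial state must occur somewhere before an occurrence of the result state; that ordering assumption, and all names, are illustrative:

```python
# A model qualifies if the preset initial state appears in its motion
# process and the preset result state appears after that occurrence.

def matches_endpoints(process, initial, result):
    """True if `initial` occurs and `result` occurs later in `process`."""
    if initial not in process:
        return False
    first = process.index(initial)
    return result in process[first + 1:]

walk_in = ["outside", "outside", "door", "inside"]
walk_out = ["inside", "door", "outside"]
matches_endpoints(walk_in, "outside", "inside")
```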
In other embodiments, the preset motion process may be a motion process with the same motion, that is, the number of target models whose duration of a certain motion in the virtual scene reaches the preset time is screened out. For example, customers who stay in a sleeping position for more than one hour in a store are screened out.
In other embodiments, the preset motion process may also be a motion process in the same position or in the same position area of the motion state, that is, a target model continuously located in a certain position or area is screened out. For example, customers who stay in the store for more than one hour are screened.
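The duration-based screenings above (same action held, or same position held, for at least a preset time) both reduce to finding the longest run of an identical state label. A minimal sketch, assuming states are sampled at a fixed interval (the sampling model and names are assumptions):

```python
# With states sampled every `interval_s` seconds, the dwell time in one
# position label is the length of its longest consecutive run; a model is
# screened out when this reaches the preset threshold (e.g. one hour).

def longest_dwell(positions, interval_s):
    """Longest run of an identical position label, in seconds."""
    best = run = 0
    for prev, curr in zip(positions, positions[1:]):
        run = run + interval_s if prev == curr else 0
        best = max(best, run)
    return best

# Samples every 30 minutes: three consecutive "store" samples span one hour.
samples = ["outside", "store", "store", "store", "outside"]
longest_dwell(samples, interval_s=1800)
```

The same function applies to action labels for the same-action screening.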
In other embodiments, the preset motion process may also be a motion process with the same motion at the same position, that is, a target model for performing a certain motion at a certain position is screened out. For example, all customers who are looking up at the door of the store and continuously pay attention to a certain product are screened out.
In this way, the motion processes of all target models are screened against the preset motion process to obtain the models that conform to it, and the targets to be detected are thereby screened and counted to obtain their relevant data in the target scene. This improves the efficiency of data analysis, makes the statistics more intuitive and easier to accept, and makes the counted data more accurate.
The method for performing target statistics based on UNREAL4 provided by the present invention can be implemented in hardware or firmware, or as software or computer code that can be stored in a computer-readable storage medium such as a CD, ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored on a remote recording medium or a non-transitory machine-readable medium and downloaded over a network into a local recording medium, so that the method described herein can be executed by a general-purpose computer or a special-purpose processor, or by programmable or dedicated hardware such as an ASIC or FPGA, as software stored on a recording medium. As is understood in the art, the computer, processor, microprocessor, controller, or programmable hardware includes memory components, e.g. RAM, ROM, flash memory, etc., that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the method for performing target statistics based on UNREAL4 described herein. In addition, when a general-purpose computer accesses code for implementing the processing shown herein, execution of that code transforms the general-purpose computer into a special-purpose computer for performing that processing.
The computer readable storage medium may be a solid state memory, a memory card, an optical disc, etc. The computer-readable storage medium stores program instructions for the computer to call and execute a method for performing target statistics based on UNREAL4 shown in fig. 1 to 5.
Referring to fig. 6, fig. 6 is a block diagram of a system 100 for performing target statistics based on UNREAL4 according to an embodiment of the present invention.
As shown in fig. 6, the present invention further provides a system 100 for performing target statistics based on UNREAL4, where the system 100 for performing target statistics based on UNREAL4 includes: a scene building module 10, configured to build a virtual scene corresponding to the target scene in UNREAL 4; the model establishing module 20 is used for acquiring the characteristic data of the target to be detected in the target scene and establishing a target model in UNREAL4 according to the characteristic data of the target to be detected; the model updating module 30 is configured to collect motion data of the target to be detected, and continuously update a motion state of a corresponding target model in the virtual scene according to the motion data of the target to be detected, so as to obtain a motion process of the target model; and the model counting module 40 is configured to count relevant data of the target model in which a preset motion process occurs in the virtual scene according to the motion process of the target model.
Referring to fig. 7, fig. 7 is a block diagram of the scene building module 10 in fig. 6.
In some embodiments, the scene building module 10 comprises: an environment data obtaining module 11, configured to obtain environment data of a target scene; an environment data input module 12 for inputting environment data of a target scene into the UNREAL 4; and the virtual scene building module 13 is configured to build a virtual scene corresponding to the target scene in the UNREAL4 according to the environment data.
Specifically, the environment data of the target scene is obtained through manual measurement and calculation and is input into a terminal on which UNREAL4 is installed. After the terminal acquires the environment data of the target scene, it starts UNREAL4 and inputs the environment data into it; a virtual scene identical to the target scene is then created in UNREAL4 according to the environment data.
Therefore, by acquiring the environmental data in the target scene and building the virtual scene in the UNREAL4 according to the environmental data, the target scene is restored to the maximum extent, conditions are provided for relevant data statistics, and the accuracy of the data statistics is improved.
Referring to fig. 8, fig. 8 is a block diagram of the model building module 20 in fig. 6.
In some embodiments, a camera and a sensor are disposed in the target scene, and the model building module 20 includes: the characteristic data acquisition module 21 is used for acquiring characteristic data of a target to be detected in a target scene through a camera and a sensor in the target scene; the characteristic data input module 22 is used for acquiring characteristic data acquired by the camera and the sensor and inputting the characteristic data into UNREAL 4; and the target model establishing module 23 is used for establishing a target model corresponding to the target to be measured in the UNREAL4 according to the characteristic data.
Specifically, a camera arranged in the target scene acquires image data of the target to be detected, and a sensor arranged in the scene acquires the target's weight data or quantity data. The collected image, weight, or quantity data is transmitted to the terminal for analysis, and the terminal processes it to obtain the feature data of the target to be detected in the target scene.
The feature data of each target to be detected is then input into UNREAL4, a target model proportional to the target is established in UNREAL4 according to the feature data, and the model is imported into the corresponding position in the virtual scene according to the target's position in the target scene and displayed there.
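One way to read "in proportion" is a fixed scene-to-virtual scale applied to both the model's size and its placement. A hypothetical sketch — the scale factor, field names, and flat record shape are all assumptions for illustration:

```python
# Feature data measured in the target scene is scaled by an assumed
# scene-to-virtual ratio before the model is placed in the virtual scene.

SCALE = 0.5  # assumed virtual-units per metre for this scene

def build_target_model(target_id, height_m, position_m):
    """Build a proportional model record from measured feature data."""
    return {
        "id": target_id,
        "height": height_m * SCALE,                        # proportional size
        "position": tuple(c * SCALE for c in position_m),  # matching placement
    }

model = build_target_model("customer_7", 1.8, (10.0, 4.0))
```

In a real UNREAL4 pipeline the record would instead drive the engine's own actor-spawning and transform APIs; the dictionary here only illustrates the proportionality.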
Therefore, the characteristic data of the target to be measured is collected in the target scene through the camera and the sensor, the target model corresponding to the target to be measured is established in the UNREAL4, the established target model is displayed in the virtual scene, the real scene is converted into the virtual scene, the data statistics process is more visual through the 3D model, and the collected data are more accurate.
Referring to fig. 9, fig. 9 is a block diagram illustrating a structure of the model update module 30 shown in fig. 6.
In some embodiments, the motion data includes motion data and position data, the motion state includes motion and position, the model update module 30 includes: the motion data acquisition module 31 is configured to acquire motion data and position data of a target to be detected in a target scene according to a preset time interval by using a camera in the target scene; the motion variable calculation module 32 is configured to perform comparison calculation according to motion data acquired at adjacent time intervals to obtain a motion variable of the target to be measured, and perform comparison calculation according to position data acquired at adjacent time intervals to obtain a position variable of the target to be measured; and a model motion update module 33, configured to input the motion variable and the position variable into the UNREAL4, and update the motion and the position of the target model.
Specifically, the camera captures an image of the target to be detected in the target scene each time the preset time interval expires and transmits it to the terminal in a wired or wireless manner. The terminal receives the image, obtains the action data of the target from its posture and orientation in the image, and calculates the target's position data from the environmental reference objects around it in the image. The terminal then compares this image with the one captured at the previous interval: comparing the target's action data with that of the previous image yields the action variable, and comparing the position data yields the position variable.
The calculated action variable and position variable are input into UNREAL4, the target model corresponding to the target captured in the two consecutive shots is located in UNREAL4, and the action and position of that model are updated according to the variables. The updated target model is then displayed in the virtual scene in place of the previous one.
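The per-interval update cycle can be sketched as follows, under assumed encodings (each capture reduced to an `(action, position)` pair; names are illustrative):

```python
# Comparing the capture at the current interval with the previous one gives
# the action variable (did the action change?) and the position variable
# (displacement); the model record is then updated to the new state.

def frame_variables(prev, curr):
    """Return (action_changed, position_delta) between two captures."""
    (a0, p0), (a1, p1) = prev, curr
    delta = (p1[0] - p0[0], p1[1] - p0[1])
    return a1 != a0, delta

def apply_update(model, capture):
    """Update a target model from the latest capture; return the variables."""
    prev = (model["action"], model["position"])
    action_changed, delta = frame_variables(prev, capture)
    model["action"], model["position"] = capture
    return action_changed, delta

model = {"action": "stand", "position": (0.0, 0.0)}
changed, delta = apply_update(model, ("walk", (0.5, 0.0)))
```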
In this way, image data of the target to be detected in the target scene is collected in real time, the target's motion data is calculated from the image data, and the motion state of the corresponding target model in the virtual scene is updated accordingly. This keeps the target model synchronized with the target to be detected, makes the statistical data easy to obtain, and dynamically displays the motion process of the target model in the virtual scene, so that the statistical process is more intuitive.
Referring to fig. 10, fig. 10 is a block diagram of the model statistics module 40 shown in fig. 6.
In some embodiments, the virtual scene includes a plurality of object models, and the model statistics module 40 includes: the motion state combination module 41 is configured to arrange all motion states of each target model in the virtual scene according to a time sequence to obtain motion processes of all target models; the conditional model screening module 42 is configured to compare the motion processes of all the target models with the preset motion process, and screen out a target model that meets the preset motion process as a conditional model; and the to-be-detected target counting module 43 is configured to count the number of the condition models, where the number is used as the relevant data of the to-be-detected target performing the preset motion process in the target scene.
Specifically, motion data of all targets to be detected in the target scene is collected over a period of time, and the motion state of each corresponding target model in the virtual scene is updated according to that motion data. The motion states of each target model during the period are sorted in time order to obtain its motion process, that process is compared with the preset motion process, and the target models whose motion process contains the preset motion process are screened out as condition models meeting the statistical conditions. The number of condition models is counted and taken as the number of targets to be detected that performed the preset motion process in the target scene during the period.
In a specific application scenario, a virtual scene of a shopping mall is built with UNREAL4, a character model corresponding to each customer is created in that virtual scene from the feature data collected for each customer entering the mall, and the motion state of each character model is updated in real time according to the customer's motion data in the mall.
The motion states of the character models in the virtual mall over a certain period of time are then selected to count the number of customers entering a particular store: the position change from outside the store to inside it is set as the preset motion process, the motion process of each character model over the period is compared with this preset motion process, the character models whose motion process contains it are taken as condition models, and counting the condition models gives the number of customers who entered that store during the selected period.
As shown in fig. 6, in some embodiments, the system 100 for performing target statistics based on UNREAL4 further includes a storage module 50 for storing the feature data, the environment data, and the statistical related data acquired by the terminal.
The storage module 50 may include a high-speed random access memory, and may further include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
In addition, the invention also provides a terminal, which can be a mobile phone or a computer; the terminal uses the method for carrying out the target statistics based on UNREAL4, and the terminal comprises the system 100 for carrying out the target statistics based on UNREAL4 and is used for carrying out statistics on the related data of the target to be measured in the target scene.
Specifically, the storage module 50, the scene building module 10, the model building module 20, the model updating module 30 and the model statistics module 40 in the system 100 for performing target statistics based on the UNREAL4 are all arranged in the terminal; the scene building module 10 is connected with the model building module 20 in a wired or wireless manner, and is used for building a target model in a built virtual scene; the model establishing module 20 is connected with the model updating module 30 in a wired or wireless manner, and is used for updating the action state of the target model according to the collected motion data of the target to be detected; the model updating module 30 is connected with the model counting module 40 in a wired or wireless manner, and is configured to count the number of conditional models that meet a preset motion process in the target model. The storage module 50 is connected with the scene building module 10, the model building module 20, the model updating module 30 and the model counting module 40 in a wired or wireless manner, and is used for storing data required by each module and data obtained through counting.
The system 100 for performing target statistics based on UNREAL4 uses the method for performing target statistics based on UNREAL4 described above; the functions performed by the system 100 correspond to the steps of that method, and for a more detailed description reference may be made to the relevant content of the method above.
According to the method and system for performing target statistics based on UNREAL4, a virtual scene corresponding to the target scene is built in UNREAL4, a target model corresponding to each target to be detected is created in the virtual scene, the motion state of the target model is continuously updated according to the motion data of the target to be detected, and finally the relevant data of the targets to be detected is counted by counting the target models that perform the preset motion process. Establishing the virtual scene and the target models makes the data statistics process more intuitive, the statistical principle easy to understand, and the data analysis process simpler, and the statistical information, displayed centrally through the models, is easier to accept.
The foregoing is a description of embodiments of the present invention. It should be noted that those skilled in the art may make modifications and improvements without departing from the principles of the embodiments of the present invention, and such modifications and improvements are also intended to fall within the scope of the present invention.

Claims (10)

1. A method for carrying out target statistics based on UNREAL4 is used for carrying out statistics on related data of a target to be measured in a target scene, and is characterized by comprising the following steps:
step S1, a virtual scene corresponding to the target scene is built in UNREAL 4;
step S2, collecting the characteristic data of the target to be detected in the target scene, and establishing a target model in UNREAL4 according to the characteristic data of the target to be detected;
step S3, acquiring motion data of the target to be detected, and continuously updating the motion state of the corresponding target model in the virtual scene according to the motion data of the target to be detected to obtain the motion process of the target model;
and step S4, according to the motion process of the target model, counting the relevant data of the target model with the preset motion process in the virtual scene.
2. The method for performing target statistics based on UNREAL4 of claim 1, wherein the step S1 comprises:
step S11, acquiring environmental data of a target scene;
step S12, inputting the environment data of the target scene into UNREAL 4;
and step S13, building a virtual scene corresponding to the target scene in UNREAL4 according to the environment data.
3. The method for performing target statistics based on UNREAL4 as claimed in claim 2, wherein the target scene is provided with a camera and a sensor, the step S2 includes:
step S21, collecting characteristic data of a target to be detected in a target scene through a camera and a sensor in the target scene;
step S22, acquiring characteristic data acquired by a camera and a sensor, and inputting the characteristic data into UNREAL 4;
and step S23, establishing a target model corresponding to the target to be detected in UNREAL4 according to the characteristic data.
4. The UNREAL 4-based method for performing target statistics, as claimed in claim 3, wherein the motion data includes motion data and position data, the motion status includes motion and position, the step S3 includes:
step S31, acquiring motion data and position data of a target to be detected in a target scene according to a preset time interval by a camera in the target scene;
step S32, comparing and calculating the action data collected according to the adjacent time intervals to obtain the action variable of the target to be measured, and comparing and calculating the position data collected according to the adjacent time intervals to obtain the position variable of the target to be measured;
step S33, inputting the motion variable and the position variable into UNREAL4, and updating the motion and position of the target model.
5. The method for performing target statistics based on UNREAL4 as claimed in claim 1, wherein the virtual scene includes a plurality of target models, the step S4 includes:
step S41, arranging all motion states of each target model in the virtual scene according to a time sequence to obtain motion processes of all target models;
step S42, comparing the motion processes of all target models with the preset motion process, and screening out the target models which accord with the preset motion process as conditional models;
and step S43, counting the number of the condition models, and taking the number as the relevant data of the target to be measured in the preset motion process in the target scene.
6. A system for carrying out target statistics based on UNREAL4 is characterized in that the system for carrying out target statistics based on UNREAL4 comprises:
the scene building module is used for building a virtual scene corresponding to the target scene in UNREAL 4;
the model establishing module is used for acquiring the characteristic data of the target to be detected in the target scene and establishing a target model in UNREAL4 according to the characteristic data of the target to be detected;
the model updating module is used for acquiring motion data of the target to be detected, and continuously updating the motion state of the corresponding target model in the virtual scene according to the motion data of the target to be detected to obtain the motion process of the target model;
and the model counting module is used for counting the relevant data of the target model with a preset motion process in the virtual scene according to the motion process of the target model.
7. The UNREAL 4-based system for performing target statistics, as claimed in claim 6, wherein the scene building module comprises:
the environment data acquisition module is used for acquiring environment data of a target scene;
the environment data input module is used for inputting the environment data of the target scene into UNREAL 4;
and the virtual scene building module is used for building a virtual scene corresponding to the target scene in UNREAL4 according to the environment data.
8. The UNREAL 4-based target statistics system according to claim 7, wherein the target scene is provided with a camera and a sensor, the model building module comprises:
the characteristic data acquisition module is used for acquiring characteristic data of a target to be detected in a target scene through a camera and a sensor in the target scene;
the characteristic data input module is used for acquiring characteristic data acquired by the camera and the sensor and inputting the characteristic data to UNREAL 4;
and the target model establishing module is used for establishing a target model corresponding to the target to be detected in UNREAL4 according to the characteristic data.
9. The UNREAL 4-based system for performing goal statistics, according to claim 8, wherein the motion data includes motion data and location data, the motion status includes motion and location, and the model update module includes:
the motion data acquisition module is used for acquiring motion data and position data of a target to be detected in a target scene according to a preset time interval through a camera in the target scene;
the motion variable calculation module is used for carrying out comparison calculation according to the motion data acquired at adjacent time intervals to obtain the motion variable of the target to be detected, and carrying out comparison calculation according to the position data acquired at adjacent time intervals to obtain the position variable of the target to be detected;
and the model action updating module is used for inputting the action variable and the position variable into UNREAL4 and updating the action and the position of the target model.
10. The UNREAL 4-based system for performing object statistics, according to claim 7, wherein the virtual scene includes a plurality of object models, the model statistics module includes:
the motion state combination module is used for arranging all motion states of each target model in the virtual scene according to a time sequence to obtain motion processes of all target models;
the conditional model screening module is used for comparing the motion processes of all the target models with the preset motion process and screening out the target models which accord with the preset motion process as conditional models;
and the to-be-detected target counting module is used for counting the number of the condition models and taking the number as the related data of the to-be-detected target in the preset motion process in the target scene.
CN201911291326.3A 2019-12-13 2019-12-13 Method and system for carrying out target statistics based on UNREAL4 Pending CN111078751A (en)

Priority application: CN201911291326.3A, filed 2019-12-13, with priority date 2019-12-13.

Publication: CN111078751A, published 2020-04-28.


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150507A (en) * 2020-09-29 2020-12-29 Xiamen Huili Weiye Technology Co., Ltd. Method and system for synchronously reproducing object posture and displacement in a 3D model
CN112150507B (en) * 2020-09-29 2024-02-02 Xiamen Huili Weiye Technology Co., Ltd. 3D model synchronous reproduction method and system for object posture and displacement
CN114006894A (en) * 2020-12-30 2022-02-01 Wanyi Technology Co., Ltd. Data processing system, method, electronic device, and computer storage medium
CN114006894B (en) * 2020-12-30 2023-11-14 Shenzhen Wanyi Digital Technology Co., Ltd. Data processing system, method, electronic device, and computer storage medium
CN112854739A (en) * 2021-01-04 2021-05-28 Haimen Payuan Road and Bridge Construction Co., Ltd. Automatic control and regulation system for bottom plate short wall formwork
CN112854739B (en) * 2021-01-04 2022-06-03 Haimen Payuan Road and Bridge Construction Co., Ltd. Automatic control and regulation system for bottom plate short wall formwork

Similar Documents

Publication Publication Date Title
US11501523B2 (en) Goods sensing system and method for goods sensing based on image monitoring
CN108509896B (en) Trajectory tracking method and device and storage medium
US9224037B2 (en) Apparatus and method for controlling presentation of information toward human object
CN105654512B (en) Target tracking method and device
US8724845B2 (en) Content determination program and content determination device
US7454216B2 (en) In-facility information provision system and in-facility information provision method
CN111078751A (en) Method and system for carrying out target statistics based on UNREAL4
CN110874583A (en) Passenger flow statistics method and device, storage medium and electronic equipment
WO2014050518A1 (en) Information processing device, information processing method, and information processing program
CN111047621B (en) Target object tracking method, system, equipment and readable medium
US20150092981A1 (en) Apparatus and method for providing activity recognition based application service
JP6590609B2 (en) Image analysis apparatus and image analysis method
CN111160243A (en) Passenger flow volume statistical method and related product
US20220277463A1 (en) Tracking dynamics using a computerized device
CN108734502A (en) Data statistics method and system based on user location
CN109766755A (en) Face recognition method and related product
JP6779410B2 (en) Video analyzer, video analysis method, and program
JP3655618B2 (en) Pedestrian age determination device, walking state / pedestrian age determination method and program
CN106504227B (en) People counting method and system based on depth image
CN111126288B (en) Target object attention calculation method, target object attention calculation device, storage medium and server
CN111178113B (en) Information processing method, device and storage medium
CN114930319A (en) Music recommendation method and device
CN111626265A (en) Multi-camera pedestrian identification method and device and computer readable storage medium
CN112561987B (en) Personnel position display method and related device
CN116246299A (en) Intelligent recognition system for head-down phone users combining target detection and posture recognition technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230710

Address after: A601, Zhongke Naneng Building, No. 06 Yuexing 6th Road, Gaoxin District Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518051

Applicant after: Shenzhen Wanyi Digital Technology Co., Ltd.

Address before: 519000 room 105-24914, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province (centralized office area)

Applicant before: Wanyi Technology Co., Ltd.