CN115619867B - Data processing method, device, equipment and storage medium - Google Patents

Data processing method, device, equipment and storage medium

Info

Publication number
CN115619867B
CN115619867B
Authority
CN
China
Prior art keywords
frame
target object
motion information
matrix
variance
Prior art date
Legal status
Active
Application number
CN202211445032.3A
Other languages
Chinese (zh)
Other versions
CN115619867A (en)
Inventor
郑强
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202211445032.3A
Publication of CN115619867A
Application granted
Publication of CN115619867B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a data processing method, a device, a piece of equipment, a storage medium and a program product. The method comprises the following steps: based on the modeling features of first objects in a virtual scene, performing object type identification processing on the first objects to obtain the object type of each first object, and taking the first objects that do not belong to a static object type as target objects; performing state change processing based on a first position of a target object corresponding to a first frame in the virtual scene and first motion information of the target object corresponding to the first frame, to obtain a second position of the target object corresponding to a second frame and second motion information of the target object corresponding to the second frame; and when the second position of the target object corresponding to the second frame is abnormal, smoothing the second position of the target object based on the occlusion-related information of the target object and the correction coefficient corresponding to the object type, to obtain a third position of the target object, and replacing the second position with the third position. By means of the method and the device, the position prediction accuracy of the object can be improved.

Description

Data processing method, device, equipment and storage medium
Technical Field
The present application relates to image processing technologies, and in particular, to a data processing method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Display technologies based on graphics processing hardware have expanded the environments that can be perceived and the channels through which information is acquired, in particular the multimedia technology of virtual scenes, which, with the help of human-computer interaction engine technology, can realize diversified interactions between virtual objects controlled by users or by artificial intelligence according to actual application requirements.
Virtual scenes involve target position prediction scenarios: for example, after a virtual mobile prop is launched, its course may change as the player's position changes. To predict the target's position effectively, the object must be accurately located in the virtual scene; however, the object may be occluded in the virtual scene, which makes accurate target position prediction difficult.
Disclosure of Invention
Embodiments of the present application provide a data processing method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product, which can improve data processing accuracy of an object, thereby improving target position prediction accuracy.
The technical scheme of the embodiment of the application is realized as follows:
an embodiment of the present application provides a data processing method, including:
based on the modeling characteristics of each first object in the virtual scene, carrying out object type identification processing on each first object to obtain the object type of each first object, and taking the first object which does not belong to the static object type as a target object;
performing state change processing based on a first position of each target object corresponding to a first frame in the virtual scene and first motion information of each target object corresponding to the first frame to obtain a second position of each target object corresponding to a second frame and second motion information of each target object corresponding to the second frame;
the second frame is a next frame adjacent to the first frame, the first position of each target object is obtained by performing image feature recognition processing based on a third position on the first frame of the virtual scene, the third position is a position of each target object corresponding to a third frame, the third frame is a previous frame adjacent to the first frame, and the second motion information and the first motion information include occlusion related information of the target object;
and when the second position of the target object corresponding to the second frame is abnormal, smoothing the second position of the target object based on the occlusion-related information of the target object and the correction coefficient corresponding to the object type, to obtain a position of the target object, and replacing the second position with the obtained position.
An embodiment of the present application provides a data processing apparatus, including:
the type module is used for carrying out object type identification processing on each first object based on the modeling characteristics of each first object in the virtual scene to obtain the object type of each first object, and taking the first objects which do not belong to the static object type as target objects;
a state module, configured to perform state change processing based on a first position of each target object in the virtual scene, where the first position corresponds to a first frame, and first motion information of each target object, where the first position corresponds to the first frame, to obtain a second position of each target object, where the second position corresponds to a second frame, and second motion information of each target object, where the second position corresponds to the second frame; the second frame is a next frame adjacent to the first frame, the first position of each target object is obtained by performing image feature recognition processing based on a third position on the first frame of the virtual scene, the third position is a position of each target object corresponding to a third frame, the third frame is a previous frame adjacent to the first frame, and the second motion information and the first motion information include occlusion related information of the target object;
and the position module is used for, when the second position of the target object corresponding to the second frame is abnormal, smoothing the second position of the target object based on the occlusion-related information of the target object and the correction coefficient corresponding to the object type, to obtain a position of the target object and replace the second position with that position.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the data processing method provided by the embodiment of the application when the processor executes the executable instructions stored in the memory.
The embodiment of the application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the data processing method provided by the embodiments of the present application.
The embodiments of the present application provide a computer program product, which includes a computer program or instructions that, when executed by a processor, implement the data processing method provided by the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
the image characteristics and modeling characteristics of the objects in the virtual scene are fused for comprehensive processing, specifically, the positions of the objects are determined by utilizing image characteristic identification processing, the object types are judged from the modeling characteristics, each object is subjected to relation binding, and the motion track of each object is marked and predicted.
Drawings
Fig. 1 is a schematic view of an application mode of a data processing method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device to which a data processing method is applied according to an embodiment of the present application;
FIG. 3A is a schematic flow chart of a data processing method according to an embodiment of the present disclosure;
fig. 3B is another schematic flow chart of a data processing method provided in an embodiment of the present application;
fig. 3C is a schematic flowchart of a data processing method provided in the embodiment of the present application;
FIG. 4 is a schematic interface diagram of a data processing method provided in an embodiment of the present application;
FIG. 5 is a logic framework diagram of a data processing method provided by an embodiment of the present application;
FIG. 6 is a flow chart of motion detection of a data processing method provided by an embodiment of the present application;
fig. 7 is a schematic diagram of smoothing processing of a data processing method according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the attached drawings, the described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, references to the terms "first \ second \ third" are only used to distinguish similar objects and do not denote a particular order; it is to be understood that "first \ second \ third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Virtual scenes, which are different from real world scenes output by devices, can form visual perception of the virtual scenes through naked eyes or assistance of the devices, such as two-dimensional images output by a display screen, and three-dimensional images output by stereoscopic display technologies such as stereoscopic projection, virtual reality and augmented reality technologies; in addition, various real-world-simulated perceptions such as auditory perception, tactile perception, olfactory perception, motion perception and the like can be formed through various possible hardware.
2) "In response to" indicates the condition or state on which an executed operation depends; when the dependent condition or state is satisfied, the one or more executed operations may be performed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are executed.
3) A client, an application program running in the terminal for providing various services, such as a game client, etc.
The basic principle of position prediction schemes in the related art is to perform target detection using the difference between the background and the target in an adjacent-frame image sequence. For example, in a dynamic scene the background moves slowly while the target moves quickly. Therefore, when the difference of two or more frames of images is computed, the difference after background subtraction is very small or zero, while the value after subtraction of the dynamic target is larger; binarization with a set threshold then detects the dynamic target. However, this basic frame-difference method has problems: missed detections easily occur when the target moves slowly, overlapped parts of the target cannot be detected, and holes appear.
Position prediction schemes of the related art perform well on conventional characters and on objects with little occlusion, but in game position prediction the characters rotate in varied ways and are often occluded, which makes game character position prediction very difficult.
Embodiments of the present application provide a data processing method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product, which can improve position prediction accuracy of an object. An exemplary application of the electronic device provided in the embodiments of the present application is described below, and the electronic device provided in the embodiments of the present application may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device, and a virtual reality hardware device).
In order to facilitate easier understanding of the data processing method provided by the embodiment of the present application, an exemplary implementation scenario of the data processing method provided by the embodiment of the present application is first described, and a virtual scenario may be completely output based on terminal output or based on cooperation of a terminal and a server.
In some embodiments, the virtual scene may be an environment for game characters to interact in; for example, game characters may battle in the virtual scene, and the two parties may interact by controlling the actions of their virtual objects, allowing users to relieve the stress of daily life during the game.
In an implementation scenario, referring to fig. 1, fig. 1 is a schematic diagram of an application mode of a data processing method provided in an embodiment of the present application, and the application mode is applied to a terminal 400 and a server 200, where the terminal 400 and the server 200 communicate through a network, and is generally applicable to an application mode that depends on a computing power of the server 200 to complete virtual scene computation and output a virtual scene at the terminal 400.
As an example, a user logs in a client (e.g., a network version game application) operated by the terminal 400 through an account, during the game operation, a plurality of objects (first objects) are displayed in a virtual scene, including a moving object (target object) and a stationary object, the terminal 400 sends a real-time game frame to the server 200 along with the movement of the moving object, the server 200 predicts the position of each target object in each game frame in real time through the data processing method provided in the embodiment of the present application, and returns the position of each target object in each frame to the terminal 400, and a position mark of each target object is presented in the human-computer interaction interface of the terminal 400.
As an example, a user logs in a client (e.g., a network version game application) operated by the terminal 400 through an account, during the game operation, a plurality of objects (first objects) are displayed in a virtual scene, including a moving object (target object) and a stationary object, and along with the movement of the moving object, the terminal 400 predicts the position of each target object in each game frame in real time through the data processing method provided by the embodiment of the present application, and presents a position mark of each target object in the human-computer interaction interface of the terminal 400.
In some embodiments, the terminal 400 may implement the data processing method provided by the embodiments of the present application by running a computer program. For example, the computer program may be a native program or software module in an operating system; a Native application (APP), i.e., a program that needs to be installed in an operating system to run, such as a game APP (i.e., the above-mentioned client) or a live-streaming APP; an applet, i.e., a program that only needs to be downloaded into a browser environment to run; or a game applet that can be embedded in any APP. In general, the computer program may be any form of application, module, or plug-in.
The embodiments of the present application may be implemented by means of cloud technology, which refers to a hosting technology that unifies a series of resources, such as hardware, software, and network, in a wide area network or a local area network to implement the calculation, storage, processing, and sharing of data.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied on the basis of the cloud computing business model; it can form a resource pool that is used on demand, and is flexible and convenient. Cloud computing technology will become an important support, since the background services of technical network systems require large amounts of computing and storage resources.
As an example, the server 200 may be an independent physical server, may be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device applying a data processing method provided in an embodiment of the present application, and is described by taking the electronic device as an example, where a terminal 400 shown in fig. 2 includes: at least one processor 410, memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 440 in FIG. 2.
The processor 410 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450 optionally includes one or more storage devices physically located remote from processor 410.
The memory 450 can include volatile memory, nonvolatile memory, or both. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 450 described in the embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for reaching other computing devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the data processing apparatus provided in the embodiments of the present application may be implemented in software, and fig. 2 shows a data processing apparatus 455 stored in the memory 450, which may be software in the form of programs and plug-ins, and includes: a type module 4551, a status module 4552 and a location module 4553, which are logical and thus may be arbitrarily combined or further divided according to the functions implemented, and the functions of the respective modules will be described hereinafter.
The data processing method provided by the embodiment of the present application will be described in conjunction with exemplary applications and implementations of the terminal provided by the embodiment of the present application.
Referring to fig. 3A, fig. 3A is a schematic flowchart of a data processing method according to an embodiment of the present application, and will be described with reference to step 101 to step 103 shown in fig. 3A.
In step 101, based on the modeling features of each first object in the virtual scene, an object type identification process is performed on each first object to obtain an object type of each first object, and the first objects which do not belong to the static object type are used as target objects.
In some embodiments, referring to fig. 3B, in step 101, performing an object type identification process on each first object based on the modeling feature of each first object in the virtual scene to obtain an object type of each first object, which may be implemented by performing the following steps 1011 to 1015 for each first object.
In step 1011, an object image corresponding to the first object is acquired in the second frame.
As an example, the second frame is a current frame, and an object image including the first object is obtained from the current frame, so that modeling feature statistics may be subsequently performed on the object image of the first object to distinguish the type of the first object.
In step 1012, a modeling feature corresponding to the first object is extracted from the object image, and the modeling feature is subjected to statistical processing to obtain a statistical value.
As an example, the modeling feature may be the total number of calls to open graphics interfaces, which are standard application program interfaces for 3D modeling, or the number of types of open graphics interfaces called. In a Unity or UE three-dimensional game, game characters are created by three-dimensional modeling, but the user sees and operates a two-dimensional game screen, so motion detection needs to be performed in combination with the graphical features of the game.
In step 1013, when the statistics of the first object are greater than a first statistical threshold, the object type of the first object is identified as a moving object type.
In step 1014, the object type of the first object is identified as a static object type when the statistics of the first object are less than a second statistical threshold, wherein the second statistical threshold is less than the first statistical threshold.
In step 1015, when the statistics of the first object are not less than the second statistical threshold and not greater than the first statistical threshold, the object type of the first object is identified as the pending type.
As an example, the pending type and the moving object type do not belong to the static object type. Typically the second statistical threshold is a value close to 0, so that a first object whose statistical value is smaller than the second statistical threshold is usually of the static object type, and the first statistical threshold is a higher value, so that a first object whose statistical value is larger than the first statistical threshold is usually of the moving object type; a statistical value falling within the interval between the two thresholds corresponds to the pending type.
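As a minimal sketch (in Python) of the two-threshold decision of steps 1013 to 1015, where the function name and the concrete threshold values are illustrative assumptions rather than values from the embodiment:

```python
# Illustrative two-threshold object-type decision; thresholds are assumed.
def classify_object_type(statistic: float,
                         first_threshold: float = 100.0,   # assumed value
                         second_threshold: float = 1.0):   # assumed value
    """Map a modeling-feature statistic to an object type."""
    if statistic > first_threshold:
        return "moving"        # moving object type
    if statistic < second_threshold:
        return "static"        # static object type
    return "pending"           # pending type: still treated as a target object

# Only non-static objects become target objects for tracking.
objects = {"hero": 250.0, "background": 0.0, "crate": 40.0}
targets = [name for name, s in objects.items()
           if classify_object_type(s) != "static"]
print(targets)  # ['hero', 'crate']
```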
In some embodiments, prior to performing step 102, the following is performed for each target object: acquiring a search range which takes the third position as a center and accords with a set area in the first frame; performing sliding window sampling processing on a search range in a first frame to obtain a plurality of candidate sampling images; and determining a sampling image matched with the target object from the candidate sampling images based on the image characteristics of the candidate sampling images, and taking the center of the sampling image as the first position of the target object in the first frame.
As an example, take an object A. The third frame is the previous frame adjacent to the first frame, and the position of object A in the third frame is the third position. Since a moving object does not travel a great distance between adjacent frames, the surrounding region of object A (i.e., a region centered on the third position) is marked in the third frame, and this surrounding region is reused in the first frame to obtain the search range of the first frame; that is, a search range centered on the third position and conforming to a set area is obtained in the first frame, sliding window sampling processing is performed in the search range, and matching is performed based on the obtained candidate sampling images to obtain the first position of the object in the first frame.
In some embodiments, the above-mentioned performing sliding window sampling processing on the search range in the first frame to obtain a plurality of candidate sample images may be implemented by the following technical solutions: acquiring a sliding interval distance for executing sliding window sampling processing; and sequentially executing a plurality of times of sliding window sampling processing based on the sliding interval distance in the search range to obtain a plurality of candidate sampling images.
Continuing the example above: if the object image containing object A in the third frame has length w and width h, then the search range has length 2w and width 2h, each candidate sampling image has length w and width h, and the search interval (sliding interval distance) is 0.2w and 0.2h per step; for example, each slide moves 0.2w in the length direction with no change in the width direction, or 0.2h in the width direction with no change in the length direction, and the border of the candidate sampling image is the border of the sliding window at each step.
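The sliding-window sampling described above can be sketched as follows; the function name and the example coordinates are assumptions, while the 2w by 2h search range, the w by h window, and the 0.2w / 0.2h stride follow the example in the text:

```python
import numpy as np

def candidate_windows(center_x, center_y, w, h, step_frac=0.2):
    """Enumerate w-by-h sliding windows inside the 2w-by-2h search range
    centered on the third position, stepping 0.2w / 0.2h at a time."""
    x0, y0 = center_x - w, center_y - h          # top-left of search range
    xs = np.arange(x0, x0 + w + 1e-6, step_frac * w)
    ys = np.arange(y0, y0 + h + 1e-6, step_frac * h)
    # Each tuple is (top-left x, top-left y, width, height).
    return [(x, y, w, h) for y in ys for x in xs]

wins = candidate_windows(center_x=320, center_y=240, w=40, h=60)
print(len(wins))  # 6 x 6 = 36 candidate sampling images
```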
In some embodiments, determining, from the plurality of candidate sampling images, a sampling image matching the target object based on the image features of the candidate sampling images may be implemented by the following technical solutions: acquiring the object image corresponding to the target object in the third frame; acquiring the gray-level histogram feature of the object image and of each candidate sampling image. A gray histogram is a function of the gray-level distribution, i.e., a statistic of the gray-level distribution in an image: it counts, over all pixels in the image, the frequency of occurrence of each gray level, and thus represents the number of pixels having a given gray level and reflects how frequently that gray level appears in the image. The gray histogram feature here may be the number of pixels having a certain gray level in the image, or the frequency of occurrence of a certain gray level in the image. The feature distance between the gray-level histogram feature of the object image and that of each candidate sampling image is then acquired, and the candidate sampling image corresponding to the maximum characteristic distance is taken as the sampling image matched with the target object.
As an example, a gray histogram feature of an image is taken as an image feature for image matching, see formula (1):
$$\mathrm{score}_i \;=\; D\big(H(c_i),\, H(A)\big) \qquad (1)$$

wherein $\mathrm{score}_i$ is the score of each candidate sampled image, $H(c_i)$ is the gray-level histogram feature of the candidate sampled image $c_i$, $H(A)$ is the gray-level histogram feature of the object image of object A, and $D(\cdot,\cdot)$ is the Euclidean distance.
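A hedged sketch of the histogram matching behind formula (1); the helper names are assumptions, and the candidate with the smallest Euclidean distance is kept here, following the "closest candidate" reading used in the motion detection section later in this description:

```python
import numpy as np

def gray_histogram(img_gray: np.ndarray, bins: int = 256) -> np.ndarray:
    """Gray-level histogram feature: pixel counts per gray level."""
    hist, _ = np.histogram(img_gray, bins=bins, range=(0, 256))
    return hist.astype(np.float64)

def match_candidate(object_img, candidates):
    """Score each candidate sampling image against object A's image by the
    Euclidean distance between gray-histogram features (formula (1))."""
    ref = gray_histogram(object_img)
    dists = [np.linalg.norm(gray_histogram(c) - ref) for c in candidates]
    # Assumption: the closest candidate (smallest distance) is the match.
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
obj = rng.integers(0, 256, (60, 40))
cands = [rng.integers(0, 256, (60, 40)) for _ in range(36)]
print(match_candidate(obj, cands))  # index of the best-matching window
```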
In step 102, based on a first position of each target object corresponding to a first frame in the virtual scene and first motion information of each target object corresponding to the first frame, performing state change processing to obtain a second position of each target object corresponding to a second frame and second motion information of each target object corresponding to the second frame;
as an example, the second frame is a next frame adjacent to the first frame, the first position of each target object is obtained by performing image feature recognition processing based on a third position on the first frame of the virtual scene, the third position is a position of each target object corresponding to the third frame, the third frame is a previous frame adjacent to the first frame, and the second motion information and the first motion information include occlusion related information of the target object.
As an example, the first position of each target object is obtained by performing an image feature recognition process based on the third position on the first frame of the virtual scene, where the image feature recognition process may refer to the scheme of determining the first position of the object in the first frame before performing step 102.
In some embodiments, referring to fig. 3C, in step 102, based on the first position of each target object corresponding to the first frame in the virtual scene and the first motion information of each target object corresponding to the first frame, performing state change processing to obtain the second position of each target object corresponding to the second frame and the second motion information of each target object corresponding to the second frame, which may be implemented by performing steps 1021 to 1026 on each target object.
In step 1021, a position expectation change process is performed on the first position of the target object corresponding to the first frame to obtain a second position expectation of the target object corresponding to the second frame.
In some embodiments, in step 1021, performing position expectation change processing on a first position of the target object corresponding to the first frame to obtain a second position expectation of the target object corresponding to the second frame, may be implemented by the following technical solutions: generating a first position matrix corresponding to a first position based on the first position of the target object corresponding to the first frame; matrix multiplication processing is carried out on the motion state transition matrix and the first position matrix to obtain a first multiplication result, and mapping processing based on a relation function is carried out on the first multiplication result to obtain a second position matrix; a second position expectation is extracted from a second position matrix, the second position matrix comprising data of the second position expectation, the second position expectation describing an expectation of a second position of the target object in a second frame, the second position comprising an abscissa and an ordinate, where the expectation is also the expectation of the abscissa and the expectation of the ordinate.
As an example, the state change processing here is implemented by a Kalman filter model, see formula (2):

$$P_{t+1} \;=\; \mathrm{crp}\big(F\,P_t\big) \qquad (2)$$

wherein $F$ is the motion state transition matrix, $P_{t+1}$ is the second position matrix, which comprises the second position expectation data describing the expectation of object A's second position in the second frame (the second position comprises an abscissa and an ordinate, so the expectation is likewise an expectation of the abscissa and of the ordinate), $P_t$ is the first position matrix corresponding to the first position, and $\mathrm{crp}(\cdot)$ is the relation function.
In step 1022, a position variance change process is performed on the first position variance of the target object corresponding to the first frame, so as to obtain a second position variance of the target object corresponding to the second frame.
In some embodiments, in step 1022, a position variance change process is performed on a first position variance of the target object corresponding to the first frame to obtain a second position variance of the target object corresponding to the second frame, which may be implemented by the following technical solutions: based on a first position variance of a target object corresponding to a first frame, constructing a first position variance matrix of the target object corresponding to the first frame, multiplying a motion state transition matrix by the first position variance matrix of the target object corresponding to the first frame to obtain a second multiplication result, and multiplying the second multiplication result by a transpose matrix of the motion state transition matrix to obtain a third multiplication result; acquiring an object image corresponding to a target object in a first frame, and acquiring a gray histogram feature of the object image; generating a gray histogram feature matrix based on the gray histogram feature of the object image; and adding the third multiplication result and the gray histogram feature matrix to obtain a second position variance matrix of the target object in a second frame, and extracting a second position variance from the second position variance matrix.
As an example, the first position variance in the first frame characterizes the uncertainty of the target object at the first position of the first frame, and the second position variance of the target object in the second frame characterizes the uncertainty of the second position of the target object in the second frame. The state change processing here is also implemented by a Kalman filter model, see formula (3):

$$\Sigma_{t+1} \;=\; F\,\Sigma_t\,F^{\top} + Y \qquad (3)$$

wherein $F$ is the motion state transition matrix, $\Sigma_{t+1}$ is the second position variance matrix describing the uncertainty of object A at the second position in the second frame (the second position comprises an abscissa and an ordinate, so the uncertainty is likewise an uncertainty of the abscissa and of the ordinate), $\Sigma_t$ is the first position variance matrix of the object corresponding to the first frame, and $Y$ represents the gray histogram feature matrix of the object.
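The prediction step of formulas (2) and (3) can be sketched as follows, with an identity relation function crp and small illustrative matrices as assumptions:

```python
import numpy as np

def predict_position(F, P_t, Sigma_t, Y, crp=lambda m: m):
    """One prediction step for formulas (2) and (3): propagate the position
    expectation and its variance through the motion state transition matrix
    F. crp is the relation function; the identity default is an assumption."""
    P_next = crp(F @ P_t)                 # formula (2): position expectation
    Sigma_next = F @ Sigma_t @ F.T + Y    # formula (3): position variance
    return P_next, Sigma_next

F = np.eye(2)                          # assumed 2x2 transition matrix
P_t = np.array([[320.0], [240.0]])     # first position matrix (x, y)
Sigma_t = np.diag([4.0, 4.0])          # first position variance matrix
Y = np.diag([1.0, 1.0])                # gray-histogram feature matrix (assumed)
print(predict_position(F, P_t, Sigma_t, Y))
```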
In step 1023, a first random process constructed by the second position expectation and the second position variance is sampled to obtain a second position of the target object corresponding to the second frame.
As an example, the first random process may be a first random distribution, such as a Gaussian distribution, constructed from the second position expectation and the second position variance; common sampling methods include inverse transform sampling, rejection sampling, importance sampling, and Markov chain Monte Carlo sampling.
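For instance, a minimal sketch of drawing the second position from the Gaussian built from the second position expectation and variance; per-coordinate independence is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_second_position(mean_xy, var_xy):
    """Draw the second position from the Gaussian defined by the second
    position expectation and variance (independent per coordinate here,
    which is an assumption of this sketch)."""
    std = np.sqrt(np.asarray(var_xy))
    return rng.normal(loc=mean_xy, scale=std)

print(sample_second_position(mean_xy=[321.5, 238.9], var_xy=[5.0, 5.0]))
```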
In step 1024, the first motion information of the target object corresponding to the first frame is subjected to motion information expectation change processing to obtain a second motion information expectation of the target object in the second frame.
In some embodiments, in step 1024, performing expected motion information change processing on first motion information of the target object corresponding to the first frame to obtain second expected motion information of the target object in the second frame, which may be implemented by the following technical solutions: acquiring a first motion information expectation corresponding to the first motion information; constructing a first motion information matrix corresponding to a first motion information expectation based on the first motion information expectation of the target object in the first frame; matrix multiplication processing is carried out on the motion state transition matrix and the first motion information matrix to obtain a fourth multiplication result, and mapping processing based on a relation function is carried out on the fourth multiplication result to obtain a second motion information matrix; a second motion information expectation is extracted from the second motion information matrix.
As an example, the first motion information matrix is constructed based on the first motion information expectation of the target object in the first frame, and the second motion information matrix is constructed based on the second motion information expectation of the target object in the second frame. Both are 1 × N matrices, where N is the dimension of the first motion information and of the second motion information; the data of each dimension is the data of the information type of the corresponding dimension, such as the expectation of speed, the expectation of occlusion information, and the like. The occlusion information includes overlapping information and proximity information; the proximity information includes the position (abscissa and ordinate) of the center of a proximate object, i.e., an object whose distance from object A is smaller than a distance threshold; the overlapping information is expressed by a repeat flag, where repeat = 0 indicates no occluding object and repeat = 1 indicates an occluding object. The first motion information characterizes the speed and occlusion information of the target object corresponding to the first frame, and the second motion information matrix characterizes the speed and occlusion information of the target object in the second frame. The state change processing is implemented by a Kalman filter model, see formula (4):

$$M_{t+1} \;=\; \mathrm{crp}\big(F\,M_t\big) \qquad (4)$$

wherein $F$ is the motion state transition matrix, $M_{t+1}$ is the second motion information matrix describing the expectation of the second motion information of object A in the second frame, $M_t$ is the first motion information matrix corresponding to the first motion information expectation, and $\mathrm{crp}(\cdot)$ is the relation function.
In step 1025, motion information variance change processing is performed on the first motion information of the target object corresponding to the first frame, to obtain the second motion information variance of the target object corresponding to the second frame.
In some embodiments, in step 1025, a motion information variance change process is performed on first motion information of the target object corresponding to the first frame to obtain a second motion information variance of the target object corresponding to the second frame, which may be implemented by the following technical solutions: acquiring a first motion information variance corresponding to the first motion information; multiplying the motion state transition matrix by a first motion information variance of a first frame corresponding to the target object to obtain a fifth multiplication result, and multiplying the fifth multiplication result by a transposed matrix of the motion state transition matrix to obtain a sixth multiplication result; acquiring an object image corresponding to a target object in a first frame, and acquiring a gray histogram feature of the object image; generating a gray histogram feature matrix based on the gray histogram feature of the object image; and adding the sixth multiplication result and the gray histogram feature matrix to obtain a second motion information variance of the target object in a second frame.
As an example, the second motion information variance of the target object in the second frame characterizes the uncertainty of the second motion information of the target object in the second frame, and the first motion information variance in the first frame characterizes the uncertainty of the first motion information of the target object corresponding to the first frame. Here, the state change processing is also realized by a Kalman filter model, see formula (5):

$$V_{t+1} \;=\; F\,V_t\,F^{\top} + Y \qquad (5)$$

wherein $F$ is the motion state transition matrix, $V_{t+1}$ is the second motion information variance matrix describing the uncertainty of the second motion information of object A in the second frame, $V_t$ is the first motion information variance matrix of the object corresponding to the first frame, and $Y$ represents the gray histogram feature matrix of the object.
In step 1026, a random process constructed by the second motion information expectation and the second motion information variance is sampled to obtain second motion information of the second frame corresponding to the target object.
In step 103, when the second position of the target object corresponding to the second frame is abnormal, the second position of the target object is smoothed based on the occlusion related information of the target object and the correction coefficient corresponding to the object type, so as to obtain the position of the target object, and the second position is replaced by the position.
In some embodiments, in step 103, smoothing the second position of the target object based on the occlusion-related information of the target object and the correction coefficient of the corresponding object type to obtain the position of the target object may be implemented by the following technical solutions. The following processing is performed for each target object: the historical positions of the target object are acquired from the positions of the historical frames based on the occlusion-related information of the target object. When the track formed by the historical positions of the target object is abnormal, the historical positions are summed; summing the historical positions means summing the abscissas of the historical positions and summing the ordinates of the historical positions separately, with the subsequent processing executed separately on the abscissa summation result and on the ordinate summation result. The summation result is multiplied by the correction coefficient corresponding to the object type to obtain an index coefficient; the correction coefficient comprises a correction coefficient corresponding to the ordinate and a correction coefficient corresponding to the abscissa, and is a preconfigured coefficient obtained through multiple tests, used to represent the weights of the three-dimensional moving object and of the pending object. The correction coefficient corresponding to the three-dimensional moving object (moving object type) is 0.8 and the correction coefficient corresponding to the pending object (pending type) is 0.2; because the index coefficient is obtained by multiplying the correction coefficient by the sum of the historical positions, the correction coefficient can correct the influence of the historical positions on target objects of different object types, thereby improving track correction accuracy in a targeted manner. Exponential calculation processing is then performed based on the index coefficient to obtain an exponential calculation result, the exponential calculation result corresponding to the ordinate is multiplied by the exponential calculation result corresponding to the abscissa to obtain a seventh multiplication result, and a position positively correlated with the seventh multiplication result is acquired.
As an example, when the position of object A is abnormal, the motion trajectory over the historical frames is obtained; see fig. 7, which is a schematic diagram of the smoothing processing of the data processing method provided in the embodiment of the present application. Assume the historical positions from the j-th frame to the (j+k)-th frame are $\mathrm{position}(s_j), \ldots, \mathrm{position}(s_{j+k})$; when this historical motion track does not accord with the motion principle, corrected prediction is performed with the track smoothing algorithm, see formula (6):

$$\mathrm{position}^{u}(s_{t+1}) \;=\; K\cdot \mathrm{position}(s_{t+1})\cdot \exp\!\Big(a\sum_{i=j}^{j+k} x(s_i)\Big)\cdot \exp\!\Big(a\sum_{i=j}^{j+k} y(s_i)\Big) \qquad (6)$$

wherein $\mathrm{position}^{u}(s_{t+1})$ is the updated position in frame t+1, $\mathrm{position}(s_{t+1})$ is the second position in frame t+1 before the update, $\mathrm{position}(s_t)$ is the first position in the t-th frame, $x(s_i)$ and $y(s_i)$ are the abscissa and ordinate of the historical position $\mathrm{position}(s_i)$, $K$ is a constant whose value is 2.157 in the experiments, and $a$ represents the weight of the three-dimensional moving object or of the pending object: the weight of the three-dimensional moving object (moving object type) is 0.8, and the weight of the pending object (pending type) is 0.2.
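A sketch of the smoothing rule as reconstructed in formula (6); since the original formula image is not available, the exact combination of terms here is an assumption, and the demonstration uses coordinates normalized to [0, 1] so the exponentials stay finite:

```python
import numpy as np

def smooth_position(pos_t1, history, K=2.157, a=0.8):
    """Correct an abnormal second position with the track smoothing rule
    reconstructed as formula (6). history holds (x_i, y_i) pairs for
    frames j..j+k; a is 0.8 for moving objects and 0.2 for pending
    objects. The exact combination below is an assumption."""
    xs = np.array([p[0] for p in history])
    ys = np.array([p[1] for p in history])
    factor = K * np.exp(a * xs.sum()) * np.exp(a * ys.sum())
    return factor * np.asarray(pos_t1)

# Demo with normalized coordinates and a pending object (a = 0.2).
history = [(0.32, 0.24), (0.33, 0.25), (0.31, 0.26)]
print(smooth_position((0.90, 0.10), history, a=0.2))
```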
The embodiments of the present application fuse the image features and the modeling features of the objects in the virtual scene for comprehensive processing: specifically, the positions of the objects are determined by image feature identification processing, the object types are determined from the modeling features, the objects are bound to one another by their relationships, and the motion track of each object is marked and predicted.
In the following, an exemplary application of the embodiments of the present application in a practical application scenario will be described.
In some embodiments, a user logs in a client (e.g., a network-version game application) operated by a terminal through an account, during the game operation, a plurality of objects (first objects) are displayed in a virtual scene, including a moving object (a target object) and a stationary object, the terminal 400 sends a real-time game frame to a server along with the movement of the moving object, the server predicts the position of each target object in each game frame in real time through a data processing method provided by the embodiment of the present application, and returns the position of each target object in each frame to the terminal, so as to present a position mark of each target object in a human-computer interaction interface of the terminal.
In some embodiments, referring to fig. 4, fig. 4 is an interface schematic diagram of a data processing method provided in the embodiment of the present application, and the data processing method provided in the embodiment of the present application has been applied to the field of game automation, and can perform position prediction on a main character of a game, thereby implementing game automation.
In some embodiments, referring to fig. 5, fig. 5 is a logic framework diagram of the data processing method provided in an embodiment of the present application; the data processing method is mainly divided into three parts: the motion detection module, the motion correlation module, and the motion track reconnection module. The motion detection module tracks using the gray-level histogram features and distinguishes two-dimensional motion from three-dimensional motion using graphical feature statistics: three-dimensional motion most intuitively reflects the game characters in a game, whereas two-dimensional motion is only a background motion picture with no obvious graphical features. The motion correlation module establishes correlation relationships between moving objects, for example, the matching of adjacent targets. In a crowded multi-object scene there is a large amount of interaction and occlusion between different characters, which results in broken or overlapping trajectories. To alleviate these problems, a reconnection mechanism based on dynamic motion is designed: the motion track reconnection module performs full-life-cycle prediction on all trajectories, so that even if a position vacancy occurs in a trajectory, the trajectory can serve as a spatio-temporal clue awaiting association with a position prediction detection, and a pseudo-observation track filling strategy is used to ensure that motion trajectories are not lost.
In some embodiments, referring to fig. 6, fig. 6 is a motion detection flowchart of a data processing method provided by an embodiment of the present application.
In the motion detection process, the graphical features of the game character are mainly used for motion recognition. In a Unity or UE three-dimensional game, game characters are created by three-dimensional modeling, but the user sees and operates a two-dimensional game screen, so motion detection needs to be performed in combination with the graphical features of the game.
The motion detection module combines image feature matching of game characters and, at the same time, uses the graphical features of game objects to distinguish two-dimensional motion from three-dimensional motion. Because game characters are modeled with a three-dimensional engine, their three-dimensional features need to be extracted for distinguishing and marking. The method uses OpenGL ES Application Program Interface (API) features: hook statistics are performed on key OpenGL ES APIs, such as the pixel, draw-call, and triangle counters, and objects with a high feature quantity are then screened out as three-dimensional moving objects. If an object is a background image, no three-dimensional modeling process exists, or if its feature quantity is very low, it is considered a non-three-dimensional moving object; objects with a value of 0 are marked as definitely two-dimensional static objects, and the other objects are marked as pending moving objects. In the subsequent calculation process, the weight of a three-dimensional moving object is set to 0.8, the weight of a two-dimensional static object to 0.1, and the weight of a pending object to 0.2.
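A sketch of this graphical-feature screening; the counter names, the "high feature quantity" cutoff, and the dictionary layout are illustrative assumptions, while the 0.8 / 0.2 / 0.1 weights come from the text:

```python
# Hooked OpenGL ES counters per object are thresholded into 3D-moving /
# pending / 2D-static marks with the stated weights.
FEATURE_WEIGHTS = {"3d_moving": 0.8, "pending": 0.2, "2d_static": 0.1}

def mark_object(api_stats: dict) -> str:
    total = api_stats.get("draw_calls", 0) + api_stats.get("triangles", 0)
    if total == 0:
        return "2d_static"        # no three-dimensional modeling process
    if total > 1000:              # assumed "high feature quantity" cutoff
        return "3d_moving"
    return "pending"

stats = {"hero": {"draw_calls": 120, "triangles": 9000},
         "background": {"draw_calls": 0, "triangles": 0}}
for name, s in stats.items():
    mark = mark_object(s)
    print(name, mark, FEATURE_WEIGHTS[mark])
```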
In the area around the marked object A in the tth frame, the moving objects in the front and back frames have no great distance difference, so the target in the (t + 1) th frame is matched by using the area around the mark of the tth frame. For a plurality of object images, performing nearby matching by using a sliding window strategy, taking a surrounding area marked by a t-th frame as a search range, assuming that the length and the width of an object a are w and h, the size of a search box is 2w and 2h, each search interval is 0.2w and 0.2h, the size of a collected image is w and h, and using a gray histogram feature of the image as an image feature for performing image matching, see formula (7):
score(c_i) = EuclideanDistance(H(c_i), H(x_A))    (7);

where score(c_i) is the score of each acquired image c_i, H(c_i) is the gray-level histogram feature of the acquired image c_i, H(x_A) is the gray-level histogram feature of the object image x_A of object A, and EuclideanDistance(·, ·) denotes the Euclidean distance between the two histogram features.
In the embodiment of the application, the Euclidean distance is used to score the candidate targets (the acquired images), and the candidate target closest to object A of the t-th frame is found and used as the motion detection object of the (t+1)-th frame (to detect whether it is a three-dimensional moving object).
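A minimal Python sketch of this sliding-window matching, assuming OpenCV is available for the gray-level histogram; it returns the window center minimizing the Euclidean distance of formula (7), and the bounds handling is simplified:

import numpy as np
import cv2  # OpenCV, assumed available for the histogram computation

def gray_histogram(image_gray, bins=256):
    # Normalized gray-level histogram feature of a grayscale image
    hist = cv2.calcHist([image_gray], [0], None, [bins], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-8)

def match_in_next_frame(frame_t1_gray, object_image_gray, cx, cy, w, h):
    # Search a 2w x 2h box around (cx, cy) with strides 0.2w / 0.2h and
    # return the window center whose histogram is closest to object A's.
    template_feature = gray_histogram(object_image_gray)
    best_score, best_center = np.inf, (cx, cy)
    for x in range(int(cx - w), int(cx + w), max(1, int(0.2 * w))):
        for y in range(int(cy - h), int(cy + h), max(1, int(0.2 * h))):
            x0, y0 = x - w // 2, y - h // 2
            if x0 < 0 or y0 < 0:
                continue  # window falls outside the frame
            window = frame_t1_gray[y0:y0 + h, x0:x0 + w]
            if window.shape != (h, w):
                continue
            score = np.linalg.norm(gray_histogram(window) - template_feature)
            if score < best_score:
                best_score, best_center = score, (x, y)
    return best_center, best_score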
In some embodiments, a motion correlation model is established to build relationships between different moving objects (that is, objects other than two-dimensional static objects). The relationships between moving objects include the distance relationship, the occlusion-overlap relationship, and so on. The motion correlation model is shown in formula (8):
S_{t+1} = crp(F · S_t, Y, Q)    (8);
where Y represents the histogram feature of the moving object, Q represents the process uncertainty, F represents the state transition matrix of the Kalman filter, and S = [position, speed, neighbor, repeat] represents the rectangular position, the moving speed and the center of the adjacent target of the moving object; repeat = 0 indicates a target that is not occluded, and repeat = 1 indicates an occluded target. The crp function is the relation function established according to the relationship between adjacent frames, and the state transition matrix of the Kalman filter can be updated by a covariance algorithm.
A Kalman filtering model is established for each moving object, the estimated matrix S is used to mark the moving object, and the matrices S of different moving objects are then combined into a target matrix, thereby establishing complete relevance between different targets.
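The following Python sketch illustrates one possible reading of formula (8), under stated assumptions: the state S is expanded to seven scalars (rectangular position, speed, neighbor center, repeat flag), F is a constant-velocity Kalman transition matrix, and the crp relation function simply refreshes the neighbor center and the occlusion flag from the adjacent targets of the current frame; none of these concrete choices are prescribed by this application:

import numpy as np

def crp(state, neighbor_centers):
    # Illustrative relation function: refresh the neighbor center and the
    # repeat (occlusion) flag from the adjacent targets of the current frame.
    px, py, vx, vy, nx, ny, repeat = state
    if neighbor_centers:
        nx, ny = min(neighbor_centers,
                     key=lambda c: (c[0] - px) ** 2 + (c[1] - py) ** 2)
        # assumed overlap test: a neighbor within 10 pixels counts as occlusion
        repeat = 1.0 if abs(nx - px) < 10 and abs(ny - py) < 10 else 0.0
    return np.array([px, py, vx, vy, nx, ny, repeat])

def predict_state(S, neighbor_centers=(), dt=1.0):
    # One step of formula (8): S_{t+1} = crp(F . S_t, ...)
    F = np.eye(7)
    F[0, 2] = F[1, 3] = dt  # position += speed * dt
    return crp(F @ S, list(neighbor_centers))

# Example state S = [px, py, vx, vy, nx, ny, repeat]
S = np.array([100.0, 50.0, 2.0, 0.0, 120.0, 55.0, 0.0])
S_next = predict_state(S, neighbor_centers=[(104.0, 52.0)])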
In some embodiments, in order to predict the game character accurately and describe the moving target's trajectory, the embodiment of the present application adopts a pseudo-observation trajectory filling strategy, which is equivalent to an occlusion filling strategy that imitates human-eye observation: when adjacent targets have an overlapping occlusion relationship, the assumed target motion trajectory is used by default to fill in the disappeared target rather than discarding it directly. For example, when an abnormality occurs in the position of a certain object, it may be caused by an overlapping occlusion relationship. Through the embodiment of the application, full-life-cycle prediction is performed on the trajectories of all moving objects; even if a position abnormality appears in a trajectory, the trajectory can serve as a spatio-temporal clue awaiting association with a position prediction detection, and the pseudo-observation trajectory filling strategy ensures that motion trajectories are not lost. The motion detection part records the modeling feature count of each moving object. When an overlapping target or a close adjacent target exists, the modeling feature count increases, so the modeling features of two adjacent frames can be subtracted; the surplus modeling features then belong to the occluded target, and the position of the occluded target is predicted from the position distribution information of those modeling features, such as pixel-shading offset spatial information.
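A small sketch of the two mechanisms just described, the adjacent-frame subtraction of modeling-feature counts and the pseudo-observation filling of a track; the dictionary layout and the pseudo flag are illustrative assumptions:

def surplus_features(counts_t, counts_t1):
    # Subtract the modeling-feature counts of frame t from those of frame
    # t+1; any surplus is attributed to an occluded or overlapping target.
    return {obj_id: counts_t1[obj_id] - counts_t.get(obj_id, 0)
            for obj_id in counts_t1
            if counts_t1[obj_id] > counts_t.get(obj_id, 0)}

def fill_track(track, predicted_position):
    # When a target temporarily disappears, append the predicted position
    # as a pseudo observation instead of breaking the trajectory.
    track.append({"position": predicted_position, "pseudo": True})
    return track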
When the position of a certain object is abnormal, the motion trajectory of the historical frames is obtained; see fig. 7, which is a schematic diagram of the smoothing process of the data processing method provided in the embodiment of the present application. Assume that the motion positions of the j-th to (j+k)-th frames are, in order, position(s_j), …, position(s_{j+k}). When the motion trajectory of the historical frames does not conform to the motion principle, corrected prediction is performed using a trajectory smoothing algorithm, see formula (9):
position'(s_{t+1}) = K · exp(a · (position(s_j) + … + position(s_{j+k}))) · position(s_{t+1})    (9);

where position'(s_{t+1}) is the updated position in frame t+1, position(s_{t+1}) is the second position in frame t+1 before the update, position(s_j), …, position(s_{j+k}) are the historical positions up to the first position in the t-th frame, K is a constant whose value is 2.157 in the experiments, and a represents the weight of the three-dimensional moving object or the object to be determined: the weight of a three-dimensional moving object (moving object type) is 0.8, and the weight of an object to be determined (pending type) is 0.2.
In this way, the correlation among targets and their preceding and following motion trajectories are processed in context, making trajectory prediction between targets very accurate. Meanwhile, if the trajectory predicted for a pending object conforms to the movement strategy, that is, the pending object processed by formula (9) does not remain stationary for a long time, the pending object is a three-dimensional moving object and its label is updated to three-dimensional moving object; otherwise, it is marked as a two-dimensional static object.
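A Python sketch of formula (9), using K = 2.157 and the type weights given above; positions are treated as normalized scalars purely for illustration (with raw pixel coordinates the exponential term would overflow), and in practice each coordinate would be smoothed separately:

import numpy as np

K = 2.157                                          # constant from the experiments
TYPE_WEIGHT = {"3d_moving": 0.8, "pending": 0.2}   # correction coefficient a

def smooth_position(history, raw_position_t1, object_type):
    # history: position(s_j) ... position(s_{j+k}) of the historical frames;
    # raw_position_t1: the second position in frame t+1 before the update.
    a = TYPE_WEIGHT[object_type]
    index_coefficient = a * float(np.sum(history))  # a times the summed history
    return K * np.exp(index_coefficient) * raw_position_t1  # formula (9)

# e.g. smooth_position([0.11, 0.12, 0.12], 0.5, "pending")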
The method and device of the present application achieve a good effect in trajectory position prediction of game characters; with this technology, the positions of game characters can be correctly identified in multi-character, multi-scene settings, and the efficiency of numerical analysis of game characters is optimized to a great extent.
It is understood that, in the embodiments of the present application, data related to user information and the like require the user's consent or authorization when the embodiments of the present application are applied to specific products or technologies, and the collection, use and processing of the related data must comply with the relevant laws, regulations and standards of the relevant countries and regions.
Continuing with the exemplary structure of the data processing device 455 provided by the embodiments of the present application as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the data processing device 455 of the memory 450 may include: a type module 4551, configured to perform object type identification processing on each first object based on the modeling features of each first object in the virtual scene to obtain the object type of each first object, and to take first objects that do not belong to the static object type as target objects; a state module 4552, configured to perform state change processing based on a first position of each target object corresponding to the first frame in the virtual scene and first motion information of each target object corresponding to the first frame, to obtain a second position of each target object corresponding to the second frame and second motion information of each target object corresponding to the second frame; the second frame is the next frame adjacent to the first frame, the first position of each target object is obtained by performing image feature identification processing based on a third position on the first frame of the virtual scene, the third position is the position of each target object corresponding to the third frame, the third frame is the previous frame adjacent to the first frame, and the second motion information and the first motion information include occlusion related information of the target object; a position module 4553, configured to, when the second position of the target object corresponding to the second frame is abnormal, perform smoothing processing on the second position of the target object based on the occlusion related information of the target object and the correction coefficient of the corresponding object type to obtain a position of the target object, and replace the second position with that position.
In some embodiments, the status module 4552 is further configured to: the following processing is performed for each target object: acquiring a search range which takes the third position as a center and accords with a set area in the first frame; performing sliding window sampling processing on a search range in a first frame to obtain a plurality of candidate sampling images; and determining a sampling image matched with the target object from the plurality of candidate sampling images, and taking the center of the sampling image as the first position of the target object in the first frame.
In some embodiments, the status module 4552 is further configured to: acquiring a sliding interval distance for executing sliding window sampling processing; and sequentially executing a plurality of times of sliding window sampling processing based on the sliding interval distance in the search range to obtain a plurality of candidate sampling images.
In some embodiments, the status module 4552 is further configured to: acquiring an object image corresponding to the target object in the third frame; acquiring the gray level histogram characteristics of the object image and acquiring the gray level histogram characteristics of each candidate sampling image; acquiring a characteristic distance between the gray level histogram characteristic of the object image and the gray level histogram characteristic of each candidate sampling image; and taking the candidate sampling image corresponding to the maximum characteristic distance as a sampling image matched with the target object.
In some embodiments, the type module 4551 is further configured to: the following processing is performed for each first object: acquiring an object image corresponding to the first object in the second frame; extracting modeling characteristics corresponding to a first object from the object image, and performing statistical processing on the modeling characteristics to obtain statistical values; when the statistics of the first object is larger than a first statistic threshold value, identifying the object type of the first object as a moving object type; when the statistic value of the first object is smaller than a second statistic threshold value, the object type of the first object is a static object type, wherein the second statistic threshold value is smaller than the first statistic threshold value; and when the statistical value of the first object is not less than the second statistical threshold value and not more than the first statistical threshold value, the object type of the first object is set as a pending type.
In some embodiments, the status module 4552 is further configured to: perform the following processing for each target object: performing position expectation change processing on a first position of the target object corresponding to the first frame to obtain a second position expectation of the target object corresponding to the second frame; performing position variance change processing on a first position variance of the target object corresponding to the first frame to obtain a second position variance of the target object corresponding to the second frame; sampling a first random process constructed from the second position expectation and the second position variance to obtain a second position of the target object corresponding to the second frame; performing motion information expectation change processing on the first motion information of the target object corresponding to the first frame to obtain a second motion information expectation of the target object in the second frame; performing motion information variance change processing on the first motion information of the target object corresponding to the first frame to obtain a second motion information variance of the target object corresponding to the second frame; and sampling a random process constructed from the second motion information expectation and the second motion information variance to obtain second motion information of the target object corresponding to the second frame.
In some embodiments, the status module 4552 is further configured to: generating a first position matrix corresponding to a first position based on the first position of the target object corresponding to the first frame; matrix multiplication processing is carried out on the motion state transition matrix and the first position matrix to obtain a first multiplication result, and mapping processing based on a relation function is carried out on the first multiplication result to obtain a second position matrix; a second location expectation is extracted from the second location matrix.
In some embodiments, the status module 4552 is further configured to: constructing a first position variance matrix of the target object corresponding to the first frame based on a first position variance of the target object corresponding to the first frame, wherein the first position variance in the first frame represents the uncertainty of the target object at the first position of the first frame; multiplying the motion state transition matrix by a first position variance matrix of a first frame corresponding to the target object to obtain a second multiplication result; multiplying the second multiplication result by the transposed matrix of the motion state transfer matrix to obtain a third multiplication result; acquiring an object image corresponding to a target object in a first frame, and acquiring a gray histogram feature of the object image; generating a gray level histogram feature matrix based on the gray level histogram features of the object image; adding the third multiplication result and the gray histogram feature matrix to obtain a second position variance matrix of the target object in a second frame, wherein the second position variance of the target object in the second frame represents the uncertainty of the target object in a second position of the second frame; a second location variance is extracted from the second location variance matrix.
In some embodiments, the status module 4552 is further configured to: acquiring a first motion information expectation corresponding to the first motion information; constructing a first motion information matrix corresponding to a first motion information expectation based on the first motion information expectation of the target object in the first frame; the first motion information matrix represents the speed and the shielding information of a target object corresponding to a first frame; performing matrix multiplication processing on the motion state transition matrix and the first motion information matrix to obtain a fourth multiplication result, and performing mapping processing based on a relation function on the fourth multiplication result to obtain a second motion information matrix; the second motion information matrix represents the speed and the shielding information of the target object in a second frame; a second motion information expectation is extracted from the second motion information matrix.
In some embodiments, the status module 4552 is further configured to: acquiring a first motion information variance corresponding to the first motion information; multiplying the motion state transition matrix by a first motion information variance of a first frame corresponding to the target object to obtain a fifth multiplication result, wherein the first motion information variance in the first frame represents the uncertainty of the first motion information of the first frame corresponding to the target object; multiplying the fifth multiplication result by the transposed matrix of the motion state transfer matrix to obtain a sixth multiplication result; acquiring an object image corresponding to a target object in a first frame, and acquiring a gray level histogram feature of the object image; generating a gray level histogram feature matrix based on the gray level histogram features of the object image; and adding the sixth multiplication result and the gray histogram feature matrix to obtain a second motion information variance of the target object in the second frame, wherein the second motion information variance of the target object in the second frame represents the uncertainty of the second motion information of the target object in the second frame.
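The expectation and variance changes described by the state module can be sketched together as one Kalman-style predict-and-sample step; the vector shapes and the use of diag() to turn the gray-level histogram feature into a matrix are assumptions made for illustration:

import numpy as np

def predict_and_sample(mean, P, F, gray_hist_feature,
                       rng=np.random.default_rng()):
    # mean: expectation vector (position or motion information);
    # P: its variance (uncertainty) matrix; F: motion state transition matrix;
    # gray_hist_feature: gray-level histogram feature of the object image.
    mean_next = F @ mean                                 # expectation change
    Q = np.diag(gray_hist_feature[:len(mean)])           # feature matrix as noise term
    P_next = F @ P @ F.T + Q                             # variance change
    sample = rng.multivariate_normal(mean_next, P_next)  # sample the random process
    return sample, mean_next, P_next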
In some embodiments, the location module 4553 is further configured to: perform the following processing for each target object: acquiring historical positions of the target object from a plurality of positions of each historical frame based on the occlusion related information of the target object; when the trajectory formed by the historical positions of the target object is abnormal, summing the multiple historical positions, and multiplying the summation result by the correction coefficient of the corresponding object type to obtain an index coefficient; performing index calculation processing based on the index coefficient to obtain an index calculation result, and multiplying the index calculation result by the second position to obtain a seventh multiplication result; and obtaining a position positively correlated with the seventh multiplication result.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the data processing method described in the embodiment of the present application.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to execute the data processing method provided by the embodiments of the present application.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, the image features and the modeling features of the objects in the virtual scene are fused for comprehensive processing: the positions of the objects are determined using image feature recognition, the object types are determined from the modeling features, the objects are associated and bound to one another, and the motion trajectory of each object is marked and predicted.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (13)

1. A method of data processing, the method comprising:
based on modeling characteristics of each first object in a virtual scene, carrying out object type identification processing on each first object to obtain an object type of each first object, and taking the first object which does not belong to a static object type as a target object;
performing state change processing based on a first position of each target object corresponding to a first frame in the virtual scene and first motion information of each target object corresponding to the first frame to obtain a second position of each target object corresponding to a second frame and second motion information of each target object corresponding to the second frame;
the second frame is a next frame adjacent to the first frame, the first position of each target object is obtained by performing image feature recognition processing based on a third position on the first frame of the virtual scene, the third position is a position of each target object corresponding to a third frame, the third frame is a previous frame adjacent to the first frame, and the second motion information and the first motion information include occlusion related information of the target object;
when the second position of the target object corresponding to the second frame is abnormal, executing the following processing for each target object:
acquiring historical positions of the target object from a plurality of positions of each historical frame based on the shielding correlation information of the target object;
when the track formed by the historical positions of the target object is abnormal, summing the historical positions, and multiplying the sum result by a correction coefficient corresponding to the object type to obtain an index coefficient;
performing index calculation processing based on the index coefficient to obtain an index calculation result, and multiplying the index calculation result and the second position to obtain a seventh multiplication result;
obtaining a position positively correlated with the seventh multiplication result, and replacing the second position with the position.
2. The method of claim 1, further comprising:
performing the following processing for each of the target objects:
acquiring a search range which is centered at the third position and accords with a set area in the first frame;
performing sliding window sampling processing on the search range in the first frame to obtain a plurality of candidate sampling images;
determining a sampling image matched with the target object from the candidate sampling images based on the image characteristics of the candidate sampling images, and taking the center of the sampling image as the first position of the target object in the first frame.
3. The method of claim 2, wherein the performing a sliding window sampling process on the search range in the first frame to obtain a plurality of candidate sample images comprises:
acquiring a sliding interval distance for executing the sliding window sampling processing;
and sequentially executing a plurality of times of sliding window sampling processing based on the sliding interval distance in the search range to obtain a plurality of candidate sampling images.
4. The method of claim 2, wherein the image feature is a gray-level histogram feature, and wherein determining the sample image matching the target object from the plurality of candidate sample images based on the image features of the plurality of candidate sample images comprises:
acquiring an object image corresponding to the target object in the third frame;
acquiring the gray level histogram characteristics of the object image and acquiring the gray level histogram characteristics of each candidate sampling image;
acquiring a characteristic distance between the gray level histogram characteristic of the object image and the gray level histogram characteristic of each candidate sampling image;
and taking the candidate sampling image corresponding to the maximum characteristic distance as the sampling image matched with the target object.
5. The method of claim 1, wherein performing an object type identification process on each first object based on the modeled features of each first object in the virtual scene to obtain an object type of each first object comprises:
performing the following processing for each of the first objects:
acquiring an object image corresponding to the first object in the second frame;
extracting modeling characteristics corresponding to the first object from the object image, and performing statistical processing on the modeling characteristics to obtain a statistical value;
identifying an object type of the first object as a moving object type when the statistics of the first object are greater than a first statistical threshold;
when the statistics value of the first object is smaller than a second statistical threshold value, the object type of the first object is a static object type, wherein the second statistical threshold value is smaller than the first statistical threshold value;
and when the statistic value of the first object is not less than the second statistic threshold value and not more than the first statistic threshold value, the object type of the first object is a pending type.
6. The method according to claim 1, wherein performing state change processing based on a first position of each target object in the virtual scene corresponding to a first frame and first motion information of each target object corresponding to the first frame to obtain a second position of each target object corresponding to a second frame and second motion information of each target object corresponding to the second frame comprises:
performing the following processing for each of the target objects:
performing position expectation change processing on a first position of the target object corresponding to the first frame to obtain a second position expectation of the target object corresponding to the second frame;
performing position variance change processing on a first position variance of the target object corresponding to the first frame to obtain a second position variance of the target object corresponding to the second frame;
sampling a first random process constructed by the second position expectation and the second position variance to obtain a second position of the target object corresponding to the second frame;
performing motion information expectation change processing on first motion information of the target object corresponding to the first frame to obtain second motion information expectation of the target object in the second frame;
carrying out motion information variance change processing on first motion information of the target object corresponding to the first frame to obtain a second motion information variance of the target object corresponding to the second frame;
and sampling a random process constructed by the second motion information expectation and the second motion information variance to obtain second motion information of the target object corresponding to the second frame.
7. The method according to claim 6, wherein performing a position expectation change process on a first position of the target object corresponding to the first frame to obtain a second position expectation of the target object corresponding to the second frame comprises:
generating a first position matrix corresponding to a first position of the target object on the basis of the first position corresponding to the first frame;
performing matrix multiplication processing on the motion state transition matrix and the first position matrix to obtain a first multiplication result, and performing mapping processing based on a relation function on the first multiplication result to obtain a second position matrix;
a second location expectation is extracted from the second location matrix.
8. The method according to claim 6, wherein the performing a position variance change process on a first position variance of the target object corresponding to the first frame to obtain a second position variance of the target object corresponding to the second frame comprises:
constructing a first position variance matrix of the target object corresponding to the first frame based on a first position variance of the target object corresponding to the first frame, wherein the first position variance in the first frame characterizes an uncertainty of the target object at a first position of the first frame;
multiplying the motion state transition matrix by a first position variance matrix of the target object corresponding to the first frame to obtain a second multiplication result;
multiplying the second multiplication result by the transposed matrix of the motion state transfer matrix to obtain a third multiplication result;
acquiring an object image corresponding to the target object in the first frame, and acquiring a gray histogram feature of the object image;
generating a gray histogram feature matrix based on the gray histogram feature of the object image;
adding the third multiplication result and the gray histogram feature matrix to obtain a second position variance matrix of the target object in the second frame, wherein the second position variance of the target object in the second frame represents the uncertainty of the target object in the second position of the second frame;
and extracting the second position variance from the second position variance matrix.
9. The method according to claim 6, wherein performing motion information expectation change processing on the first motion information of the target object corresponding to the first frame to obtain a second motion information expectation of the target object in the second frame comprises:
acquiring a first motion information expectation corresponding to the first motion information;
constructing a first motion information matrix corresponding to a first motion information expectation of the target object in the first frame based on the first motion information expectation;
the first motion information matrix represents the speed and the occlusion information of the target object corresponding to the first frame;
performing matrix multiplication processing on the motion state transition matrix and the first motion information matrix to obtain a fourth multiplication result, and performing mapping processing based on a relation function on the fourth multiplication result to obtain a second motion information matrix;
wherein the second motion information matrix represents the speed and occlusion information of the target object in the second frame;
extracting the second motion information expectation from the second motion information matrix.
10. The method according to claim 6, wherein the performing a motion information variance change process on the first motion information of the target object corresponding to the first frame to obtain a second motion information variance of the target object corresponding to the second frame includes:
acquiring a first motion information variance corresponding to the first motion information;
multiplying the motion state transition matrix by a first motion information variance of the target object corresponding to the first frame to obtain a fifth multiplication result, wherein the first motion information variance in the first frame represents the uncertainty of the first motion information of the target object corresponding to the first frame;
multiplying the fifth multiplication result by a transposed matrix of the motion state transition matrix to obtain a sixth multiplication result;
acquiring an object image corresponding to the target object in the first frame, and acquiring a gray histogram feature of the object image;
generating a gray histogram feature matrix based on the gray histogram feature of the object image;
and adding the sixth multiplication result and the gray histogram feature matrix to obtain a second motion information variance of the target object in the second frame, wherein the second motion information variance of the target object in the second frame represents uncertainty of second motion information of the target object in the second frame.
11. A data processing apparatus, characterized in that the apparatus comprises:
the type module is used for carrying out object type identification processing on each first object based on the modeling characteristics of each first object in the virtual scene to obtain the object type of each first object, and taking the first objects which do not belong to the static object type as target objects;
a state module, configured to perform state change processing based on a first position of each target object corresponding to a first frame in the virtual scene and first motion information of each target object corresponding to the first frame, to obtain a second position of each target object corresponding to a second frame and second motion information of each target object corresponding to the second frame; the second frame is a next frame adjacent to the first frame, the first position of each target object is obtained by performing image feature recognition processing based on a third position on the first frame of the virtual scene, the third position is a position of each target object corresponding to a third frame, the third frame is a previous frame adjacent to the first frame, and the second motion information and the first motion information include occlusion related information of the target object;
a location module, configured to, when a second location of the target object corresponding to the second frame is abnormal, perform the following processing for each target object: acquiring historical positions of the target object from a plurality of positions of each historical frame based on the shielding correlation information of the target object; when the track formed by the historical positions of the target object is abnormal, summing the historical positions, and multiplying the sum result by a correction coefficient corresponding to the object type to obtain an index coefficient; performing index calculation processing based on the index coefficient to obtain an index calculation result, and multiplying the index calculation result by the second position to obtain a seventh multiplication result; obtaining a position positively correlated with the seventh multiplication result, and replacing the second position with the position.
12. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the data processing method of any one of claims 1 to 10 when executing executable instructions stored in the memory.
13. A computer readable storage medium storing executable instructions, wherein the executable instructions when executed by a processor implement the data processing method of any one of claims 1 to 10.
CN202211445032.3A 2022-11-18 2022-11-18 Data processing method, device, equipment and storage medium Active CN115619867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211445032.3A CN115619867B (en) 2022-11-18 2022-11-18 Data processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115619867A CN115619867A (en) 2023-01-17
CN115619867B (en) 2023-04-11

Family

ID=84878848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211445032.3A Active CN115619867B (en) 2022-11-18 2022-11-18 Data processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115619867B (en)




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant