CN117237409A - Shooting game sight correction method and system based on Internet of things - Google Patents


Info

Publication number
CN117237409A
Authority
CN
China
Prior art keywords
gun body
information
generating
solid model
sight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311143607.0A
Other languages
Chinese (zh)
Inventor
Wang Hui (王晖)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Feiman Thinking Digital Technology Co., Ltd.
Original Assignee
Guangzhou Feiman Thinking Digital Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Feiman Thinking Digital Technology Co., Ltd.
Priority to CN202311143607.0A
Publication of CN117237409A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention relates to a shooting game sight correction method and system based on the Internet of Things, and belongs to the technical field of the Internet of Things. The method introduces a singular value decomposition algorithm to reconstruct the bounding box of the multi-view gun body image and removes the redundant bounding box range, which reduces the computational complexity of the gun body solid model during interaction and improves the operational robustness of the interaction process.

Description

Shooting game sight correction method and system based on Internet of things
Technical Field
The invention relates to the technical field of the Internet of Things, and in particular to a shooting game sight correction method and system based on the Internet of Things.
Background
With rapid advances in graphics and display technology, computer games have developed quickly worldwide and have become one of the important forms of entertainment. At the same time, players' expectations for the game experience keep rising. Immersive games based on multi-projection display technology offer large, high-resolution, wide-viewing-angle game images that deliver a high-fidelity experience, and they have become an important research direction both domestically and abroad. Conventional computer games typically rely on a mouse, keyboard, or touch input for human-computer interaction. To create a highly immersive visual experience, immersive games require a more natural interaction style that lets players integrate fully into the game environment. Motion-sensing interaction devices control the game by tracking the player's movement; they provide a stronger sense of immersion than mouse, keyboard, or touch devices and are better suited to immersive games. Among immersive applications, shooting games that adopt motion-sensing interaction have attracted wide attention and have been applied successfully in interactive entertainment and simulation projects. The interaction between the physical gun body and the in-game data is accomplished through an on-site multi-view technique. However, during this interaction, the camera shooting angles cause the bounding box of the reconstructed three-dimensional gun body model to be too large, which increases the computational complexity of the information interaction process, shifts the sight position to some extent, and degrades the user's game experience.
Disclosure of Invention
The invention overcomes the defects of the prior art and provides a shooting game sight correction method and system based on the Internet of things.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the invention provides a shooting game sight correction method based on the Internet of Things, which comprises the following steps:
installing multi-view cameras at the shooting game venue, acquiring the multi-view image data of the gun body with which the shooting game is to interact through the multi-view cameras, and processing the multi-view gun body image data to generate gun body solid model diagrams;
performing feature extraction on the gun body solid model diagrams, obtaining the feature data in each gun body solid model diagram, and generating a comparison template from the feature data in the gun body solid model diagrams;
introducing a Siamese network tracking algorithm, and performing model tracking on each gun body solid model diagram against the comparison template through the Siamese network tracking algorithm to obtain the sight position information at the current moment;
and generating the relevant interaction information according to the sight position information at the current moment.
Further, in the method, processing the multi-view gun body image data to generate the gun body solid model diagram specifically comprises:
filtering and denoising the multi-view gun body image data to obtain preprocessed multi-view gun body image data, introducing a singular value decomposition algorithm, and performing redundancy processing on the preprocessed multi-view gun body image data through the singular value decomposition algorithm to generate redundancy-processed images;
selecting one of the redundancy-processed images as the registration image, generating source point cloud data from the registration image, introducing an ICP algorithm based on the source point cloud data, and generating the point cloud data to be registered from the remaining redundancy-processed images;
calculating the overlapping region and the non-overlapping region between the point cloud data to be registered and the source point cloud data according to the ICP algorithm, and introducing a robust kernel function to detect outliers in the overlapping and non-overlapping regions and generate an error transformation matrix;
and reconstructing the three-dimensional model from the overlapping and non-overlapping regions to generate an initial solid model diagram, and correcting the initial solid model diagram based on the error transformation matrix to generate the gun body solid model diagram.
Further, in the method, introducing a singular value decomposition algorithm and performing redundancy processing on the preprocessed multi-view gun body image data through the singular value decomposition algorithm to generate the redundancy-processed images specifically comprises:
describing the target point cloud of the preprocessed multi-view gun body image data to generate the associated covariance matrix, and introducing a singular value decomposition algorithm;
performing eigendecomposition on the covariance matrix according to the singular value decomposition algorithm to generate an orthogonal matrix, whose columns are the eigenvectors, and a diagonal matrix, and selecting one eigenvector in the orthogonal matrix as the reference;
constructing a new coordinate system from the reference, describing the target point cloud by the orthogonal matrix of column eigenvectors and the diagonal matrix, and generating the optimized bounding box by computing the bounding box of the target point cloud in the new coordinate system;
and obtaining the eight vertex coordinates of the optimized bounding box in the new coordinate system, remapping them into the world coordinate system through a coordinate transformation, and generating the redundancy-processed image.
Further, in the method, performing feature extraction on the gun body solid model diagrams, obtaining the feature data in each gun body solid model diagram, and generating a comparison template from the feature data specifically comprises:
introducing a feature pyramid, extracting the regional features of each gun body solid model through the feature pyramid, obtaining the feature data in each gun body solid model diagram, and introducing the Mahalanobis distance metric;
calculating the Mahalanobis distance values between the feature data of the gun body solid model diagrams according to the Mahalanobis distance metric, and judging whether each Mahalanobis distance value is greater than a preset Mahalanobis distance threshold;
when the Mahalanobis distance value is greater than the preset threshold, taking the corresponding feature data as distinguishing features of the gun body solid model diagram, updating the distinguishing features periodically, and generating the comparison template from the distinguishing features;
and when the Mahalanobis distance value is not greater than the preset threshold, taking the corresponding feature data as common features of the gun body solid model diagram and discarding them.
Further, in the method, introducing a Siamese network tracking algorithm and performing model tracking on the gun body solid model diagrams against the comparison template through the Siamese network tracking algorithm to obtain the sight position information at the current moment specifically comprises:
acquiring the current multi-view gun body image data, introducing a Siamese network tracking algorithm, taking the comparison template and the gun body solid model diagram as inputs, and extracting the features of the comparison template and of the current multi-view gun body image data through a feature extraction module with shared parameters;
feeding the extracted features to a co-attention module and outputting the enhanced features through the co-attention module and feature enhancement, thereby obtaining the internal relation between the features in the comparison template and the features in the gun body solid model diagram;
obtaining the identity verification information of each gun body from that internal relation, tracking each gun body solid model according to its identity verification information, and generating the position information of each gun body solid model at the current moment;
and obtaining the positional relation between each gun body solid model and its sight, and generating the sight position information at the current moment from the position information of the gun body solid model at the current moment and the positional relation.
Further, in the method, generating the relevant interaction information according to the sight position information at the current moment specifically comprises:
arranging at least two position detection sensors on the sight of the physical gun body used in the shooting game, acquiring at least two pieces of position information of the sight through the position detection sensors, and determining the direction information of the sight in the current space from the at least two pieces of position information;
generating the direction vector information of the sight from the direction information of the sight in the current space and the sight position information at the current moment, and acquiring the game image of the current shooting game display;
and generating the relevant interaction information from the direction vector information of the sight, and displaying the sight on the game image of the current shooting game display according to the relevant interaction information.
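The direction-vector step above can be sketched numerically. The following Python/NumPy fragment is an illustrative sketch only: the rear/front sensor naming, the flat display plane model at z = screen_z, and all function names are assumptions, not taken from the patent.

```python
import numpy as np

def sight_direction(p_rear, p_front):
    """Unit direction vector of the sight line from two sensor positions.

    Two position detection sensors mounted along the physical sight (rear
    and front, names assumed) are enough to fix the pointing direction.
    """
    v = np.asarray(p_front, float) - np.asarray(p_rear, float)
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("sensor positions coincide; direction undefined")
    return v / n

def screen_intersection(p_rear, direction, screen_z):
    """Point where the sight ray meets an assumed display plane z = screen_z,
    i.e. where the crosshair would be drawn on the game image."""
    p = np.asarray(p_rear, float)
    s = (screen_z - p[2]) / direction[2]  # ray parameter at the plane
    return p + s * direction

# usage: a gun pointing straight along +z hits the screen directly ahead
d = sight_direction([0.0, 0.0, 0.0], [0.0, 0.0, 0.5])
hit = screen_intersection([0.0, 0.0, 0.0], d, 3.0)
assert np.allclose(d, [0.0, 0.0, 1.0])
assert np.allclose(hit, [0.0, 0.0, 3.0])
```

A real deployment would replace the plane model with the actual display geometry and feed `hit` into the interaction information.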
The invention further provides a shooting game sight correction system based on the Internet of Things, comprising a memory and a processor, wherein the memory stores a program of the shooting game sight correction method based on the Internet of Things, and when the program is executed by the processor, the following steps are implemented:
installing multi-view cameras at the shooting game venue, acquiring the multi-view image data of the gun body with which the shooting game is to interact through the multi-view cameras, and processing the multi-view gun body image data to generate gun body solid model diagrams;
performing feature extraction on the gun body solid model diagrams, obtaining the feature data in each gun body solid model diagram, and generating a comparison template from the feature data in the gun body solid model diagrams;
introducing a Siamese network tracking algorithm, and performing model tracking on each gun body solid model diagram against the comparison template through the Siamese network tracking algorithm to obtain the sight position information at the current moment;
and generating the relevant interaction information according to the sight position information at the current moment.
Further, in the system, processing the multi-view gun body image data to generate the gun body solid model diagram specifically comprises:
filtering and denoising the multi-view gun body image data to obtain preprocessed multi-view gun body image data, introducing a singular value decomposition algorithm, and performing redundancy processing on the preprocessed multi-view gun body image data through the singular value decomposition algorithm to generate redundancy-processed images;
selecting one of the redundancy-processed images as the registration image, generating source point cloud data from the registration image, introducing an ICP algorithm based on the source point cloud data, and generating the point cloud data to be registered from the remaining redundancy-processed images;
calculating the overlapping region and the non-overlapping region between the point cloud data to be registered and the source point cloud data according to the ICP algorithm, and introducing a robust kernel function to detect outliers in the overlapping and non-overlapping regions and generate an error transformation matrix;
and reconstructing the three-dimensional model from the overlapping and non-overlapping regions to generate an initial solid model diagram, and correcting the initial solid model diagram based on the error transformation matrix to generate the gun body solid model diagram.
Further, in the system, introducing a singular value decomposition algorithm and performing redundancy processing on the preprocessed multi-view gun body image data through the singular value decomposition algorithm to generate the redundancy-processed images specifically comprises:
describing the target point cloud of the preprocessed multi-view gun body image data to generate the associated covariance matrix, and introducing a singular value decomposition algorithm;
performing eigendecomposition on the covariance matrix according to the singular value decomposition algorithm to generate an orthogonal matrix, whose columns are the eigenvectors, and a diagonal matrix, and selecting one eigenvector in the orthogonal matrix as the reference;
constructing a new coordinate system from the reference, describing the target point cloud by the orthogonal matrix of column eigenvectors and the diagonal matrix, and generating the optimized bounding box by computing the bounding box of the target point cloud in the new coordinate system;
and obtaining the eight vertex coordinates of the optimized bounding box in the new coordinate system, remapping them into the world coordinate system through a coordinate transformation, and generating the redundancy-processed image.
Further, in the system, introducing a Siamese network tracking algorithm and performing model tracking on the gun body solid model diagrams against the comparison template through the Siamese network tracking algorithm to obtain the sight position information at the current moment specifically comprises:
acquiring the current multi-view gun body image data, introducing a Siamese network tracking algorithm, taking the comparison template and the gun body solid model diagram as inputs, and extracting the features of the comparison template and of the current multi-view gun body image data through a feature extraction module with shared parameters;
feeding the extracted features to a co-attention module and outputting the enhanced features through the co-attention module and feature enhancement, thereby obtaining the internal relation between the features in the comparison template and the features in the gun body solid model diagram;
obtaining the identity verification information of each gun body from that internal relation, tracking each gun body solid model according to its identity verification information, and generating the position information of each gun body solid model at the current moment;
and obtaining the positional relation between each gun body solid model and its sight, and generating the sight position information at the current moment from the position information of the gun body solid model at the current moment and the positional relation.
The invention overcomes the defects of the background art and has the following beneficial effects:
According to the invention, multi-view cameras are installed at the shooting game venue and used to acquire the multi-view image data of the gun body with which the shooting game is to interact; the multi-view gun body image data is processed to generate gun body solid model diagrams; feature extraction is then performed on the gun body solid model diagrams to obtain the feature data in each diagram, and a comparison template is generated from that feature data; a Siamese network tracking algorithm is introduced to track each gun body solid model diagram against the comparison template and obtain the sight position information at the current moment; and finally the relevant interaction information is generated from the sight position information at the current moment. The method introduces a singular value decomposition algorithm to reconstruct the bounding box of the multi-view gun body image and removes the redundant bounding box range, which reduces the computational complexity of the gun body solid model during interaction and improves the operational robustness of the interaction process. Second, when the gun body model is tracked, using each gun body's distinguishing features as the recognition features greatly increases the recognition speed of the tracking process, so that the sight position of the corresponding gun body can be tracked and located quickly, improving the user's visual game experience.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 shows an overall method flow diagram of a shooting game sight correction method based on the Internet of things;
FIG. 2 shows a first method flow diagram of a shooting game sight correction method based on the Internet of things;
FIG. 3 shows a second method flow diagram of a shooting game sight correction method based on the Internet of things;
fig. 4 shows a system block diagram of a shooting game sight correction system based on the internet of things.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, the invention may be practiced in other ways than those described herein, and the scope of the invention is therefore not limited to the specific embodiments disclosed below.
As shown in fig. 1, the first aspect of the present invention provides a shooting game sight correction method based on the internet of things, comprising the following steps:
S102, acquiring the multi-view image data of the gun body with which the shooting game is to interact through multi-view cameras installed at the shooting game venue, and processing the multi-view gun body image data to generate gun body solid model diagrams;
S104, performing feature extraction on the gun body solid model diagrams, obtaining the feature data in each gun body solid model diagram, and generating a comparison template from the feature data in the gun body solid model diagrams;
S106, introducing a Siamese network tracking algorithm, and performing model tracking on each gun body solid model diagram against the comparison template through the Siamese network tracking algorithm to obtain the sight position information at the current moment;
S108, generating the relevant interaction information according to the sight position information at the current moment.
In the invention, bounding box reconstruction is performed on the multi-view gun body image by introducing the singular value decomposition algorithm, and the redundant bounding box range is removed, which reduces the computational complexity of the gun body solid model during interaction and improves the operational robustness of the interaction process. Second, when the gun body model is tracked, using each gun body's distinguishing features as the recognition features greatly increases the recognition speed of the tracking process, so that the sight position of the corresponding gun body can be tracked and located quickly, improving the user's visual game experience.
As shown in fig. 2, in the method, processing the multi-view gun body image data to generate the gun body solid model diagram specifically comprises:
S202, filtering and denoising the multi-view gun body image data to obtain preprocessed multi-view gun body image data, introducing a singular value decomposition algorithm, and performing redundancy processing on the preprocessed multi-view gun body image data through the singular value decomposition algorithm to generate redundancy-processed images;
S204, selecting one of the redundancy-processed images as the registration image, generating source point cloud data from the registration image, introducing an ICP algorithm based on the source point cloud data, and generating the point cloud data to be registered from the remaining redundancy-processed images;
S206, calculating the overlapping region and the non-overlapping region between the point cloud data to be registered and the source point cloud data according to the ICP algorithm, and introducing a robust kernel function to detect outliers in the overlapping and non-overlapping regions and generate an error transformation matrix;
S208, reconstructing the three-dimensional model from the overlapping and non-overlapping regions to generate an initial solid model diagram, and correcting the initial solid model diagram based on the error transformation matrix to generate the gun body solid model diagram.
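The alignment computed inside each ICP iteration of S204 and S206 can be sketched as follows. This Python/NumPy fragment is an illustrative sketch, not the patent's implementation: it shows the closed-form SVD (Kabsch) solution for the rigid transform between already-corresponded point sets, which ICP re-solves after each correspondence update.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i.

    This is the SVD-based alignment step performed in each ICP iteration
    once point correspondences have been chosen.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - src_c, dst - dst_c          # centre both clouds
    U, _, Vt = np.linalg.svd(A.T @ B)        # cross-covariance decomposition
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# usage: recover a known rotation about the z-axis plus a translation
rng = np.random.default_rng(0)
src = rng.standard_normal((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R_est, t_est = best_rigid_transform(src, dst)
assert np.allclose(R_est, R_true, atol=1e-6)
assert np.allclose(t_est, t_true, atol=1e-6)
```

Full ICP alternates this step with a nearest-neighbour correspondence search; the robust kernel discussed below would reweight the residuals before this solve.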
It should be noted that, during an actual virtual game experience, the three-dimensional model of the gun body is built through the multi-view technique, and the ICP algorithm is the most typical registration algorithm for reconstructing the three-dimensional object in the field of view. ICP, however, does not account for non-overlapping regions between the two point clouds to be registered: when the point clouds overlap only partially, the algorithm tends to generate incorrect matching points, called outliers. Outliers produce large error terms, and an algorithm minimizing the objective function preferentially reduces these error terms, resulting in an erroneous transformation matrix. In the invention, outlier detection is performed on the overlapping and non-overlapping regions by introducing a robust kernel function, which satisfies the following relation (given as an image in the original and not reproduced here):
where P is the value computed by the robust kernel function; by the relation for P, the function is continuously differentiable and monotonically decreasing on (0, +∞), which satisfies the requirements of a robust kernel; a is a constant, taken as 1; and r_i is the error term of the i-th matching point in the registration process.
When the computed value is greater than a preset value, the matching point's error term is an outlier. Evaluating the error terms of the matching points in this way allows the gun body solid model diagram to be corrected and improves the accuracy of the model reconstruction.
When r_i is large, P decreases, so outliers influence the optimization result far less than under a plain two-norm measure. The error transformation matrix is generated from the detected outliers, the gun body solid model diagram is corrected accordingly, the accuracy of multi-view three-dimensional reconstruction and of the sight correction is improved, and a better visual experience is provided to the user.
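Because the kernel's exact formula is not reproduced in this text, the following sketch uses a Geman-McClure-style weight as an assumed stand-in. It satisfies the stated properties (continuously differentiable, monotonically decreasing on (0, +∞), constant a = 1) and illustrates how large residuals are down-weighted.

```python
import numpy as np

def robust_weight(r, a=1.0):
    """Example robust kernel weight: a / (a + r^2).

    Assumed stand-in for the patent's unreproduced formula. It is
    continuously differentiable and monotonically decreasing on (0, +inf),
    with the constant a set to 1, so large residuals r_i (likely outlier
    matches) contribute little to the registration objective.
    """
    r = np.asarray(r, float)
    return a / (a + r ** 2)

# usage: weights shrink rapidly as the matching error grows
residuals = np.array([0.1, 0.5, 1.0, 5.0, 50.0])
weights = robust_weight(residuals)
assert np.all(np.diff(weights) < 0)   # monotonically decreasing in |r|
assert weights[-1] < 0.001            # an outlier is heavily down-weighted
```

In an iteratively reweighted ICP, each correspondence's residual would be multiplied by such a weight before solving for the transformation, so that outliers no longer dominate the error transformation matrix.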
Further, in the method, introducing a singular value decomposition algorithm and performing redundancy processing on the preprocessed multi-view gun body image data through the singular value decomposition algorithm to generate the redundancy-processed images specifically comprises:
describing the target point cloud of the preprocessed multi-view gun body image data to generate the associated covariance matrix, and introducing a singular value decomposition algorithm;
performing eigendecomposition on the covariance matrix according to the singular value decomposition algorithm to generate an orthogonal matrix, whose columns are the eigenvectors, and a diagonal matrix, and selecting one eigenvector in the orthogonal matrix as the reference;
constructing a new coordinate system from the reference, describing the target point cloud by the orthogonal matrix of column eigenvectors and the diagonal matrix, and generating the optimized bounding box by computing the bounding box of the target point cloud in the new coordinate system;
and obtaining the eight vertex coordinates of the optimized bounding box in the new coordinate system, remapping them into the world coordinate system through a coordinate transformation, and generating the redundancy-processed image.
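The bounding-box optimization above can be sketched as follows. In this illustrative Python/NumPy fragment (function names and test data are assumptions, not part of the patent), the point-cloud covariance is decomposed with SVD, the points are expressed in the eigenvector frame, an axis-aligned box is computed there, and its eight vertices are mapped back to world coordinates.

```python
import numpy as np

def oriented_bounding_box(points):
    """Fit an oriented bounding box via SVD of the point covariance.

    The eigenvectors of the covariance matrix define a new coordinate frame
    aligned with the cloud's principal directions; an axis-aligned box in
    that frame is tighter than one in world coordinates, which is the
    redundancy removal described above.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    U, _, _ = np.linalg.svd(np.cov(centered.T))  # columns of U = eigenvectors
    local = centered @ U                          # points in the new frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    # eight corners of the box in the local frame
    corners_local = np.array([[x, y, z] for x in (lo[0], hi[0])
                                        for y in (lo[1], hi[1])
                                        for z in (lo[2], hi[2])])
    # remap the corners back into the world coordinate system
    return corners_local @ U.T + centroid

# usage: a thin slab rotated 45 degrees in the xy-plane; the oriented box
# hugs the slab instead of the large axis-aligned box the rotation induces
rng = np.random.default_rng(1)
pts = rng.standard_normal((200, 3)) * np.array([5.0, 1.0, 0.1])
angle = np.pi / 4
rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                [np.sin(angle),  np.cos(angle), 0.0],
                [0.0, 0.0, 1.0]])
pts = pts @ rot.T
corners = oriented_bounding_box(pts)
assert corners.shape == (8, 3)
```

The longest box edge recovers the slab's dominant direction, while the shortest stays near the slab's 0.1-unit thickness rather than growing with the rotation.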
In the shooting game, a model of the gun body is constructed through the multi-view technique, so that the gun body is mapped into the virtual space.
Further, in the method, performing feature extraction on the gun body solid model diagrams, obtaining the feature data in each gun body solid model diagram, and generating a comparison template from the feature data specifically comprises:
introducing a feature pyramid, extracting the regional features of each gun body solid model through the feature pyramid, obtaining the feature data in each gun body solid model diagram, and introducing the Mahalanobis distance metric;
calculating the Mahalanobis distance values between the feature data of the gun body solid model diagrams according to the Mahalanobis distance metric, and judging whether each Mahalanobis distance value is greater than a preset Mahalanobis distance threshold;
when the Mahalanobis distance value is greater than the preset threshold, taking the corresponding feature data as distinguishing features of the gun body solid model diagram, updating the distinguishing features periodically, and generating the comparison template from the distinguishing features;
and when the Mahalanobis distance value is not greater than the preset threshold, taking the corresponding feature data as common features of the gun body solid model diagram and discarding them.
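A minimal sketch of the Mahalanobis-distance screening follows, using NumPy; the threshold value of 3.0 and all names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of feature vector x from a feature distribution.

    Unlike Euclidean distance, it accounts for the correlation and scale of
    the feature dimensions via the inverse covariance matrix.
    """
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# usage: features far from the common pool (large distance) are kept as
# distinguishing features; features near it are common and discarded
rng = np.random.default_rng(2)
common_pool = rng.standard_normal((500, 4))   # ordinary feature vectors
mean = common_pool.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(common_pool.T))
threshold = 3.0                               # preset threshold (assumed)

ordinary = mean.copy()                        # sits at the pool's centre
distinct = mean + 10.0                        # far outside the pool
assert mahalanobis(ordinary, mean, cov_inv) <= threshold   # discarded
assert mahalanobis(distinct, mean, cov_inv) > threshold    # kept in template
```

Thresholding on this distance is what separates the distinguishing features (template material) from the common features that are discarded.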
It should be noted that, during data interaction in the shooting game, multiple users may be playing within the interaction area at the same time; the characters and gun bodies move, and each gun body may share certain features with the others while also carrying features of its own.
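The Mahalanobis-distance screening above can be sketched as follows; the feature vectors and the threshold value are illustrative assumptions, not values from the text:

```python
import numpy as np

def mahalanobis(x, y, cov_inv):
    """Mahalanobis distance between two feature vectors."""
    d = x - y
    return float(np.sqrt(d @ cov_inv @ d))

# illustrative feature data extracted from gun body model diagrams
feats = np.random.default_rng(1).normal(size=(50, 4))
cov_inv = np.linalg.inv(np.cov(feats, rowvar=False))

threshold = 2.5                      # preset threshold (assumed value)
d = mahalanobis(feats[0], feats[1], cov_inv)
# above the threshold -> identification feature, kept for the template;
# otherwise a common feature, which is removed
is_identifying = d > threshold
```

Unlike Euclidean distance, the Mahalanobis distance weights each feature dimension by the inverse covariance, so correlated or high-variance dimensions do not dominate the comparison.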
As shown in fig. 3, further, in the method, a twin network tracking algorithm is introduced, model tracking is performed on the gun entity model diagram according to a comparison template through the twin network tracking algorithm, and sight position information at the current moment is obtained, which specifically includes:
s302, acquiring the current multi-view gun body image data information, introducing a twin network tracking algorithm, taking the comparison template and the gun body entity model diagram as inputs, and extracting the features of the comparison template and of the current multi-view gun body image data information through a feature extraction module with shared parameters;
s304, feeding the extracted features to a cooperative attention module, and outputting the enhanced features through the cooperative attention module and feature enhancement, so as to obtain the internal relation between the features in the comparison template and the features in the gun body entity model diagram;
s306, acquiring identity verification information of each gun body according to internal relations between features in the comparison template and features in the gun body entity model diagram, tracking each gun body entity model according to the identity verification information of each gun body, and generating position information of the gun body entity model at the current moment;
s308, acquiring the position relation between each gun body solid model and the sight, and generating the sight position information at the current moment according to the position information of the gun body solid model at the current moment and the position relation.
It should be noted that the gun body is tracked by the twin network tracking algorithm so as to determine the sight position information at the current moment and allow the sight to be corrected. The identity verification information of each gun body can be recognized from the internal relation between the features in the comparison template and the features in the gun body entity model diagram, so that one or more sights are tracked more reliably and the tracking response speed is improved.
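A minimal sketch of the shared-parameter ("twin") comparison is given below. The linear encoder stands in for the feature-extraction module of the text, and the cosine similarity stands in for the cooperative-attention matching; both are deliberate simplifications of the full network:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(64, 16))        # encoder weights shared by both branches

def encode(x):
    """Both inputs pass through the same parameters W ("twin" branches)."""
    h = x @ W
    return h / (np.linalg.norm(h) + 1e-9)

template = rng.normal(size=64)            # comparison-template features
candidates = rng.normal(size=(5, 64))     # features from the current frame

scores = np.array([encode(c) @ encode(template) for c in candidates])
best = int(np.argmax(scores))             # candidate tracked as the gun body
```

The key property is that the template and the candidates are embedded by the same weights, so similarity in the embedded space reflects identity rather than artifacts of two separately trained feature extractors.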
Further, in the method, related interaction information is generated according to the sight position information at the current moment, and the method specifically comprises the following steps:
setting at least two position detection sensors at the sight position of the physical gun body of the shooting game, acquiring at least two pieces of position information of the sight through the position detection sensors, and determining the direction information of the sight in the current space according to the at least two pieces of position information;
generating direction vector information of the sight according to the direction information of the sight position in the current space and the sight position information of the current moment, and acquiring a game picture of a current shooting game display;
and generating relevant interaction information from the direction vector information of the sight, and displaying the sight on a game picture of a current shooting game display according to the relevant interaction information.
By the method, the related interaction information can be generated according to the direction vector information, the position information, and the display-screen size information of the sight, so that the interaction of the gun body solid model in the shooting game is completed and the user's shooting game experience is more realistic.
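The direction-vector step can be sketched as follows; the two sensor positions, the screen plane, and the screen size are hypothetical, and the display is assumed to lie in a plane of constant z:

```python
import numpy as np

def sight_direction(p_rear, p_front):
    """Unit direction vector of the sight from two sensor positions."""
    v = np.asarray(p_front, float) - np.asarray(p_rear, float)
    return v / np.linalg.norm(v)

def project_to_screen(sight_pos, direction, screen_z, screen_w, screen_h):
    """Intersect the sight ray with the screen plane z = screen_z and
    clamp the hit point to the display bounds."""
    t = (screen_z - sight_pos[2]) / direction[2]
    x = sight_pos[0] + t * direction[0]
    y = sight_pos[1] + t * direction[1]
    return np.clip([x, y], [0.0, 0.0], [screen_w, screen_h])

d = sight_direction([0.0, 0.0, 0.0], [0.0, 0.1, 0.3])   # two sensor readings
pt = project_to_screen(np.array([0.5, 0.5, 0.0]), d, 3.0, 1.92, 1.08)
```

Two sensor readings suffice because a line in space is fixed by two points; the displayed sight marker is then the intersection of that line with the screen plane.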
In addition, the invention can also comprise the following steps:
acquiring service condition data of the current indoor shooting game type, constructing a time stamp, and constructing service condition data of the indoor shooting game type based on a time sequence by combining the time stamp and the service condition data of the current indoor shooting game type;
constructing a current indoor shooting game type use change curve graph according to the use condition data of the indoor shooting game type based on the time sequence;
acquiring the use preference information in the current game room based on the use change curve graph of the shooting game type in the current game room;
and acquiring the combination information of the shooting game type in the current game room, and periodically adjusting the combination information of the shooting game type in the current game room according to the use preference information in the current game room.
It should be noted that, because shooting game types differ, users may prefer one or more of them; the method dynamically adjusts the combination of shooting game types in the current game room according to the usage preference within a preset time, so that the operation of the game room better meets user demand. For example, if there are X tables of shooting game type A and Y tables of type B in the room, the combination information is formed by those X type-A tables and Y type-B tables.
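A minimal sketch of the time-stamped usage series and the periodic table adjustment; the records, the six-table room size, and the proportional allocation rule are illustrative assumptions rather than details from the text:

```python
from collections import Counter
from datetime import datetime

# hypothetical time-stamped usage records for the game room
usage = [
    ("2023-09-01 10:05", "type_A"), ("2023-09-01 10:40", "type_B"),
    ("2023-09-01 11:15", "type_A"), ("2023-09-02 09:30", "type_A"),
    ("2023-09-02 13:00", "type_B"), ("2023-09-03 15:20", "type_A"),
]
series = sorted((datetime.strptime(t, "%Y-%m-%d %H:%M"), g) for t, g in usage)

counts = Counter(g for _, g in series)              # usage per game type
total = sum(counts.values())
preference = {g: n / total for g, n in counts.items()}

# periodic adjustment: redistribute the room's 6 tables by preference
tables = {g: round(p * 6) for g, p in preference.items()}
```

Re-running this over each preset time window gives the "periodic adjustment" of the combination information described above.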
In addition, the method can further comprise the following steps:
acquiring service data information of current indoor shooting game equipment, and constructing service data information of shooting game equipment based on a time sequence according to the service data information of the current indoor shooting game equipment and a time stamp;
constructing a Bayesian network, and inputting the time-series-based service data information of the shooting game equipment into the Bayesian network to obtain a trained Bayesian network;
predicting with the trained Bayesian network to obtain the estimated fault time of each shooting game device, acquiring the average usage-time information of each shooting game device, and predicting from it the period during which the user is expected to stop playing;
and when the estimated fault time of a shooting game device does not fall within the predicted stop period, generating related recommendation information for that device and displaying the related recommendation information in a preset mode.
By the method, the experience of the user in using the shooting game device can be improved.
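The fault-time check can be sketched as follows. A trained Bayesian network is replaced here by fixed example predictions; only the comparison logic from the steps above is shown:

```python
def needs_recommendation(predicted_fault_h, stop_window_h):
    """Flag a device whose estimated fault time does NOT fall inside the
    period in which the user is predicted to stop playing; such devices
    trigger the related recommendation information."""
    lo, hi = stop_window_h
    return not (lo <= predicted_fault_h <= hi)

# example predictions standing in for the Bayesian-network output
fault_at = {"device_1": 1.5, "device_2": 2.5}       # hours until fault
stop_window = (2.0, 3.0)                            # predicted stop period
flagged = [d for d, t in fault_at.items()
           if needs_recommendation(t, stop_window)]
```

A device expected to fail while still in use (device_1 above) is flagged, while one expected to fail only after the user has likely stopped (device_2) is not.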
As shown in fig. 4, the second aspect of the present invention provides a shooting game sight correction system 4 based on the Internet of Things. The system includes a memory 41 and a processor 42; the memory 41 stores a shooting game sight correction method program based on the Internet of Things, and when the program is executed by the processor 42, the following steps are implemented:
the method comprises the steps of installing a multi-view camera on a shooting game field, acquiring multi-view gun body image data information to be interacted with by the shooting game through the multi-view camera, and processing the multi-view gun body image data information to generate a gun body entity model diagram;
extracting features of the gun body solid model diagrams, acquiring feature data in each gun body solid model diagram, and generating a comparison template according to the feature data in the gun body solid model diagrams;
introducing a twin network tracking algorithm, and carrying out model tracking on each gun body entity model graph according to a comparison template by the twin network tracking algorithm to obtain sight position information at the current moment;
and generating relevant interaction information according to the sight position information at the current moment.
Further, in the system, the gun body entity model diagram is generated by processing the multi-view gun body image data information, and specifically comprises the following steps:
the method comprises the steps of filtering and denoising the multi-view gun body image data information to obtain preprocessed multi-view gun body image data information, introducing a singular value decomposition algorithm, and performing redundancy processing on the preprocessed multi-view gun body image data information through the singular value decomposition algorithm to generate the redundant processed image;
selecting one image in the images subjected to redundancy processing as a registration image, generating source point cloud data according to the registration image, introducing an ICP algorithm based on the source point cloud data, and generating point cloud data to be registered according to the images subjected to redundancy processing;
calculating an overlapping area and a non-overlapping area between point cloud data to be registered and source point cloud data according to an ICP algorithm, introducing a robust kernel function to detect abnormal points in the overlapping area and the non-overlapping area, and generating an error transformation matrix;
and reconstructing a three-dimensional model according to the overlapping area and the non-overlapping area to generate an initial solid model diagram, and correcting the initial solid model diagram based on the error transformation matrix to generate the gun body solid model diagram.
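The core alignment step of the ICP algorithm mentioned above, estimating the rigid transform between corresponded point sets via SVD (the Kabsch method), can be sketched as follows; the robust kernel and the outlier detection from the text are omitted, and the test rotation is synthetic:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """One ICP alignment step (Kabsch): least-squares rotation R and
    translation t mapping corresponded src points onto dst points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(3)
src = rng.normal(size=(100, 3))              # source point cloud
theta = 0.3                                  # known test rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = best_rigid_transform(src, dst)
err = float(np.linalg.norm(src @ R.T + t - dst))
```

A full ICP loop alternates this closed-form step with nearest-neighbour correspondence search until the residual stops decreasing.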
Further, in the system, a singular value decomposition algorithm is introduced, and redundancy processing is performed on the preprocessed multi-gun body image data information through the singular value decomposition algorithm, so as to generate a redundant processed image, which specifically comprises the following steps:
performing a target point cloud description on the preprocessed multi-view gun body image data information to generate the related covariance matrix, and introducing a singular value decomposition algorithm;
performing eigendecomposition on the related covariance matrix according to the singular value decomposition algorithm to generate an orthogonal matrix, whose columns are the eigenvectors, and a diagonal matrix, and selecting a random eigenvector in the orthogonal matrix as a reference;
constructing a new coordinate system according to the standard, describing the target point cloud according to an orthogonal matrix and a diagonal matrix formed by characteristic vectors in columns, and generating an optimized bounding box by calculating the bounding box of the target point cloud under the new coordinate system;
and acquiring the eight coordinate vertices corresponding to the optimized bounding box in the new coordinate system, remapping them into the world coordinate system through a coordinate transformation, and generating the redundant processed image.
Further, in the system, a twin network tracking algorithm is introduced, model tracking is carried out on the gun body entity model diagram according to a comparison template through the twin network tracking algorithm, and sight position information at the current moment is obtained, and the method specifically comprises the following steps:
acquiring the current multi-view gun body image data information, introducing a twin network tracking algorithm, taking the comparison template and the gun body entity model diagram as inputs, and extracting the features of the comparison template and of the current multi-view gun body image data information through a feature extraction module with shared parameters;
the extracted features are fed to a cooperative attention module, and the enhanced features are output through the cooperative attention module and feature enhancement, so that the internal relation between the features in the comparison template and the features in the gun body entity model diagram is obtained;
acquiring identity verification information of each gun body according to internal relations between features in the comparison template and features in the gun body solid model diagram, tracking each gun body solid model according to the identity verification information of each gun body, and generating position information of the gun body solid model at the current moment;
and acquiring the position relation between each gun body solid model and the sight, and generating the sight position information at the current moment according to the position information of the gun body solid model at the current moment and the position relation.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be carried out by hardware under the control of program instructions; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Alternatively, the above-described integrated units of the present invention may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods of the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.
The foregoing is merely an illustrative embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. The shooting game sight correction method based on the Internet of things is characterized by comprising the following steps of:
the method comprises the steps of installing a multi-view camera on a shooting game field, acquiring multi-view gun body image data information to be interacted with by the shooting game through the multi-view camera, and processing the multi-view gun body image data information to generate a gun body entity model diagram;
extracting features of the gun body solid model diagrams to obtain feature data in each gun body solid model diagram, and generating a comparison template according to the feature data in the gun body solid model diagrams;
introducing a twin network tracking algorithm, and carrying out model tracking on each gun body entity model diagram according to the comparison template by the twin network tracking algorithm to obtain sight position information at the current moment;
And generating related interaction information according to the sight position information at the current moment.
2. The shooting game sight correction method based on the internet of things according to claim 1, wherein the generating of the gun body solid model map by processing the multi-view gun body image data information specifically comprises:
the method comprises the steps of obtaining preprocessed multi-gun body image data information through filtering and denoising the multi-view gun body image data information, introducing a singular value decomposition algorithm, and performing redundancy processing on the preprocessed multi-gun body image data information through the singular value decomposition algorithm to generate a redundant image;
selecting one image in the redundant processed images as a registration image, generating source point cloud data according to the registration image, introducing an ICP algorithm based on the source point cloud data, and generating point cloud data to be registered according to the rest of the redundant processed images;
calculating an overlapping area and a non-overlapping area between point cloud data to be registered and the source point cloud data according to the ICP algorithm, introducing a robust kernel function to detect abnormal points in the overlapping area and the non-overlapping area, and generating an error transformation matrix;
And reconstructing a three-dimensional model according to the overlapping area and the non-overlapping area to generate an initial solid model diagram, and correcting the initial solid model diagram based on the error transformation matrix to generate a gun body solid model diagram.
3. The shooting game sight correction method based on the internet of things according to claim 2, wherein a singular value decomposition algorithm is introduced, redundancy processing is performed on the preprocessed multi-gun body image data information through the singular value decomposition algorithm, and a redundant processed image is generated, and specifically comprising:
performing target point cloud description on the preprocessed multi-gun body image data information to generate a related covariance matrix, and introducing a singular value decomposition algorithm;
performing feature decomposition on the related covariance matrix according to the singular value decomposition algorithm, generating an orthogonal matrix and a diagonal matrix, wherein the orthogonal matrix and the diagonal matrix are formed by feature vectors according to columns, and selecting a random feature vector in the orthogonal matrix as a reference;
constructing a new coordinate system according to the standard, describing the target point cloud according to an orthogonal matrix and a diagonal matrix formed by the feature vectors in columns, and generating an optimized bounding box by calculating the bounding box of the target point cloud under the new coordinate system;
And acquiring the eight coordinate vertices corresponding to the optimized bounding box in the new coordinate system, remapping them into the world coordinate system through a coordinate transformation, and generating the redundant processed image.
4. The shooting game sight correction method based on the internet of things according to claim 1, wherein the feature extraction is performed on the gun body solid model diagrams to obtain feature data in each gun body solid model diagram, and a comparison template is generated according to the feature data in the gun body solid model diagrams, and the method specifically comprises the following steps:
introducing a feature pyramid, extracting regional features of each gun body solid model through the feature pyramid to obtain the feature data in each gun body solid model diagram, and introducing a Mahalanobis distance measurement method;
calculating the Mahalanobis distance value between the feature data in each gun body solid model diagram according to the Mahalanobis distance measurement method, and judging whether the Mahalanobis distance value is larger than a preset Mahalanobis distance threshold;
when the Mahalanobis distance value is larger than the preset Mahalanobis distance threshold, taking the feature data corresponding to that value as identification features of the gun body solid model diagram, periodically updating the identification features, and generating a comparison template according to the identification features;
And when the Mahalanobis distance value is not larger than the preset Mahalanobis distance threshold, taking the corresponding feature data as common features of the gun body solid model diagram, and removing the common features.
5. The shooting game sight correction method based on the internet of things according to claim 1, wherein a twin network tracking algorithm is introduced, model tracking is performed on a gun body entity model diagram according to the comparison template through the twin network tracking algorithm, and sight position information at the current moment is obtained, and the method specifically comprises the following steps:
acquiring current multi-view gun body image data information, introducing a twin network tracking algorithm, taking the comparison template and a gun body entity model image as input, and respectively extracting the characteristics of the comparison template and the current multi-view gun body image data information by using shared parameters through a characteristic extraction module;
the extracted features are fed to a cooperative attention module, and the enhanced features are output through the cooperative attention module and feature enhancement, so that the internal relation between the features in the comparison template and the features in the gun body entity model diagram is obtained;
Acquiring identity verification information of each gun body according to the internal relation between the features in the comparison template and the features in the gun body solid model diagram, tracking each gun body solid model according to the identity verification information of each gun body, and generating position information of the gun body solid model at the current moment;
and acquiring the position relation between each gun body solid model and the sight, and generating the sight position information at the current moment according to the position information of the gun body solid model at the current moment and the position relation.
6. The shooting game sight correction method based on the internet of things according to claim 1, wherein the generating of the related interaction information according to the sight position information at the current time specifically comprises:
setting at least two position detection sensors in the sight position of the entity gun body of the shooting game, acquiring at least two position information of the sight position through the direction sensors, and determining the direction information of the sight position in the current space according to the at least two position information of the sight position;
generating direction vector information of the sight according to the direction information of the sight position in the current space and the sight position information of the current moment, and acquiring a game picture of a current shooting game display;
And generating relevant interaction information from the direction vector information of the sight, and displaying the sight on a game picture of a current shooting game display according to the relevant interaction information.
7. The shooting game sight correction system based on the Internet of things is characterized by comprising a memory and a processor, wherein the memory comprises a shooting game sight correction method program based on the Internet of things, and when the shooting game sight correction method program based on the Internet of things is executed by the processor, the following steps are realized:
the method comprises the steps of installing a multi-view camera on a shooting game field, acquiring multi-view gun body image data information to be interacted with by the shooting game through the multi-view camera, and processing the multi-view gun body image data information to generate a gun body entity model diagram;
extracting features of the gun body solid model diagrams to obtain feature data in each gun body solid model diagram, and generating a comparison template according to the feature data in the gun body solid model diagrams;
introducing a twin network tracking algorithm, and carrying out model tracking on each gun body entity model diagram according to the comparison template by the twin network tracking algorithm to obtain sight position information at the current moment;
And generating related interaction information according to the sight position information at the current moment.
8. The shooting game sight correction system based on the internet of things according to claim 7, wherein the generating of the gun body solid model map by processing the multi-view gun body image data information specifically comprises:
the method comprises the steps of obtaining preprocessed multi-gun body image data information through filtering and denoising the multi-view gun body image data information, introducing a singular value decomposition algorithm, and performing redundancy processing on the preprocessed multi-gun body image data information through the singular value decomposition algorithm to generate a redundant image;
selecting one image in the redundant processed images as a registration image, generating source point cloud data according to the registration image, introducing an ICP algorithm based on the source point cloud data, and generating point cloud data to be registered according to the rest of the redundant processed images;
calculating an overlapping area and a non-overlapping area between point cloud data to be registered and the source point cloud data according to the ICP algorithm, introducing a robust kernel function to detect abnormal points in the overlapping area and the non-overlapping area, and generating an error transformation matrix;
And reconstructing a three-dimensional model according to the overlapping area and the non-overlapping area to generate an initial solid model diagram, and correcting the initial solid model diagram based on the error transformation matrix to generate a gun body solid model diagram.
9. The shooting game sight correction system based on the internet of things according to claim 8, wherein a singular value decomposition algorithm is introduced, redundancy processing is performed on the preprocessed multi-gun body image data information through the singular value decomposition algorithm, and a redundant processed image is generated, and specifically comprising:
performing target point cloud description on the preprocessed multi-gun body image data information to generate a related covariance matrix, and introducing a singular value decomposition algorithm;
performing feature decomposition on the related covariance matrix according to the singular value decomposition algorithm, generating an orthogonal matrix and a diagonal matrix, wherein the orthogonal matrix and the diagonal matrix are formed by feature vectors according to columns, and selecting a random feature vector in the orthogonal matrix as a reference;
constructing a new coordinate system according to the standard, describing the target point cloud according to an orthogonal matrix and a diagonal matrix formed by the feature vectors in columns, and generating an optimized bounding box by calculating the bounding box of the target point cloud under the new coordinate system;
And acquiring the eight coordinate vertices corresponding to the optimized bounding box in the new coordinate system, remapping them into the world coordinate system through a coordinate transformation, and generating the redundant processed image.
10. The shooting game sight correction system based on the internet of things according to claim 7, wherein a twin network tracking algorithm is introduced, model tracking is performed on a gun body entity model diagram according to the comparison template through the twin network tracking algorithm, and sight position information at the current moment is obtained, and the shooting game sight correction system specifically comprises:
acquiring current multi-view gun body image data information, introducing a twin network tracking algorithm, taking the comparison template and a gun body entity model image as input, and respectively extracting the characteristics of the comparison template and the current multi-view gun body image data information by using shared parameters through a characteristic extraction module;
the extracted features are fed to a cooperative attention module, and the enhanced features are output through the cooperative attention module and feature enhancement, so that the internal relation between the features in the comparison template and the features in the gun body entity model diagram is obtained;
Acquiring identity verification information of each gun body according to the internal relation between the features in the comparison template and the features in the gun body solid model diagram, tracking each gun body solid model according to the identity verification information of each gun body, and generating position information of the gun body solid model at the current moment;
and acquiring the position relation between each gun body solid model and the sight, and generating the sight position information at the current moment according to the position information of the gun body solid model at the current moment and the position relation.
CN202311143607.0A 2023-09-06 2023-09-06 Shooting game sight correction method and system based on Internet of things Pending CN117237409A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311143607.0A CN117237409A (en) 2023-09-06 2023-09-06 Shooting game sight correction method and system based on Internet of things

Publications (1)

Publication Number Publication Date
CN117237409A true CN117237409A (en) 2023-12-15

Family

ID=89094018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311143607.0A Pending CN117237409A (en) 2023-09-06 2023-09-06 Shooting game sight correction method and system based on Internet of things

Country Status (1)

Country Link
CN (1) CN117237409A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104539928A (en) * 2015-01-05 2015-04-22 武汉大学 Three-dimensional printing image synthesizing method for optical grating
CN107481270A (en) * 2017-08-10 2017-12-15 上海体育学院 Table tennis target following and trajectory predictions method, apparatus, storage medium and computer equipment
CN112907735A (en) * 2021-03-10 2021-06-04 南京理工大学 Flexible cable identification and three-dimensional reconstruction method based on point cloud
CN113077674A (en) * 2021-03-12 2021-07-06 广东虚拟现实科技有限公司 Training method and device based on virtual training scene and storage medium
CN115690188A (en) * 2022-10-21 2023-02-03 武汉纺织大学 Human body three-dimensional measurement method based on point cloud model optimization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CUI Zhoujuan et al., "Lightweight Siamese Attention Network Target Tracking for UAVs", Acta Optica Sinica, vol. 40, no. 19, 31 October 2020 (2020-10-31), pages 1 - 13 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117476509A (en) * 2023-12-27 2024-01-30 联合富士半导体有限公司 Laser engraving device for semiconductor chip product and control method
CN117476509B (en) * 2023-12-27 2024-03-19 联合富士半导体有限公司 Laser engraving device for semiconductor chip product and control method

Similar Documents

Publication Publication Date Title
US11170210B2 (en) Gesture identification, control, and neural network training methods and apparatuses, and electronic devices
CN109859296B (en) Training method of SMPL parameter prediction model, server and storage medium
CN106716450B (en) Image-based feature detection using edge vectors
US11232286B2 (en) Method and apparatus for generating face rotation image
EP3644277B1 (en) Image processing system, image processing method, and program
CN106940704B (en) Positioning method and device based on grid map
CN107993216B (en) Image fusion method and equipment, storage medium and terminal thereof
US20210023449A1 (en) Game scene description method and apparatus, device, and storage medium
CN108629843B (en) Method and equipment for realizing augmented reality
CN112733794B (en) Method, device and equipment for correcting sight of face image and storage medium
WO2021175050A1 (en) Three-dimensional reconstruction method and three-dimensional reconstruction device
EP3016071B1 (en) Estimating device and estimation method
CN112733797B (en) Method, device and equipment for correcting sight of face image and storage medium
CN109684969B (en) Gaze position estimation method, computer device, and storage medium
CN112733795B (en) Method, device and equipment for correcting sight of face image and storage medium
WO2011075082A1 (en) Method and system for single view image 3 d face synthesis
CN109886223B (en) Face recognition method, bottom library input method and device and electronic equipment
CN113220251B (en) Object display method, device, electronic equipment and storage medium
CN117237409A (en) Shooting game sight correction method and system based on Internet of things
CN111815768B (en) Three-dimensional face reconstruction method and device
US20230100427A1 (en) Face image processing method, face image processing model training method, apparatus, device, storage medium, and program product
CN116097316A (en) Object recognition neural network for modeless central prediction
US20160110909A1 (en) Method and apparatus for creating texture map and method of creating database
CN110751026A (en) Video processing method and related device
CN113610969B (en) Three-dimensional human body model generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination