CN111524157B - Touch screen object analysis method and system based on camera array and storage medium - Google Patents

Touch screen object analysis method and system based on camera array and storage medium

Info

Publication number
CN111524157B
Authority
CN
China
Prior art keywords
touch screen
screen object
characteristic attribute
acquiring
attributes
Prior art date
Legal status
Active
Application number
CN202010341477.1A
Other languages
Chinese (zh)
Other versions
CN111524157A (en)
Inventor
陈龙
任伸
郝悍勇
赵永良
张朝阳
巢玉坚
邱玉祥
刘洋
饶涵宇
裘炜浩
王泉啸
蒋廷岳
李晓星
王奔
武玉峰
王澍
周夏
Current Assignee
State Grid Corp of China SGCC
State Grid Zhejiang Electric Power Co Ltd
NARI Group Corp
Nari Information and Communication Technology Co
Original Assignee
State Grid Corp of China SGCC
State Grid Zhejiang Electric Power Co Ltd
NARI Group Corp
Nari Information and Communication Technology Co
Priority date
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, State Grid Zhejiang Electric Power Co Ltd, NARI Group Corp and Nari Information and Communication Technology Co
Priority to CN202010341477.1A
Publication of CN111524157A
Application granted
Publication of CN111524157B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display, display composed of modules, e.g. video walls

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a camera-array-based touch screen object analysis method, system and storage medium. The method comprises the following steps: acquiring the characteristic attributes of a touch screen object; acquiring the acquisition time and the identification accuracy of the touch screen object according to the characteristic attributes; acquiring the identification efficiency of the touch screen object according to the acquisition time; and acquiring the optimal characteristic attribute from the characteristic attributes according to the identification efficiency, the identification accuracy and a threshold. By efficiently analyzing the identification efficiency and identification accuracy of the touch screen object, the method effectively improves the accuracy of obtaining the optimal characteristic attribute, and it further builds on efficient multi-screen splicing and center-point positioning.

Description

Touch screen object analysis method and system based on camera array and storage medium
Technical Field
The invention relates to the technical field of electronic interaction, in particular to a touch screen object analysis method and system based on a camera array and a storage medium.
Background
In order to strengthen the informatization construction and application of power enterprises, in line with the overall requirements of wider coverage, deeper integration, higher intelligence, stronger security, better interactivity and better visualization, the insight, analysis capability and visual display capability of power grid operation control and company operation management must be improved, and the information operation characteristics of power enterprises, namely higher reliability, wider service scope, faster response and better service quality, must be supported. This calls for a visual display platform that separates implementation from development and display from service, faces a variety of display environments, and is configurable and reusable. The existing touch screen object feature recognition and data analysis technology in the power industry is, however, still immature; large-screen visualization platforms of power enterprises commonly suffer from high cost and a low accuracy rate when screening touch screen object characteristic attributes, mainly in the following aspects:
First, the definition and management of touch screen object characteristic attributes are incomplete. A touch screen object has a great many characteristic attributes, yet most current visual display platforms of power enterprises use only a few of them, such as the coverage range of the object's bottom surface.
Second, identifying the characteristic attributes of touch screen objects is difficult, especially for objects that span multiple screens. Feature recognition and multi-screen splicing are inefficient, leading to insufficient recognition accuracy, low recognition efficiency and slow response.
Third, large-screen visual display platforms lack a self-learning capability: system performance cannot be automatically adjusted and optimized on the basis of big-data analysis of the identification efficiency and identification accuracy of touch screen objects.
Disclosure of Invention
In order to solve the above problems, the invention provides a camera-array-based touch screen object analysis method, system and storage medium, so as to address the insufficient efficiency and accuracy of identifying touch screen object characteristic attributes in the prior art.
In order to achieve the technical purpose, the invention is realized by the following technical scheme:
a method for touch screen object analysis based on a camera array, the method comprising:
acquiring characteristic attributes of a touch screen object;
acquiring the acquisition time and the identification accuracy of the touch screen object according to the characteristic attributes;
acquiring the identification efficiency of the touch screen object according to the acquisition time;
and acquiring the optimal characteristic attribute from the characteristic attributes according to the identification efficiency, the identification accuracy and the threshold.
Further, the characteristic attributes comprise a static characteristic attribute, a bottom surface characteristic attribute, a stereo characteristic attribute and a dynamic characteristic attribute; the bottom surface characteristic attribute comprises a single-screen bottom surface characteristic attribute and an integral bottom surface characteristic attribute.
Further, the method for acquiring the overall bottom surface characteristic attribute comprises the following steps:
and processing the single-screen bottom surface characteristic attribute through a multi-screen splicing algorithm to obtain the integral bottom surface characteristic attribute.
Further, the multi-screen splicing algorithm is as follows:
[Multi-screen splicing formula, reproduced only as an image in the original document (Figure GDA0003658651670000021)]
In the formula, N, S, E and W respectively denote the up, down, left and right directions of the current single screen i; K is the identifier of the current touch screen object; i identifies the Kth touch screen object on the ith screen; d is the number of touch points on the edge in the given direction; Q_i denotes the total area of the current multi-screen mosaic; Q_K denotes the total bottom-surface area of the touch screen object; A_i denotes the set of bottom-surface characteristic attributes of the current multi-screen mosaic; and A_iN, A_iS, A_iE and A_iW respectively denote the single-screen bottom-surface characteristic attribute sets in the four directions of the current touch screen object.
Further, the dynamic characteristic attribute includes a reference center point, a reference vertex, a motion speed, a rotation angular velocity, a motion direction, and a motion trajectory.
Further, the method for acquiring the dynamic characteristic attribute comprises the following steps:
acquiring a reference center point and a reference vertex of the touch screen object according to the bottom surface characteristic attribute;
and calculating to obtain the motion speed, the rotation angular speed, the motion direction and the motion track according to the coordinate variation of the reference central point and the reference vertex.
Further, the calculation method of the reference center point is as follows:
[Reference center point formula, reproduced only as an image in the original document (Figure GDA0003658651670000031)]
In the formula, X and Y respectively denote the horizontal and vertical coordinates of each current point; K is the identifier of the current touch screen object; n_K is the number of vertices of the current touch screen object's bottom surface; X_i and Y_i are the coordinates of the ith vertex; and X_K, Y_K are the horizontal and vertical coordinates of the reference center point of the Kth touch screen object.
Further, the reference vertex is a vertex closest to the main camera for identifying the bottom surface characteristic attribute of the touch screen object.
Further, the calculation method of the recognition efficiency includes:
[Identification efficiency formula, reproduced only as an image in the original document (Figure GDA0003658651670000032)]
In the formula, n denotes the number of records of touch screen object identification efficiency; i denotes the ith record; T_i denotes the touch screen identification time of the ith record; and B_k denotes the identification efficiency of touch screen object k.
Further, the calculation method of the identification accuracy rate includes:
[Identification accuracy formula, reproduced only as an image in the original document (Figure GDA0003658651670000041)]
In the formula, M denotes the number of records of touch screen object identification accuracy; j denotes the jth record; P_j denotes the touch screen identification accuracy of the jth record; and C_k denotes the identification accuracy of touch screen object k.
A touch screen object analysis system based on a camera array, the system comprising:
a first obtaining module: used for acquiring the characteristic attributes of a touch screen object;
a second obtaining module: used for acquiring the acquisition time and the identification accuracy of the touch screen object according to the characteristic attributes;
a third obtaining module: used for acquiring the identification efficiency of the touch screen object according to the acquisition time;
a fourth obtaining module: used for acquiring the preferred characteristic attribute from the characteristic attributes according to the identification efficiency, the identification accuracy and the threshold value.
A camera array based touch screen object analysis system, the system comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps according to the method described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method described above.
Compared with the prior art, the invention has the following beneficial effects:
according to the method, the optimal characteristic attribute is obtained through efficient analysis of the identification efficiency and the identification accuracy of the touch screen object, so that the accuracy of obtaining the optimal characteristic attribute is effectively improved; based on many high-efficient concatenations of screen and center point location: based on a multi-screen splicing algorithm, the method realizes that all characteristic attributes of an object spanning multiple screens on each screen are completely spliced, wherein the characteristic attributes comprise the overall shape and the overall area of a touch screen object, the position of the touch screen object on each screen and the like; after the overall attributes of the bottom surfaces of the touch screen objects spanning multiple screens are determined, the bottom surface center points can be obtained according to a center point positioning algorithm, the problem that the touch screen objects spanning multiple screens are inaccurate in identification is effectively solved, and a foundation is laid for defining the operation attributes and the operation tracks of the touch screen objects.
Drawings
FIG. 1 is an overall architecture diagram of a method for touch screen object analysis based on a camera array according to the present invention;
FIG. 2 is a model architecture diagram of a touch screen object feature attribute model of the present invention;
FIG. 3 is a system architecture diagram of the large screen visualization platform of the present invention;
FIG. 4 is a flow chart of an embodiment of the present invention.
Detailed Description
The present invention will be better understood and implemented by those skilled in the art by the following detailed description of the technical solution of the present invention with reference to the accompanying drawings and specific examples, which are not intended to limit the present invention.
As shown in fig. 4, a method for analyzing a touch screen object based on a camera array includes:
S01: acquiring the characteristic attributes of a touch screen object;
The overall architecture is shown in FIG. 1, and a touch screen object characteristic attribute model is constructed.
touch screen objects are classified into two types, one type is a specially-made touch screen object based on the technology of the internet of things sensor, and the other type is a human body (a finger, a palm or the like) or a human-like body which can be identified by a screen.
As shown in fig. 2, the feature attributes include a static feature attribute, a bottom surface feature attribute, a stereo feature attribute, and a dynamic feature attribute; the bottom surface characteristic attribute comprises a single-screen bottom surface characteristic attribute and an integral bottom surface characteristic attribute.
The static characteristic attributes are attributes such as the material, weight, bottom-surface shape and bottom-surface size of the touch screen object; they are embedded in the chip of a specially-made touch screen object and can be read by the system through Internet of Things sensor technology.
The single-screen bottom surface characteristic attribute refers to attributes of the single-screen bottom surface shape, the single-screen bottom surface area, the bottom surface position and the like of the touch screen object, and the bottom surface shape and the area of the touch screen object on the screen and the position of the touch screen object on the screen are identified based on the camera array technology.
The whole bottom surface characteristic attribute refers to attributes such as the whole bottom surface shape, the whole bottom surface area and the whole bottom surface position of the touch screen object, and the whole bottom surface characteristic attribute and the static characteristic attribute can be compared by the large-screen visual platform of the power enterprise to confirm the identification accuracy of the system.
The three-dimensional characteristic attributes mainly cover the height of the touch screen object, and these attributes along the height dimension are identified with radar scanning. The system only needs to identify touch screen objects with a height of 0-10 cm: if the height of an object can be identified, it is a specially-made touch screen object; conversely, if its height cannot be identified, it is a human body or human-like body.
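As an illustration only (the patent gives no source code), the height-based classification rule above can be sketched as follows; the function and parameter names such as `radar_height_cm` are assumptions, not part of the original disclosure.

```python
def classify_touch_object(radar_height_cm):
    """Classify a touch screen object from its radar-measured height.

    Per the embodiment above: only heights in the 0-10 cm range are
    identifiable; an identifiable height implies a specially-made
    (IoT-sensor) object, otherwise the object is taken to be a human
    body or human-like body.
    """
    if radar_height_cm is not None and 0 < radar_height_cm <= 10:
        return "specially-made touch screen object"
    return "human body or human-like body"
```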
The dynamic characteristic attributes refer to attributes such as a center point, a motion speed, a motion direction and a motion track of the touch screen object.
The static characteristic attributes of a touch screen object are identified with Internet of Things sensing technology; the single-screen bottom-surface characteristic attributes with the camera array; the overall bottom-surface characteristic attributes with the multi-screen splicing algorithm; the three-dimensional characteristic attributes with radar scanning; and the dynamic characteristic attributes with the touch screen object motion algorithm. The overall bottom-surface characteristic attribute is obtained from the single-screen bottom-surface characteristic attributes by the multi-screen splicing algorithm, which relies on the single-screen bottom-surface characteristic attribute that a multi-screen-spanning object leaves on each screen and splices them into the overall bottom-surface characteristic attribute: all characteristic attributes that the object presents on each screen are completely spliced, including the overall shape and overall area of the touch screen object and its position on each screen. The multi-screen splicing algorithm is:
[Multi-screen splicing formula, reproduced only as an image in the original document (Figure GDA0003658651670000071)]
In the formula, N, S, E and W respectively denote the up, down, left and right directions of the current single screen i; K is the identifier of the current touch screen object; i identifies the Kth touch screen object on the ith screen; d is the number of touch points on the edge in the given direction; Q_i denotes the total area of the current multi-screen mosaic; Q_K denotes the total bottom-surface area of the touch screen object; A_i denotes the set of bottom-surface characteristic attributes of the current multi-screen mosaic; and A_iN, A_iS, A_iE and A_iW respectively denote the single-screen bottom-surface characteristic attribute sets in the four directions of the current touch screen object.
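Since the splicing formula itself is reproduced only as an image, the following is merely a minimal sketch of the splicing idea, not the patented expression; the `ScreenFragment` structure and its field names are assumptions introduced for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ScreenFragment:
    screen_id: int                                     # i: the single screen this fragment lies on
    area: float                                        # single-screen bottom-surface area
    points: list = field(default_factory=list)         # bottom-surface touch points (x, y) on this screen
    edge_points: dict = field(default_factory=dict)    # direction 'N'/'S'/'E'/'W' -> touch points on that edge

def splice_whole_bottom_surface(fragments):
    """Merge the single-screen fragments of one touch screen object K into a
    whole bottom-surface attribute: an overall area (Q_K taken here as the sum
    of the per-screen areas) plus the union of the per-screen attribute sets."""
    total_area = sum(f.area for f in fragments)
    merged_points = [p for f in fragments for p in f.points]
    edge_counts = {d: sum(len(f.edge_points.get(d, [])) for f in fragments)
                   for d in ("N", "S", "E", "W")}       # d: touch points per edge direction
    return {"area": total_area, "points": merged_points, "edge_point_counts": edge_counts}
```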
The dynamic characteristic attributes comprise a reference central point, a reference vertex, a motion speed, a rotation angular velocity, a motion direction and a motion track.
The touch screen object motion algorithm works as follows. First, the position of the center point of the object's whole bottom surface is calculated from the overall bottom-surface characteristic attributes. At present the bottom surface is mainly a circle, a triangle, a square, a rectangle or a regular polygon, so the circumcenter determined from several vertices is taken as the center point; in the few cases where the circumcenter cannot be computed, the midpoint between two fixed vertices is used as the center point instead. Then, from the movement of this center point, the motion speed, motion direction and motion trajectory of the touch screen object are calculated.
The method for acquiring the dynamic characteristic attribute comprises the following steps:
acquiring a reference center point and a reference vertex of the touch screen object according to the bottom surface characteristic attribute;
and calculating the motion speed, the rotation angular speed, the motion direction and the motion trajectory from the coordinate variation of the reference center point and the reference vertex.
The calculation method of the reference center point is as follows:
[Reference center point formula, reproduced only as an image in the original document (Figure GDA0003658651670000081)]
In the formula, X and Y respectively denote the horizontal and vertical coordinates of each current point; K is the identifier of the current touch screen object; n_K is the number of vertices of the current touch screen object's bottom surface; X_i and Y_i are the coordinates of the ith vertex; and X_K, Y_K are the coordinates of the reference center point of the Kth touch screen object.
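Because the formula appears only as an image, the sketch below follows one natural reading of the variable definitions (an average over the n_K bottom-surface vertices); it is an assumption, not the authoritative expression, and the description above also mentions a circumcenter-based variant for regular shapes.

```python
def reference_center_point(vertices):
    """vertices: list of (X_i, Y_i) bottom-surface vertices of touch screen object K.
    Returns (X_K, Y_K), taken here as the mean of the vertex coordinates."""
    n_k = len(vertices)
    x_k = sum(x for x, _ in vertices) / n_k
    y_k = sum(y for _, y in vertices) / n_k
    return x_k, y_k
```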
The reference vertex is the vertex closest to the primary camera that identifies the bottom surface characteristic attribute of the touch screen object.
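A minimal sketch of deriving the dynamic characteristic attributes from two successive samples of the reference center point and reference vertex; the sampling interval `dt` and all names are assumptions introduced for illustration, since the patent only states that these quantities follow from the coordinate variation.

```python
import math

def dynamic_attributes(center_prev, center_curr, vertex_prev, vertex_curr, dt):
    """Return (motion speed, motion direction, rotation angular speed);
    the motion trajectory is simply the accumulated sequence of center points."""
    dx = center_curr[0] - center_prev[0]
    dy = center_curr[1] - center_prev[1]
    speed = math.hypot(dx, dy) / dt            # motion speed of the reference center point
    direction = math.atan2(dy, dx)             # motion direction, in radians

    # Rotation: change of the angle from the center point to the reference vertex.
    angle_prev = math.atan2(vertex_prev[1] - center_prev[1], vertex_prev[0] - center_prev[0])
    angle_curr = math.atan2(vertex_curr[1] - center_curr[1], vertex_curr[0] - center_curr[0])
    angular_speed = (angle_curr - angle_prev) / dt
    return speed, direction, angular_speed
```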
S02: acquiring the acquisition time and the identification accuracy of the touch screen object according to the characteristic attributes;
the calculation method of the identification accuracy comprises the following steps:
[Identification accuracy formula, reproduced only as an image in the original document (Figure GDA0003658651670000082)]
In the formula, M denotes the number of records of touch screen object identification accuracy; j denotes the jth record, and the larger j is, the closer the record is to the present and the larger its weight; P_j denotes the touch screen identification accuracy of the jth record; and C_k denotes the identification accuracy of touch screen object k. If the identification accuracy exceeds the threshold, the attribute is a preferred attribute.
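The accuracy formula is reproduced only as an image; the sketch below uses one plausible reading of the description (a recency-weighted average with weight j for the jth record) and should be read as an assumption rather than the patented expression.

```python
def identification_accuracy(p, threshold):
    """p: per-record accuracies P_1..P_M for touch screen object k,
    ordered oldest to newest (later records carry larger weight)."""
    m = len(p)
    c_k = sum(j * p_j for j, p_j in enumerate(p, start=1)) / sum(range(1, m + 1))
    return c_k, c_k > threshold   # True when the attribute counts as preferred
```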
S03: acquiring the identification efficiency of the touch screen object according to the acquisition time;
S04: acquiring the optimal characteristic attribute from the characteristic attributes according to the identification efficiency, the identification accuracy and the threshold.
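A sketch of step S04 under the assumption that larger efficiency and accuracy values are better and that both must clear their thresholds; the dictionary layout and all names are illustrative only.

```python
def select_preferred_attributes(attributes, efficiency_threshold, accuracy_threshold):
    """attributes: mapping of attribute name -> {"efficiency": B_k, "accuracy": C_k}."""
    return [name for name, stats in attributes.items()
            if stats["efficiency"] >= efficiency_threshold
            and stats["accuracy"] > accuracy_threshold]
```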
Based on a data analysis technology, effective analysis of the identification efficiency and the identification accuracy of the touch screen object is achieved, and a user is intelligently assisted to improve the system performance;
The data analysis of touch screen object identification efficiency and identification accuracy proceeds as follows. First, the identification efficiency of the touch screen object is determined from the identification time, and whether each characteristic attribute of the touch screen object is identified accurately is judged from user feedback. The large-screen visualization platform of a power enterprise has many application scenarios and many users; according to the relevant requirements, users must feed back whether a given identification result is accurate during the system testing and operation stages, and the performance of the system can then be analyzed from a large volume of such feedback.
Then, based on statistical analysis, the identification efficiency of touch screen objects is classified and ranked, determining which categories (for example the rectangle category, the wood category, or the 3 cm height category) and which individual objects within each category have low identification efficiency. The analyzed data serve as a basis for assisting the user's analysis and for improving system performance. During the testing stage and part of the actual operation stage, user feedback on whether the characteristic attributes of touch screen objects were identified accurately is tallied, and statistical analysis determines which categories of touch screen objects, and which individual objects within a category, have low identification accuracy. This analysis likewise serves as a basis for assisting the user in comprehensive analysis and in improving system performance.
The calculation method of the identification efficiency comprises the following steps:
[Identification efficiency formula, reproduced only as an image in the original document (Figure GDA0003658651670000091)]
In the formula, n denotes the number of records of touch screen object identification efficiency; i denotes the ith record, and the larger i is, the closer the record is to the present and the larger its weight; T_i denotes the touch screen identification time of the ith record; and B_k denotes the identification efficiency of touch screen object k. If the identification efficiency is within the threshold, the attribute is a preferred attribute.
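The efficiency formula likewise appears only as an image; this sketch takes one plausible reading (a recency-weighted mean identification time, with efficiency as its reciprocal) and is an assumption, not the patented expression.

```python
def identification_efficiency(t, time_threshold):
    """t: identification times T_1..T_n for touch screen object k,
    ordered oldest to newest (later records carry larger weight)."""
    n = len(t)
    weighted_mean_time = sum(i * t_i for i, t_i in enumerate(t, start=1)) / sum(range(1, n + 1))
    b_k = 1.0 / weighted_mean_time
    # "Within the threshold" is read here as: the weighted mean time does not exceed time_threshold.
    return b_k, weighted_mean_time <= time_threshold
```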
As shown in fig. 3, the invention also discloses a camera-array-based touch screen object feature recognition and data analysis system, which comprises a data layer, a service layer and a display layer.
Data layer: comprises the peripherals and service systems, supports data sources such as databases, text, XML and Excel, and realizes real-time data reading through the open data interface of the three-dimensional engine;
Service layer: comprises a three-dimensional visualization system and a touch screen system; a full three-dimensional engine is adopted, the loading of three-dimensional models and their data binding are handled by the engine to generate the display picture, and interaction with the touch screen system is implemented through the TUIO protocol;
Display layer: an interactive demonstration system rendered in real time by the three-dimensional engine, supporting the TUIO protocol and the Windows multi-point touch protocol.
Based on the statistical analysis of touch screen object identification efficiency and identification accuracy, the user can be assisted in analyzing the related data, and the performance of the power enterprise's large-screen visual display platform can thereby be improved continuously.
The method establishes a touch screen object characteristic attribute model covering five types of characteristic attributes (static, single-screen bottom-surface, overall bottom-surface, three-dimensional and dynamic) and combines it with the multi-screen splicing algorithm, the touch screen object identification efficiency algorithm and the characteristic attribute identification accuracy algorithm, so that touch screen object identification becomes more efficient and its accuracy keeps increasing. In addition, the related data are analyzed with big-data statistical analysis to assist the user. The invention addresses the core problems faced by the large-screen visualization platforms of power enterprises, is guided by the customer requirements of the new period, combines a variety of interaction techniques, improves the performance of the power enterprise's large-screen visualization platform, and provides comprehensive support for building a visualization system capable of large-screen display of business systems and interactive exhibition display.
The invention has the beneficial effects that:
firstly, establishing a characteristic attribute model of a touch screen object: the method designs a touch screen object characteristic attribute model comprising 5 types of characteristic attributes including a static characteristic attribute, a single-screen bottom surface characteristic attribute, an integral bottom surface characteristic attribute, a three-dimensional characteristic attribute and a dynamic characteristic attribute, realizes the comprehensive management of the characteristic attributes of the touch screen object, and can dynamically expand the model according to the needs.
Secondly, multi-screen efficient splicing and center point positioning: based on a multi-screen splicing algorithm, all characteristic attributes of an object spanning multiple screens on each screen are completely spliced, wherein the characteristic attributes comprise the overall shape and the overall area of the object touching the screen, the position of the object on each screen and the like. After the overall attributes of the bottom surfaces of the touch screen objects spanning multiple screens are determined, the center points of the bottom surfaces can be obtained according to a center point positioning algorithm, and a foundation is laid for defining the operation attributes and the operation tracks of the touch screen objects.
Thirdly, efficient analysis of touch screen object identification efficiency and identification accuracy: the system determines identification efficiency from the identification time, judges from user feedback whether each characteristic attribute of the touch screen object is identified accurately, and calculates the identification efficiency and identification accuracy of the touch screen object with the corresponding algorithms; the results are fed back into the system for data statistics and assist the user in improving system performance.
The method establishes a touch screen object characteristic attribute model covering five types of characteristic attributes (static, single-screen bottom-surface, overall bottom-surface, three-dimensional and dynamic) and can be used on the large-screen visualization platform of a power enterprise, so that touch screen object identification becomes more efficient and its accuracy keeps increasing. In addition, the related data are analyzed with big-data statistical analysis to assist the user in improving system performance. The invention addresses the core problems faced by the large-screen visualization platforms of power enterprises, is guided by the customer requirements of the new period, and combines a variety of interaction techniques to provide comprehensive support for building a visualization system capable of large-screen display of business systems and interactive exhibition display.
A touch screen object characteristic attribute model is constructed, and definition, integration, analysis and effective management of the touch screen object characteristic attributes are achieved. Furthermore, the data analysis of the touch screen object recognition efficiency and the recognition accuracy rate is realized based on a big data statistical analysis technology, the decision of a user is assisted, and the support is provided for the improvement of the system performance.
A touch screen object analysis system based on a camera array, the system comprising:
a first obtaining module: used for acquiring the characteristic attributes of a touch screen object;
a second obtaining module: used for acquiring the acquisition time and the identification accuracy of the touch screen object according to the characteristic attributes;
a third obtaining module: used for acquiring the identification efficiency of the touch screen object according to the acquisition time;
a fourth obtaining module: used for acquiring the preferred characteristic attribute from the characteristic attributes according to the identification efficiency, the identification accuracy and the threshold value.
A camera array based touch screen object analysis system, the system comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method described above.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The present invention is not limited to the above embodiments, and any modifications, equivalent replacements, improvements, etc. made within the spirit and principle of the present invention are included in the scope of the claims of the present invention which are filed as the application.

Claims (10)

1. A touch screen object analysis method based on a camera array is characterized by comprising the following steps:
acquiring characteristic attributes of a touch screen object;
acquiring the acquisition time and the identification accuracy of the touch screen object according to the characteristic attributes;
acquiring the identification efficiency of the touch screen object according to the acquisition time;
acquiring an optimal characteristic attribute from the characteristic attributes according to the identification efficiency, the identification accuracy and the threshold;
the characteristic attributes comprise a static characteristic attribute, a bottom surface characteristic attribute, a three-dimensional characteristic attribute and a dynamic characteristic attribute; the bottom surface characteristic attribute comprises a single-screen bottom surface characteristic attribute and an integral bottom surface characteristic attribute;
the method for acquiring the integral bottom surface characteristic attribute comprises the following steps:
processing the single-screen bottom surface characteristic attribute through a multi-screen splicing algorithm to obtain an integral bottom surface characteristic attribute;
the multi-screen splicing algorithm is as follows:
[Multi-screen splicing formula, reproduced only as an image in the original document (Figure FDA0003658651660000011)]
In the formula, N, S, E and W respectively denote the up, down, left and right directions of the current single screen i; K is the identifier of the current touch screen object; i identifies the Kth touch screen object on the ith screen; d is the number of touch points on the edge in the given direction; Q_i denotes the total area of the current multi-screen mosaic; Q_K denotes the total bottom-surface area of the touch screen object; A_i denotes the set of bottom-surface characteristic attributes of the current multi-screen mosaic; and A_iN, A_iS, A_iE and A_iW respectively denote the single-screen bottom-surface characteristic attribute sets in the four directions of the current touch screen object.
2. The camera array-based touch screen object analysis method according to claim 1, wherein the dynamic characteristic attributes comprise a reference center point, a reference vertex, a motion speed, a rotation angular speed, a motion direction and a motion trajectory.
3. The method for analyzing the touch screen object based on the camera array according to claim 2, wherein the method for acquiring the dynamic characteristic attribute comprises:
acquiring a reference center point and a reference vertex of the touch screen object according to the bottom surface characteristic attribute;
and calculating to obtain the motion speed, the rotation angular speed, the motion direction and the motion track according to the coordinate variation of the reference central point and the reference vertex.
4. The method for analyzing the touch screen object based on the camera array according to claim 3, wherein the calculation method of the reference center point is as follows:
[Reference center point formula, reproduced only as an image in the original document (Figure FDA0003658651660000021)]
In the formula, X and Y respectively denote the horizontal and vertical coordinates of each current point; K is the identifier of the current touch screen object; n_K is the number of vertices of the current touch screen object's bottom surface; X_i and Y_i are the coordinates of the ith vertex; and X_K, Y_K are the horizontal and vertical coordinates of the reference center point of the Kth touch screen object.
5. The touch screen object analysis method based on the camera array according to claim 3, wherein the reference vertex is a vertex closest to a primary camera that identifies a bottom surface characteristic attribute of the touch screen object.
6. The method for analyzing the touch screen object based on the camera array according to claim 1, wherein the calculating method of the recognition efficiency comprises:
[Identification efficiency formula, reproduced only as an image in the original document (Figure FDA0003658651660000022)]
In the formula, n denotes the number of records of touch screen object identification efficiency; i denotes the ith record; T_i denotes the touch screen identification time of the ith record; and B_k denotes the identification efficiency of touch screen object k.
7. The method for analyzing the touch screen object based on the camera array according to claim 1, wherein the calculating method of the recognition accuracy rate comprises:
[Identification accuracy formula, reproduced only as an image in the original document (Figure FDA0003658651660000031)]
In the formula, M denotes the number of records of touch screen object identification accuracy; j denotes the jth record; P_j denotes the touch screen identification accuracy of the jth record; and C_k denotes the identification accuracy of touch screen object k.
8. A touch screen object analysis system based on a camera array, the system comprising:
a first obtaining module: used for acquiring the characteristic attributes of a touch screen object;
a second obtaining module: used for acquiring the acquisition time and the identification accuracy of the touch screen object according to the characteristic attributes;
a third obtaining module: used for acquiring the identification efficiency of the touch screen object according to the acquisition time;
a fourth obtaining module: used for acquiring the optimal characteristic attribute from the characteristic attributes according to the identification efficiency, the identification accuracy and the threshold;
the characteristic attributes comprise a static characteristic attribute, a bottom surface characteristic attribute, a three-dimensional characteristic attribute and a dynamic characteristic attribute; the bottom surface characteristic attribute comprises a single-screen bottom surface characteristic attribute and an integral bottom surface characteristic attribute;
the method for acquiring the integral bottom surface characteristic attribute comprises the following steps:
processing the single-screen bottom surface characteristic attribute through a multi-screen splicing algorithm to obtain an integral bottom surface characteristic attribute;
the multi-screen splicing algorithm is as follows:
[Multi-screen splicing formula, reproduced only as an image in the original document (Figure FDA0003658651660000032)]
In the formula, N, S, E and W respectively denote the up, down, left and right directions of the current single screen i; K is the identifier of the current touch screen object; i identifies the Kth touch screen object on the ith screen; d is the number of touch points on the edge in the given direction; Q_i denotes the total area of the current multi-screen mosaic; Q_K denotes the total bottom-surface area of the touch screen object; A_i denotes the set of bottom-surface characteristic attributes of the current multi-screen mosaic; and A_iN, A_iS, A_iE and A_iW respectively denote the single-screen bottom-surface characteristic attribute sets in the four directions of the current touch screen object.
9. A camera array based touch screen object analysis system, the system comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of claims 1 to 7.
10. Computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202010341477.1A 2020-04-26 2020-04-26 Touch screen object analysis method and system based on camera array and storage medium Active CN111524157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010341477.1A CN111524157B (en) 2020-04-26 2020-04-26 Touch screen object analysis method and system based on camera array and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010341477.1A CN111524157B (en) 2020-04-26 2020-04-26 Touch screen object analysis method and system based on camera array and storage medium

Publications (2)

Publication Number Publication Date
CN111524157A CN111524157A (en) 2020-08-11
CN111524157B (en) 2022-07-01

Family

ID=71902891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010341477.1A Active CN111524157B (en) 2020-04-26 2020-04-26 Touch screen object analysis method and system based on camera array and storage medium

Country Status (1)

Country Link
CN (1) CN111524157B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509122A (en) * 2011-11-25 2012-06-20 广东威创视讯科技股份有限公司 Intelligent pen color identifying method applied to interactive touch screen
CN102792317A (en) * 2010-03-11 2012-11-21 高通股份有限公司 Image feature detection based on application of multiple feature detectors
CN106537305A (en) * 2014-07-11 2017-03-22 微软技术许可有限责任公司 Touch classification
CN107077235A (en) * 2014-09-30 2017-08-18 惠普发展公司,有限责任合伙企业 Determine that unintentional touch is refused
CN110647244A (en) * 2012-05-04 2020-01-03 三星电子株式会社 Terminal and method for controlling the same based on spatial interaction


Also Published As

Publication number Publication date
CN111524157A (en) 2020-08-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant