CN116737019B - Intelligent display screen induction identification control management system - Google Patents

Intelligent display screen induction identification control management system

Info

Publication number
CN116737019B
CN116737019B (application CN202311023108.8A)
Authority
CN
China
Prior art keywords
induction
sensing
finger
deviation
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311023108.8A
Other languages
Chinese (zh)
Other versions
CN116737019A (en)
Inventor
刘玉祥 (Liu Yuxiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Tech Information Technology Co ltd
Original Assignee
Shandong Tech Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Tech Information Technology Co ltd
Priority to CN202311023108.8A
Publication of CN116737019A
Application granted
Publication of CN116737019B
Legal status: Active
Anticipated expiration

Links

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The invention belongs to the technical field of display screen recognition control management and discloses an intelligent display screen induction recognition control management system comprising an induction process data extraction module, an induction environment data extraction module, an induction recognition correction analysis module, an induction recognition control analysis module, a recognition information base and an induction trigger replacement module. The invention sets an environment deviation correction factor by analyzing the recognition environment data, so that the interference of the sensing environment on the sensing recognition result is fully considered. At the same time, each sensing of the target display screen is divided into single-finger sensing and multi-finger sensing, and the sensing processes of single-finger sensing and multi-finger sensing are analyzed separately for control. This effectively overcomes the defect that control is currently performed only at the level of the user's gestures, ensures the reliability of display screen sensing recognition control, improves the flexibility of display screen gesture recognition management, and improves the user experience.

Description

Intelligent display screen induction identification control management system
Technical Field
The invention belongs to the technical field of display screen identification control management, and relates to an intelligent display screen induction identification control management system.
Background
With the rapid development of science and technology, intelligent display screens are widely applied in many fields, such as smart homes, commercial display, traffic guidance and advertising. To improve the usage efficiency and user experience of an intelligent display screen, its sensing recognition needs to be controlled and managed.
Existing display screen sensing recognition commonly takes four forms: touch sensing recognition, gesture sensing recognition, voice sensing recognition and eye sensing recognition, of which gesture sensing recognition is used most frequently. Its control management currently has the following defects: 1. Correction of recognition errors is limited. When gestures are corrected, the sensing environment is not analyzed carefully, so the feasibility and pertinence of locating sensing errors cannot be guaranteed, the error elimination effect is not obvious, the accuracy of subsequent gesture sensing recognition is limited, and the reference value of subsequent gesture sensing recognition analysis results cannot be improved.
2. Recognition of the user's intention is insufficiently accurate. User gestures are varied and personalized, but the recognition deviations of different types of gestures from different users are not analyzed in depth at present, so the suitability of the trigger settings is not guaranteed and the accuracy of gesture recognition is difficult to ensure.
3. Current recognition control is insufficiently reliable. Analysis is carried out only at the level of the user's gestures, without analyzing the interference of the sensing environment on the sensing recognition result, so the authenticity of the user sensing recognition analysis is low, the reliability of subsequent display screen sensing recognition control cannot be guaranteed, and the control effect of subsequent display screen sensing recognition cannot be improved. At the same time, the flexibility of gesture control cannot be improved and the user experience cannot be guaranteed.
Disclosure of Invention
In view of this, in order to solve the problems set forth in the background art, an intelligent display screen induction recognition control management system is proposed.
The aim of the invention can be achieved by the following technical scheme: the invention provides an intelligent display screen induction identification control management system, which comprises: the sensing process data extraction module is used for extracting the type and sensing process data of each sensing of the target display screen, wherein the type comprises single-finger sensing and multi-finger sensing.
And the sensing environment data extraction module is used to extract the images collected in the sensing area and the light brightness monitored each time the target display screen performs sensing.
The induction recognition correction analysis module is used to set the environment deviation correction factor corresponding to each sensing, where i denotes the sensing order number, i = 1, 2, ..., n.
The induction recognition control analysis module is used for dividing each induction into each single-finger induction and each multi-finger induction, analyzing induction process data of each single-finger induction and each multi-finger induction to obtain analysis results, and further confirming suitable triggering conditions of the single-finger induction and the multi-finger induction.
The sensing information base is used for storing the currently set single-finger sensing and multi-finger sensing triggering conditions of the target display screen, wherein the single-finger sensing triggering conditions are sensing gesture tracks and sensing hand operation duration intervals, and the multi-finger sensing triggering conditions are sensing triggering angle intervals.
And the induction trigger replacement module is used to replace the currently set trigger conditions with the suitable single-finger sensing trigger conditions and the suitable multi-finger sensing trigger conditions.
Preferably, the sensing process data is sensing process video and response data, wherein the response data comprises a gesture sensing time point, a gesture response time point and a gesture response display time point.
Preferably, setting the environment deviation correction factor corresponding to each sensing includes: counting the light interference degree corresponding to the target display screen in each sensing.
Extracting the number of people in each image acquired in the sensing area during each sensing of the target display screen, where j denotes the acquisition order number, j = 1, 2, ..., m.
Comparing each extracted number of people with the set sensing-interference people count, and counting, for each sensing, the number of acquisitions whose people count exceeds that value.
Extracting the distances between the position of the target sensing person and the positions of the non-target sensing persons from the acquired images, and taking the mean to obtain the average spacing between the target sensing person and the non-target sensing persons.
Counting the personnel interference degree corresponding to the target display screen in each sensing from the number of acquisitions exceeding the set people count, the total number of acquisitions, the area of the sensing area, the average spacing, and the set reference personnel density and suitable sensing spacing distance.
Calculating the environment deviation correction factor corresponding to each sensing from the light interference degree and the personnel interference degree together with the set allowable light interference degree and allowable personnel interference degree.
Preferably, counting the light interference degree corresponding to the target display screen in each sensing includes: comparing the light brightness monitored in the sensing area during each sensing of the target display screen with the set suitable sensing brightness interval, and counting the numbers of monitorings that lie within the suitable sensing brightness interval, below its lower limit and above its upper limit, respectively.
Extracting the maximum light brightness and the minimum light brightness from the monitored light brightness values.
Counting the light interference degree corresponding to the target display screen in each sensing from these monitoring counts, the total number of monitorings, and the set reference monitoring-count difference and the lower and upper limits of the suitable sensing brightness interval.
Preferably, analyzing the sensing process data of each single-finger sensing includes: extracting the gesture sensing time point, the gesture response time point and the gesture response display time point from the sensing process data of each single-finger sensing, indexed by the single-finger sensing order number.
Locating the ending time point of each hand operation from the sensing process video corresponding to each single-finger sensing, taking the hand operation with the shortest time interval to the gesture sensing time point as the target hand operation, and recording the ending time point of the target hand operation corresponding to each sensing.
Screening out the environment deviation correction factor corresponding to each single-finger sensing from the environment deviation correction factors corresponding to all sensings.
Counting the sensing deviation degree corresponding to each single-finger sensing from these time points and the correction factor, together with the set reference sensing deviation duration, response deviation duration and display deviation duration, where e is a natural constant.
Comparing the sensing deviation degree of each single-finger sensing with the set reference single-finger sensing deviation, and screening out the single-finger sensings whose deviation degree exceeds it, recorded as the deviation sensings.
Analyzing the gesture duration deviation degree and the gesture movement track deviation degree of each deviation sensing from the sensing process video of each deviation sensing, where f denotes the deviation sensing order number.
Taking the gesture duration deviation degree and the gesture movement track deviation degree as the analysis result of the sensing process data of each single-finger sensing.
Preferably, analyzing the gesture duration deviation degree of each deviation sensing includes: locating the hand operations before the gesture sensing time point from the sensing process video corresponding to each deviation sensing, recording them as the reference hand operations, and counting the number of reference hand operations.
Locating the duration of each reference hand operation from the sensing process video corresponding to each deviation sensing, comparing it with the currently set sensing hand operation duration interval for single-finger sensing, and confirming the number of deviation operations and the reference hand operation duration.
Locating the interval durations between the reference hand operations from the user sensing process video corresponding to each deviation sensing, and obtaining the average interval duration by mean calculation.
Counting the gesture duration deviation degree of each deviation sensing from these quantities, the duration of the sensing process video corresponding to the f-th deviation sensing, the lower limit of the sensing hand operation duration interval, and the set reference gesture generation frequency, rapid gesture generation proportion, reference sensing time deviation and suitable gesture generation interval duration.
Preferably, analyzing the gesture movement track deviation degree of each deviation sensing includes: locating the hand track of each reference hand operation from the sensing process video corresponding to each deviation sensing, comparing it by superposition with the currently set sensing gesture track to obtain the coincident track length of each reference hand operation, taking the difference with the set reference coincident track length, and counting the number of reference hand operations whose difference is greater than or equal to 0.
For each deviation sensing, calculating the mean of the coincident track lengths corresponding to the reference hand operations to obtain the average coincident track length.
Counting the gesture movement track deviation degree of each deviation sensing from these quantities, the set coincidence operation count ratio and coincident track length ratio, the length of the currently set sensing gesture track, and the number of reference hand operations at the f-th deviation sensing.
Preferably, analyzing the sensing process data of each multi-finger sensing includes: following a statistical method similar to that for the sensing deviation degree corresponding to each single-finger sensing, counting the sensing deviation degree corresponding to each multi-finger sensing, indexed by the multi-finger sensing order number.
Comparing it with the set reference multi-finger sensing deviation, screening out the multi-finger sensings whose deviation degree exceeds it, recording them as the target sensings, and extracting the gesture sensing time point of each target sensing.
Locating the hand operations before the gesture sensing time point from the sensing video of each target sensing, recording them as the focusing operations, and extracting the starting finger included angle and the ending finger included angle of each focusing operation.
Forming the finger included angle interval of each focusing operation from its starting and ending finger included angles, comparing it with the currently set sensing trigger angle interval, recording the focusing operations that lie within the sensing trigger angle interval as the qualified operations, filtering the qualified operations out of the focusing operations, and recording the remaining focusing operations as the unqualified operations.
Taking the finger included angle intervals corresponding to the unqualified operations of each target sensing as the analysis result of the sensing process data of each multi-finger sensing.
Preferably, confirming the suitable trigger condition for single-finger sensing includes: locating the deviation sensings whose gesture duration deviation degree is greater than 0, recording them as the time deviation sensings, and extracting the duration of each hand operation corresponding to each time deviation sensing.
Counting the gesture concentration duration of each time deviation sensing, comparing it with the set sensing hand operation duration, taking the mean of the gesture concentration durations of the time deviation sensings above and below that duration respectively, and forming the suitable trigger duration interval from the two results.
Locating the deviation sensings whose gesture movement track deviation degree is greater than 0, recording them as the track deviation sensings, extracting the hand track corresponding to each hand operation of each track deviation sensing, and integrating them to obtain the hand track of each accumulated hand operation.
Comparing the hand tracks of the accumulated hand operations with one another; if the hand track of one accumulated operation is the same as or similar to the hand tracks of other accumulated hand operations, recording it as a same track, and counting the number of same tracks and the number of accumulated hand operations corresponding to each same track.
If the number of accumulated hand operations corresponding to a same track is greater than the set value, recording that same track as an inducible track.
Combining the inducible tracks with the currently set sensing gesture track to form the suitable sensing track set, and taking the suitable trigger duration interval and the suitable sensing track set as the suitable trigger conditions for single-finger sensing.
Preferably, confirming the suitable trigger condition for multi-finger sensing includes: marking the sensing trigger angle interval and the finger included angle interval corresponding to each unqualified operation of each target sensing on a number axis, with the direction of increasing value taken as the right direction.
If, for a target sensing, the left end point of the finger included angle interval corresponding to an unqualified operation does not lie within the sensing trigger angle interval, the unqualified operation is recorded as a lower limit difference operation; an upper limit difference operation is set in a similar way by reference to the setting mode of the lower limit difference operation.
Calculating the mean of the upper limits of the finger included angle intervals corresponding to the upper limit difference operations and the mean of the lower limits of the finger included angle intervals corresponding to the lower limit difference operations of each target sensing, obtaining the average upper limit angle value and average lower limit angle value corresponding to each target sensing, confirming the suitable trigger angle interval, and taking it as the suitable trigger condition for multi-finger sensing.
Compared with the prior art, the invention has the following beneficial effects: (1) The invention sets an environment deviation correction factor by analyzing the recognition environment data, so the interference of the sensing environment on the sensing recognition result is fully considered. At the same time, each sensing of the target display screen is divided into single-finger sensing and multi-finger sensing, and the sensing processes of single-finger sensing and multi-finger sensing are analyzed separately for control. This effectively overcomes the defect that control is currently performed only at the level of the user's gestures, realizes multidimensional sensing recognition control of the display screen, ensures the reliability of display screen sensing recognition control, improves the control effect of subsequent display screen sensing recognition, makes up for the current shortcomings, improves the flexibility of display screen gesture recognition management, and greatly improves the user experience.
(2) The invention performs light interference analysis and personnel interference analysis and sets the environment deviation correction factor on that basis, thereby realizing an in-depth analysis of the sensing environment, breaking through the limitation of current display screen sensing recognition error correction, ensuring the feasibility, pertinence and elimination effect of sensing error locating, and improving the accuracy and reference value of subsequent display screen gesture sensing recognition. In the personnel interference analysis, the flow and positions of people are analyzed to display their distribution intuitively, which improves the reliability and rationality of sensing recognition error correction.
(3) The invention regularly analyzes the sensing process videos and sensing response data of single-finger recognition and multi-finger recognition, and thereby confirms the suitable trigger conditions for single-finger sensing and multi-finger sensing. This effectively solves the problem that the user's intention is currently recognized with insufficient accuracy, expands the tolerance of display screen sensing recognition, realizes an in-depth analysis of the recognition deviations of different types of gestures from different users, ensures the suitability and fit of the trigger settings, and further ensures the accuracy of subsequent display screen gesture recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the connection of the modules of the system of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the present invention provides an intelligent display screen induction recognition control management system, which includes: the system comprises an induction process data extraction module, an induction environment data extraction module, an induction identification correction analysis module, an induction identification control analysis module, an identification information base and an induction trigger replacement module.
The induction recognition control analysis module is connected with the induction process data extraction module, the induction recognition correction analysis module, the identification information base and the induction trigger replacement module, respectively, and the induction environment data extraction module is connected with the induction recognition correction analysis module.
The sensing process data extraction module is used for extracting the type and sensing process data of each sensing of the target display screen, wherein the type comprises single-finger sensing and multi-finger sensing.
In one particular embodiment, the single-finger sensing corresponds to operations including, but not limited to, up-slide, down-slide, left-shift, and right-shift, and the multi-finger sensing corresponds to operations including, but not limited to, zoom-in and zoom-out.
Specifically, the sensing process data is sensing process video and response data, wherein the response data comprises a gesture sensing time point, a gesture response time point and a gesture response display time point.
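As a reading aid, the sensing process data described above can be pictured as a simple record per sensing. The following Python sketch is illustrative only; the class and field names are assumptions introduced here and are not terms used by the patent.

```python
from dataclasses import dataclass

@dataclass
class SensingRecord:
    """One sensing event of the target display screen (illustrative structure)."""
    sensing_type: str          # "single_finger" or "multi_finger"
    process_video: str         # handle or path to the sensing process video
    gesture_sensed_at: float   # gesture sensing time point, in seconds
    responded_at: float        # gesture response time point, in seconds
    displayed_at: float        # gesture response display time point, in seconds
```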
The sensing environment data extraction module is used to extract the images collected in the sensing area and the light brightness monitored each time the target display screen performs sensing.
The induction recognition correction analysis module is used to set the environment deviation correction factor corresponding to each sensing, where i denotes the sensing order number, i = 1, 2, ..., n.
Illustratively, setting the environment deviation correction factor corresponding to each sensing includes: A1, counting the light interference degree corresponding to the target display screen in each sensing.
Understandably, counting the light interference degree corresponding to the target display screen in each sensing includes: A1-1, comparing the light brightness monitored in the sensing area during each sensing of the target display screen with the set suitable sensing brightness interval, and counting the numbers of monitorings that lie within the suitable sensing brightness interval, below its lower limit and above its upper limit, respectively.
A1-2, extracting the maximum light brightness and the minimum light brightness from the monitored light brightness values.
A1-3, counting the light interference degree corresponding to the target display screen in each sensing from these monitoring counts, the total number of monitorings, and the set reference monitoring-count difference and the lower and upper limits of the suitable sensing brightness interval.
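The original presents the A1-3 expression as an image that is not reproduced in this text, so only its inputs are known. The Python sketch below shows one plausible way those inputs could be combined; the weighting is an assumption, not the patented formula.

```python
def light_interference(brightness, suit_low, suit_high, ref_count_diff):
    """Illustrative combination of the A1-1..A1-3 quantities (assumed weighting)."""
    n = len(brightness)                                   # total number of monitorings
    within = sum(suit_low <= b <= suit_high for b in brightness)
    below = sum(b < suit_low for b in brightness)
    above = sum(b > suit_high for b in brightness)
    b_max, b_min = max(brightness), min(brightness)
    # More out-of-range monitorings than the reference count difference allows raises interference.
    count_term = max(0.0, below + above - within + ref_count_diff) / n
    # Brightness excursions beyond the suitable interval also raise interference.
    range_term = (max(0.0, suit_low - b_min) + max(0.0, b_max - suit_high)) / (suit_high - suit_low)
    return count_term + range_term
```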
A2, extracting the number of people in each image acquired in the sensing area during each sensing of the target display screen, where j denotes the acquisition order number, j = 1, 2, ..., m.
A3, comparing each extracted number of people with the set sensing-interference people count, and counting, for each sensing, the number of acquisitions whose people count exceeds that value.
A4, extracting the distances between the position of the target sensing person and the positions of the non-target sensing persons from the acquired images, and taking the mean to obtain the average spacing between the target sensing person and the non-target sensing persons.
In one embodiment, the specific determination process of the target sensing person is as follows: A4-1, locating, from the acquired images, the center point position of the target display screen, and the center point position, face orientation, contour and distance to the target display screen of each person.
And A4-2, if the face orientation of a person is back to the target display screen, marking the person as a non-inductive person, counting the number of the non-inductive persons, filtering out the non-inductive persons from the persons, and marking the remaining persons after filtering as the persons to be confirmed.
A4-3, taking the central point of each person to be confirmed as an origin, taking the face orientation of each person to be confirmed as a straight line direction, making a straight line parallel to the ground as a central reference line of each person to be confirmed, and simultaneously taking the central point position of the target display screen as the origin, taking the display orientation of the target display screen as a straight line direction, and making a straight line parallel to the ground as a display screen reference line.
A4-4, extracting the included angle between the center reference line corresponding to each person to be confirmed and the display screen reference line, where p denotes the number of the person to be confirmed.
A4-5, comparing the contour of each person to be confirmed by superposition with the contour of a conventional person to obtain the overlapping area between them.
A4-6, recording the distance between each person to be confirmed and the target display screen, and counting the sensing tendency degree corresponding to each person to be confirmed from the included angle, the overlapping area and the distance, together with the set contour area of a conventional person, the set reference body included angle and the set maximum recognition distance.
And A4-7, taking the person to be confirmed with the largest induction tendency as a target induction person.
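The sensing tendency degree of A4-6 is likewise given only as an image in the source; the sketch below illustrates the selection logic of A4-1 to A4-7 under an assumed weighting of the named inputs (body included angle, contour overlap and distance). The dictionary keys are hypothetical.

```python
def pick_target_person(people, ref_contour_area, ref_body_angle, max_recognition_dist):
    """Illustrative A4-1..A4-7: choose the person with the largest sensing tendency.
    Each entry of `people` is a dict with keys 'facing_away', 'axis_angle' (angle between
    the person's center reference line and the display screen reference line),
    'overlap_area' and 'distance' (hypothetical field names)."""
    candidates = [p for p in people if not p["facing_away"]]   # A4-2: drop non-sensing persons
    best, best_score = None, float("-inf")
    for p in candidates:
        # A4-4..A4-6 inputs combined with an assumed weighting.
        score = (p["overlap_area"] / ref_contour_area) \
              * max(0.0, 1.0 - p["axis_angle"] / ref_body_angle) \
              * max(0.0, 1.0 - p["distance"] / max_recognition_dist)
        if score > best_score:
            best, best_score = p, score
    return best                                                # A4-7: largest tendency wins
```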
A5, counting the personnel interference degree corresponding to the target display screen in each sensing from the number of acquisitions exceeding the set people count, the total number of acquisitions, the area of the sensing area, the average spacing, and the set reference personnel density and suitable sensing spacing distance.
A6, calculating the environment deviation correction factor corresponding to each sensing from the light interference degree and the personnel interference degree together with the set allowable light interference degree and allowable personnel interference degree.
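The A5 and A6 expressions are also only shown as images in the source, so the following sketch is an assumed combination of the quantities the text names: the correction factor shrinks once the light or personnel interference exceeds its allowed level.

```python
def personnel_interference(num_crowded, num_acquisitions, avg_people,
                           sensing_area, avg_spacing, ref_density, suit_spacing):
    """Illustrative A5 combination (assumed). `avg_people` (mean head count per acquired
    image) is a hypothetical intermediate used against the reference personnel density."""
    crowded_ratio = num_crowded / num_acquisitions
    density_term = max(0.0, avg_people / sensing_area - ref_density) / ref_density
    spacing_term = max(0.0, suit_spacing - avg_spacing) / suit_spacing
    return crowded_ratio + density_term + spacing_term

def env_correction_factor(light_deg, person_deg, allowed_light, allowed_person):
    """Illustrative A6 combination (assumed)."""
    excess = max(0.0, light_deg - allowed_light) / allowed_light \
           + max(0.0, person_deg - allowed_person) / allowed_person
    return 1.0 / (1.0 + excess)
```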
In the embodiment of the invention, light interference analysis and personnel interference analysis are performed and the environment deviation correction factor is set on that basis, which realizes an in-depth analysis of the sensing environment, breaks through the limitation of current display screen sensing recognition error correction, ensures the feasibility, pertinence and elimination effect of sensing error locating, and improves the accuracy and reference value of subsequent display screen gesture sensing recognition. In the personnel interference analysis, the flow and positions of people are analyzed to display their distribution intuitively, which improves the reliability and rationality of sensing recognition error correction.
The induction recognition control analysis module is used for dividing each induction into each single-finger induction and each multi-finger induction, analyzing induction process data of each single-finger induction and each multi-finger induction to obtain analysis results, and further confirming suitable triggering conditions of the single-finger induction and the multi-finger induction.
Illustratively, analyzing the sensing process data of each single-finger sensing includes: B1, extracting the gesture sensing time point, the gesture response time point and the gesture response display time point from the sensing process data of each single-finger sensing, indexed by the single-finger sensing order number.
B2, locating the ending time point of each hand operation from the sensing process video corresponding to each single-finger sensing, taking the hand operation with the shortest time interval to the gesture sensing time point as the target hand operation, and recording the ending time point of the target hand operation corresponding to each sensing.
B3, screening out the environment deviation correction factor corresponding to each single-finger sensing from the environment deviation correction factors corresponding to all sensings.
B4, counting the sensing deviation degree corresponding to each single-finger sensing from these time points and the correction factor, together with the set reference sensing deviation duration, response deviation duration and display deviation duration, where e is a natural constant.
B5, comparing the sensing deviation degree of each single-finger sensing with the set reference single-finger sensing deviation, and screening out the single-finger sensings whose deviation degree exceeds it, recorded as the deviation sensings.
B6, analyzing the gesture duration deviation degree and the gesture movement track deviation degree of each deviation sensing from the sensing process video of each deviation sensing, where f denotes the deviation sensing order number.
Understandably, analyzing the gesture duration deviation degree of each deviation sensing includes: E1, locating the hand operations before the gesture sensing time point from the sensing process video corresponding to each deviation sensing, recording them as the reference hand operations, and counting the number of reference hand operations.
E2, locating the duration of each reference hand operation from the sensing process video corresponding to each deviation sensing, comparing it with the currently set sensing hand operation duration interval for single-finger sensing, and confirming the number of deviation operations and the reference hand operation duration.
The bases for confirming the number of deviation operations and the reference hand operation duration are, respectively: recording the number of reference hand operations lying outside the sensing hand operation duration interval for single-finger sensing as the number of deviation operations, and taking the mean of the durations of the reference hand operations lying within that interval to obtain the reference hand operation duration of each deviation sensing.
E3, locating the interval durations between the reference hand operations from the user sensing process video corresponding to each deviation sensing, and obtaining the average interval duration by mean calculation.
E4, counting the gesture duration deviation degree of each deviation sensing from these quantities, the duration of the sensing process video corresponding to the f-th deviation sensing, the lower limit of the sensing hand operation duration interval, and the set reference gesture generation frequency, rapid gesture generation proportion, reference sensing time deviation and suitable gesture generation interval duration.
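The E4 expression is again an image in the source; the sketch below combines the E1 to E3 quantities under an assumed weighting in which overly frequent, overly short and overly closely spaced reference hand operations all raise the duration deviation degree.

```python
def gesture_duration_deviation(op_durations, op_gaps, video_duration,
                               op_low, op_high,
                               ref_freq, ref_fast_ratio, suit_gap):
    """Illustrative E1..E4 combination (assumed weighting).
    op_durations: duration of each reference hand operation (E1/E2).
    op_gaps: interval durations between consecutive reference hand operations (E3)."""
    n = len(op_durations)
    deviating = sum(not (op_low <= d <= op_high) for d in op_durations)   # E2: deviation operations
    avg_gap = sum(op_gaps) / len(op_gaps) if op_gaps else suit_gap        # E3: average interval
    freq_term = max(0.0, n / video_duration - ref_freq) / ref_freq        # gestures too frequent
    fast_term = max(0.0, deviating / n - ref_fast_ratio)                  # too many rushed operations
    gap_term = max(0.0, suit_gap - avg_gap) / suit_gap                    # operations too tightly spaced
    return freq_term + fast_term + gap_term
```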
It is also understandable that analyzing the gesture movement track deviation degree of each deviation sensing includes: D1, locating the hand track of each reference hand operation from the sensing process video corresponding to each deviation sensing, comparing it by superposition with the currently set sensing gesture track to obtain the coincident track length of each reference hand operation, taking the difference with the set reference coincident track length, and counting the number of reference hand operations whose difference is greater than or equal to 0.
D2, for each deviation sensing, calculating the mean of the coincident track lengths corresponding to the reference hand operations to obtain the average coincident track length.
D3, counting the gesture movement track deviation degree of each deviation sensing from these quantities, the set coincidence operation count ratio and coincident track length ratio, the length of the currently set sensing gesture track, and the number of reference hand operations at the f-th deviation sensing.
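D3 is also only available as an image; the sketch below shows an assumed combination of the D1 and D2 statistics in which few well-overlapping operations and a short average coincident length both increase the track deviation degree.

```python
def track_deviation(overlap_lengths, ref_overlap_len, set_track_len,
                    ref_count_ratio, ref_length_ratio):
    """Illustrative D1..D3 combination (assumed weighting) for one deviation sensing."""
    n = len(overlap_lengths)                                       # reference hand operations
    good = sum(l >= ref_overlap_len for l in overlap_lengths)      # D1: difference >= 0
    avg_overlap = sum(overlap_lengths) / n                         # D2: average coincident length
    count_term = max(0.0, ref_count_ratio - good / n)              # too few well-overlapping operations
    length_term = max(0.0, ref_length_ratio - avg_overlap / set_track_len)
    return count_term + length_term
```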
B7, taking the gesture duration deviation degree and the gesture movement track deviation degree as the analysis result of the sensing process data of each single-finger sensing.
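Returning to steps B4 and B5, the sensing deviation degree is likewise defined only by an image in the source; the text names the three time gaps, the environment deviation correction factor, three reference deviation durations and the natural constant e. The sketch below is one assumed way to combine them and to screen out the deviation sensings.

```python
import math

def single_finger_deviation(t_hand_end, t_sense, t_respond, t_display,
                            env_factor, ref_sense_dev, ref_resp_dev, ref_disp_dev):
    """Illustrative B4 combination (assumed): gaps larger than their references grow
    the deviation exponentially, scaled by the environment deviation correction factor."""
    sense_gap = max(0.0, (t_sense - t_hand_end) - ref_sense_dev)
    resp_gap = max(0.0, (t_respond - t_sense) - ref_resp_dev)
    disp_gap = max(0.0, (t_display - t_respond) - ref_disp_dev)
    return env_factor * (math.exp(sense_gap + resp_gap + disp_gap) - 1.0)

def screen_deviation_sensings(deviation_degrees, ref_deviation):
    """B5: indices of the single-finger sensings whose deviation exceeds the reference."""
    return [k for k, d in enumerate(deviation_degrees) if d > ref_deviation]
```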
Further, confirming the suitable trigger condition for single-finger sensing includes: F1, locating the deviation sensings whose gesture duration deviation degree is greater than 0, recording them as the time deviation sensings, and extracting the duration of each hand operation corresponding to each time deviation sensing.
F2, counting the gesture concentration duration of each time deviation sensing, comparing it with the set sensing hand operation duration, taking the mean of the gesture concentration durations of the time deviation sensings above and below that duration respectively, and forming the suitable trigger duration interval from the two results.
It should be noted that, the duration of gesture concentration of each time deviation sensing refers to the duration of the time deviation sensing corresponding to the maximum number of hand operations.
It should be further noted that the upper limit of the suitable trigger duration interval is the mean of the gesture concentration durations of the time deviation sensings whose gesture concentration duration is greater than the set sensing hand operation duration, and the lower limit is the mean of those whose gesture concentration duration is smaller than it.
F3, locating the deviation sensings whose gesture movement track deviation degree is greater than 0, recording them as the track deviation sensings, extracting the hand track corresponding to each hand operation of each track deviation sensing, and integrating them to obtain the hand track of each accumulated hand operation.
And F4, comparing the hand tracks of the accumulated hand operations, if the hand track of one accumulated operation is the same as or similar to the hand track of the other accumulated hand operations, marking the hand track as the same track, and counting the number of the same tracks and accumulated hand operation times corresponding to the same tracks.
In a specific embodiment, the hand trajectory similarity refers to the overlapping length of the hand trajectory and the currently set sensed gesture trajectory reaching more than eighty percent of the currently set sensed gesture trajectory length.
F5, if the number of accumulated hand operations corresponding to a same track is greater than the set value, the same track is recorded as an inducible track.
And F6, combining each inductable track with the currently set induction gesture track to form an appropriate induction track set, and taking the appropriate trigger duration interval and the appropriate induction track set as appropriate trigger conditions of single-finger induction.
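The confirmation of the suitable single-finger trigger conditions in F1 to F6 can be pictured as below; the helper names are hypothetical and the grouping of same or similar tracks is assumed to have been done beforehand.

```python
def suitable_duration_interval(concentration_durations, set_duration):
    """F2 (sketch): average the gesture concentration durations above and below the
    currently set sensing hand operation duration to form the new trigger interval."""
    longer = [d for d in concentration_durations if d > set_duration]
    shorter = [d for d in concentration_durations if d < set_duration]
    upper = sum(longer) / len(longer) if longer else set_duration
    lower = sum(shorter) / len(shorter) if shorter else set_duration
    return (lower, upper)

def inducible_tracks(track_groups, min_count):
    """F4..F5 (sketch): keep each group of same/similar accumulated hand tracks whose
    accumulated operation count exceeds the set value."""
    return [g["track"] for g in track_groups if g["count"] > min_count]
```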
As yet another example, the sensing process data of each multi-finger sensing is analyzed; the specific analysis process is as follows: U1, following a statistical method similar to that for the sensing deviation degree corresponding to each single-finger sensing, counting the sensing deviation degree corresponding to each multi-finger sensing, indexed by the multi-finger sensing order number.
U2, comparing it with the set reference multi-finger sensing deviation, screening out the multi-finger sensings whose deviation degree exceeds it, recording them as the target sensings, and extracting the gesture sensing time point of each target sensing.
And U3, positioning each hand operation before the gesture sensing time point from the sensing video sensed by each target, recording the hand operation as each focusing operation, and extracting a starting finger included angle and an ending finger included angle of each focusing operation.
And U4, forming a finger included angle interval of each focusing operation by the initial finger included angle and the end finger included angle of each focusing operation, comparing the finger included angle interval with a currently set sensing trigger angle interval, marking each focusing operation in the sensing trigger angle interval as each qualified operation, filtering each qualified operation from each focusing operation, and marking each remained focusing operation after filtering as each unqualified operation.
And U5, taking the finger included angle interval corresponding to each disqualified operation of each target induction as an analysis result of induction process data of each multi-finger induction.
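The U4 classification of focusing operations can be sketched as follows; a focusing operation is treated as qualified when its finger included angle interval lies entirely inside the currently set sensing trigger angle interval, which is how the comparison reads in the text.

```python
def classify_focus_ops(focus_ops, trigger_low, trigger_high):
    """U4 (sketch): split focusing operations into qualified and unqualified ones.
    Each operation is a (start_angle, end_angle) pair of finger included angles."""
    qualified, unqualified = [], []
    for start_angle, end_angle in focus_ops:
        low, high = sorted((start_angle, end_angle))
        if trigger_low <= low and high <= trigger_high:
            qualified.append((start_angle, end_angle))
        else:
            unqualified.append((start_angle, end_angle))
    return qualified, unqualified
```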
Further, confirming the proper triggering condition of the multi-finger sensing comprises: and N1, marking the sensing trigger angle interval and the finger included angle interval corresponding to each disqualified operation of each target sensing on a numerical axis respectively, and taking the increasing direction of the numerical value as the right direction.
And N2, if, for a target sensing, the left end point of the finger included angle interval corresponding to an unqualified operation does not lie within the sensing trigger angle interval, the unqualified operation is recorded as a lower limit difference operation; an upper limit difference operation is set in a similar way by reference to the setting mode of the lower limit difference operation.
And N3, respectively carrying out average value calculation on the upper limit value of the finger included angle interval corresponding to each upper limit difference operation of each target induction and the lower limit value of the finger included angle interval corresponding to each lower limit difference operation to obtain an average upper limit angle value and an average lower limit angle value corresponding to each target induction, confirming a proper triggering angle interval, and taking the proper triggering angle interval as a proper triggering condition of multi-finger induction.
The confirmation of the appropriate trigger angle interval includes: and respectively comparing the average upper limit angle value and the average lower limit angle value corresponding to each target induction, and respectively taking the average upper limit angle value with the largest target induction times and the average lower limit angle value with the largest target induction times as the upper limit value and the lower limit value of the suitable trigger angle interval, thereby forming the suitable trigger angle interval.
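The confirmation of the suitable trigger angle interval described above can be pictured as follows; rounding the averaged angles so that equal values can be counted is an added assumption.

```python
from collections import Counter

def suitable_trigger_angle_interval(per_sensing_avgs):
    """N3 plus the confirmation step (sketch): across all target sensings, take the most
    frequent average lower-limit angle and average upper-limit angle as the new interval.
    `per_sensing_avgs` holds one (avg_lower, avg_upper) pair per target sensing."""
    lowers = Counter(round(lo, 1) for lo, _ in per_sensing_avgs)
    uppers = Counter(round(hi, 1) for _, hi in per_sensing_avgs)
    return (lowers.most_common(1)[0][0], uppers.most_common(1)[0][0])
```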
In the embodiment of the invention, the sensing process videos and sensing response data of single-finger recognition and multi-finger recognition are regularly analyzed, and the suitable trigger conditions for single-finger sensing and multi-finger sensing are thereby confirmed. This effectively solves the problem that the user's intention is currently recognized with insufficient accuracy, expands the tolerance of display screen sensing recognition, realizes an in-depth analysis of the recognition deviations of different types of gestures from different users, ensures the suitability and fit of the trigger settings, and further ensures the accuracy of subsequent display screen gesture recognition.
The sensing information base is used for storing the currently set single-finger sensing and multi-finger sensing triggering conditions of the target display screen, wherein the single-finger sensing triggering conditions are sensing gesture tracks and sensing hand operation duration intervals, and the multi-finger sensing triggering conditions are sensing triggering angle intervals.
And the induction trigger replacement module is used to replace the currently set trigger conditions with the suitable single-finger sensing trigger conditions and the suitable multi-finger sensing trigger conditions.
In the embodiment of the invention, the environment deviation correction factor is set by analyzing the recognition environment data, so the interference of the sensing environment on the sensing recognition result is fully considered. At the same time, each sensing of the target display screen is divided into single-finger sensing and multi-finger sensing, and the sensing processes of single-finger sensing and multi-finger sensing are analyzed separately for control. This effectively overcomes the defect that control is currently performed only at the level of the user's gestures, realizes multidimensional sensing recognition control of the display screen, ensures the reliability of display screen sensing recognition control, improves the control effect of subsequent display screen sensing recognition, makes up for the current shortcomings, improves the flexibility of display screen gesture recognition control, and greatly improves the user experience.
The foregoing is merely illustrative and explanatory of the principles of this invention, as various modifications and additions may be made to the specific embodiments described, or similar arrangements may be substituted by those skilled in the art, without departing from the principles of this invention or beyond the scope of this invention as defined in the claims.

Claims (3)

1. An intelligent display screen induction identification control management system is characterized in that: the system comprises:
the induction process data extraction module is used for extracting the type and induction process data of each induction of the target display screen, wherein the type comprises single-finger induction and multi-finger induction;
the sensing process data are sensing process video and response data, wherein the response data comprise gesture sensing time points, gesture response time points and gesture response display time points;
the sensing environment data extraction module is used for extracting images collected in the sensing area and the light brightness monitored in each time when the target display screen senses each time;
the induction recognition correction analysis module is used for setting the environment deviation correction factor corresponding to each sensing, where i denotes the sensing order number, i = 1, 2, ..., n;
setting the environment deviation correction factor corresponding to each sensing includes:
counting the light interference degree corresponding to the target display screen in each sensing; counting the light interference degree corresponding to the target display screen in each sensing includes:
comparing the light brightness monitored in the sensing area during each sensing of the target display screen with the set suitable sensing brightness interval, and counting the numbers of monitorings that lie within the suitable sensing brightness interval, below its lower limit and above its upper limit, respectively;
extracting the maximum light brightness and the minimum light brightness from the monitored light brightness values;
counting the light interference degree corresponding to the target display screen in each sensing from these monitoring counts, the total number of monitorings, and the set reference monitoring-count difference and the lower and upper limits of the suitable sensing brightness interval;
extracting the number of people in each image acquired in the sensing area during each sensing of the target display screen, where j denotes the acquisition order number, j = 1, 2, ..., m;
comparing each extracted number of people with the set sensing-interference people count, and counting, for each sensing, the number of acquisitions whose people count exceeds that value;
extracting the distances between the position of the target sensing person and the positions of the non-target sensing persons from the acquired images, and taking the mean to obtain the average spacing between the target sensing person and the non-target sensing persons;
counting the personnel interference degree corresponding to the target display screen in each sensing from the number of acquisitions exceeding the set people count, the total number of acquisitions, the area of the sensing area, the average spacing, and the set reference personnel density and suitable sensing spacing distance;
calculating the environment deviation correction factor corresponding to each sensing from the light interference degree and the personnel interference degree together with the set allowable light interference degree and allowable personnel interference degree;
the induction recognition control analysis module is used for dividing each induction into each single-finger induction and each multi-finger induction, analyzing induction process data of each single-finger induction and each multi-finger induction to obtain analysis results, and further confirming proper triggering conditions of the single-finger induction and the multi-finger induction;
the analysis of the sensing process data of each single finger sensing comprises the following steps:
extracting the gesture sensing time point, the gesture response time point and the gesture response display time point from the sensing process data of each single-finger sensing, indexed by the single-finger sensing order number;
locating the ending time point of each hand operation from the sensing process video corresponding to each single-finger sensing, taking the hand operation with the shortest time interval to the gesture sensing time point as the target hand operation, and recording the ending time point of the target hand operation corresponding to each sensing;
screening out the environment deviation correction factor corresponding to each single-finger sensing from the environment deviation correction factors corresponding to all sensings;
counting the sensing deviation degree corresponding to each single-finger sensing from these time points and the correction factor, together with the set reference sensing deviation duration, response deviation duration and display deviation duration, where e is a natural constant;
comparing the sensing deviation degree of each single-finger sensing with the set reference single-finger sensing deviation, and screening out the single-finger sensings whose deviation degree exceeds it, recorded as the deviation sensings;
analyzing the gesture duration deviation degree and the gesture movement track deviation degree of each deviation sensing from the sensing process video of each deviation sensing, where f denotes the deviation sensing order number;
taking the gesture duration deviation degree and the gesture movement track deviation degree as the analysis result of the sensing process data of each single-finger sensing;
the analyzing the gesture duration deviation degree sensed by each deviation comprises the following steps:
locating the hand operations before the gesture sensing time point from the sensing process video corresponding to each deviation sensing, recording them as the reference hand operations, and counting the number of reference hand operations;
locating the duration of each reference hand operation from the sensing process video corresponding to each deviation sensing, comparing it with the currently set sensing hand operation duration interval for single-finger sensing, and confirming the number of deviation operations and the reference hand operation duration;
locating the interval durations between the reference hand operations from the user sensing process video corresponding to each deviation sensing, and obtaining the average interval duration by mean calculation;
counting the gesture duration deviation degree of each deviation sensing from these quantities, the duration of the sensing process video corresponding to the f-th deviation sensing, the lower limit of the sensing hand operation duration interval, and the set reference gesture generation frequency, rapid gesture generation proportion, reference sensing time deviation and suitable gesture generation interval duration;
analyzing the gesture movement track deviation degree sensed by each deviation comprises the following steps:
locating the hand track of each reference hand operation from the sensing process video corresponding to each deviation sensing, comparing it by superposition with the currently set sensing gesture track to obtain the coincident track length of each reference hand operation, taking the difference with the set reference coincident track length, and counting the number of reference hand operations whose difference is greater than or equal to 0;
for each deviation sensing, calculating the mean of the coincident track lengths corresponding to the reference hand operations to obtain the average coincident track length;
counting the gesture movement track deviation degree of each deviation sensing from these quantities, the set coincidence operation count ratio and coincident track length ratio, the length of the currently set sensing gesture track, and the number of reference hand operations at the f-th deviation sensing;
analyzing the sensing process data of each multi-finger sensing, including:
following a statistical method similar to that for the sensing deviation degree corresponding to each single-finger sensing, counting the sensing deviation degree corresponding to each multi-finger sensing, indexed by the multi-finger sensing order number;
comparing it with the set reference multi-finger sensing deviation, screening out the multi-finger sensings whose deviation degree exceeds it, recording them as the target sensings, and extracting the gesture sensing time point of each target sensing;
positioning each hand operation before a gesture sensing time point from the sensing video sensed by each target, marking the hand operation as each focusing operation, and extracting a starting finger included angle and an ending finger included angle of each focusing operation;
forming a finger included angle interval of each focusing operation by the initial finger included angle and the end finger included angle of each focusing operation, comparing the finger included angle interval with a currently set sensing trigger angle interval, marking each focusing operation in the sensing trigger angle interval as each qualified operation, filtering each qualified operation from each focusing operation, and marking each remained focusing operation after filtering as each unqualified operation;
taking the finger included angle interval corresponding to each disqualified operation of each target induction as the analysis result of the induction process data of each multi-finger induction;
the sensing information base is used for storing the currently set single-finger sensing and multi-finger sensing triggering conditions of the target display screen, wherein the single-finger sensing triggering conditions are sensing gesture tracks and sensing hand operation duration intervals, and the multi-finger sensing triggering conditions are sensing triggering angle intervals;
and the induction trigger replacement module is used for replacing the currently set trigger conditions with the suitable single-finger sensing trigger conditions and the suitable multi-finger sensing trigger conditions.
2. The intelligent display screen induction identification control management system as set forth in claim 1, wherein: the confirmation of the proper triggering condition of single finger induction comprises the following steps:
from the slaveEach time of deviation induction larger than 0 is positioned and recorded as each time of time deviation induction, and the duration time of each time of time deviation induction corresponding to each time of hand operation is extracted;
counting gesture concentration duration time sensed by time deviation of each time, and sensing hand operation duration timeContrast, greater and less than +.>Respectively carrying out average value calculation on gesture concentrated duration time sensed by each time deviation, and further forming a proper trigger duration time interval by calculation results;
each deviation sensing whose gesture movement track deviation degree is greater than 0 is located and recorded as a track-deviation sensing, the hand track corresponding to each hand operation of each track-deviation sensing is extracted, and the hand tracks of the accumulated hand operations are integrated;
the hand tracks of the accumulated hand operations are compared with one another; if the hand track of one accumulated hand operation is the same as or similar to those of other accumulated hand operations, it is marked as an identical track, and the number of identical tracks and the number of accumulated hand operations corresponding to each identical track are counted;
if the number of accumulated hand operations corresponding to a certain identical track is greater than the set count threshold, that identical track is marked as an inducible track;
and each inducible track is combined with the currently set sensing gesture track to form the suitable sensing track set, and the suitable trigger duration interval and the suitable sensing track set are taken as the suitable trigger conditions of single-finger sensing.
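A hedged Python sketch of this claim-2 procedure follows; the count threshold, the treatment of "same or similar" tracks as shared labels, and the helper names are illustrative assumptions rather than the patented method.

```python
# Hedged sketch: build the suitable trigger duration interval from durations above/below
# the currently set duration interval, and promote hand tracks repeated more often than
# a count threshold to inducible tracks.

from collections import Counter


def suitable_duration_interval(
    concentrated_durations: list[float],  # gesture concentrated durations of the duration-deviation sensings
    set_interval: tuple[float, float],    # currently set sensing hand-operation duration interval
) -> tuple[float, float]:
    lo, hi = set_interval
    below = [d for d in concentrated_durations if d < lo]
    above = [d for d in concentrated_durations if d > hi]
    new_lo = sum(below) / len(below) if below else lo  # average of the too-short durations
    new_hi = sum(above) / len(above) if above else hi  # average of the too-long durations
    return (new_lo, new_hi)


def inducible_tracks(
    tracks: list[str],        # hand tracks of the accumulated hand operations (pre-clustered labels assumed)
    count_threshold: int,
) -> set[str]:
    counts = Counter(tracks)  # identical (or already-merged similar) tracks share a label
    return {t for t, n in counts.items() if n > count_threshold}


def suitable_single_finger_conditions(durations, set_interval, tracks, count_threshold, set_gesture_tracks):
    interval = suitable_duration_interval(durations, set_interval)
    track_set = inducible_tracks(tracks, count_threshold) | set(set_gesture_tracks)
    return interval, track_set
```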
3. The intelligent display screen induction identification control management system as set forth in claim 1, wherein: confirming a suitable trigger condition for multi-finger sensing, comprising:
marking the sensing triggering angle interval and the finger included angle interval corresponding to each disqualified operation of each target sensing on a numerical axis respectively, and taking the increasing direction of the numerical value as the right direction;
if, for a certain target sensing, the left end point of the finger included-angle interval corresponding to a certain disqualified operation is not located within the sensing trigger angle interval, that disqualified operation is marked as a lower-limit-difference operation; the upper-limit-difference operations are set in a similar way, by analogy with the setting mode of the lower-limit-difference operations;
and the upper limit values of the finger included-angle intervals corresponding to the upper-limit-difference operations of each target sensing and the lower limit values of the finger included-angle intervals corresponding to the lower-limit-difference operations are respectively averaged to obtain the average upper-limit angle value and the average lower-limit angle value corresponding to each target sensing, from which the suitable trigger angle interval is confirmed and taken as the suitable trigger condition of multi-finger sensing.
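Below is a hedged Python sketch of this confirmation step; treating the averaged endpoint values directly as the new interval bounds is an assumption, since the claim only states that the suitable trigger angle interval is confirmed from them, and the lower/upper-side classification of disqualified intervals is likewise assumed.

```python
# Hedged sketch of claim 3: widen the currently set trigger angle interval using the
# averaged endpoints of the lower- and upper-limit-difference operations.

def suitable_trigger_angle_interval(
    trigger_interval: tuple[float, float],              # currently set sensing trigger angle interval
    disqualified_intervals: list[tuple[float, float]],  # finger included-angle intervals of disqualified operations
) -> tuple[float, float]:
    lo, hi = trigger_interval
    # Left end point below the set lower bound -> lower-limit-difference operation (assumed reading).
    lower_diff = [iv for iv in disqualified_intervals if iv[0] < lo]
    # Right end point above the set upper bound -> upper-limit-difference operation (by analogy).
    upper_diff = [iv for iv in disqualified_intervals if iv[1] > hi]

    avg_lower = sum(iv[0] for iv in lower_diff) / len(lower_diff) if lower_diff else lo
    avg_upper = sum(iv[1] for iv in upper_diff) / len(upper_diff) if upper_diff else hi
    return (min(avg_lower, lo), max(avg_upper, hi))
```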

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311023108.8A CN116737019B (en) 2023-08-15 2023-08-15 Intelligent display screen induction identification control management system

Publications (2)

Publication Number Publication Date
CN116737019A CN116737019A (en) 2023-09-12
CN116737019B (en) 2023-11-03

Family

ID=87906467

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant