CN109725712A - Visualized report display method, apparatus, device, and readable storage medium - Google Patents

Visualized report display method, apparatus, device, and readable storage medium

Info

Publication number
CN109725712A
CN109725712A (application CN201810487442.1A)
Authority
CN
China
Prior art keywords
visualized report
report
gesture
motion sensing
split screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810487442.1A
Other languages
Chinese (zh)
Inventor
臧磊
傅婧
郭鹏程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
Original Assignee
OneConnect Smart Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Smart Technology Co Ltd
Priority to CN201810487442.1A priority Critical patent/CN109725712A/en
Publication of CN109725712A publication Critical patent/CN109725712A/en
Legal status: Withdrawn


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a visualized report display method, apparatus, device, and readable storage medium. The visualized report display method includes: while a visualized report is being displayed, obtaining a motion-sensing gesture operation of a target user, and obtaining a prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations; based on the mapping relationship, selecting the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation; and adjusting and displaying the display state of the visualized report based on the target input operation. The invention solves the technical problem that existing visualized report display offers only a single form of interaction and a low degree of intelligence.

Description

Visualized report display method, apparatus, device, and readable storage medium
Technical field
The present invention relates to the field of terminal technology, and more particularly to a visualized report display method, apparatus, device, and readable storage medium.
Background technique
At present, departments such as corporate finance and sales all need to produce and present visualized reports. During the display of a visualized report, it is usually necessary to rely on conventional external devices such as a mouse, a keyboard, or a remote-control pen to manipulate the report. Manipulating a visualized report only through such conventional external devices leads to the technical problem of a single form of interaction with the user, insufficient intelligence, and a poor user experience.
Summary of the invention
The main purpose of the present invention is to provide a visualized report display method, apparatus, device, and readable storage medium, aiming to solve the technical problem that existing visualized report display offers only a single form of interaction and a low degree of intelligence.
To achieve the above object, the present invention provides a visualized report display method, which includes:
while a visualized report is being displayed, obtaining a motion-sensing gesture operation of a target user, and obtaining a prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations;
based on the mapping relationship, selecting the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation;
adjusting and displaying the display state of the visualized report based on the target input operation.
Optionally, after the step of selecting, based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation, the method includes:
collecting voice information of the target user, and performing recognition processing on the voice information through a prestored speech recognition model to obtain recognized content;
based on the recognized content, determining the number of rows or columns by which the visualized report is to be moved for the target input operation.
Optionally, before the step of selecting, based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation, the method further includes:
collecting the movement trajectory of the target user's eye-gaze focus position, and judging whether the trajectory of the motion-sensing gesture operation is consistent with the movement trajectory;
when the trajectory of the motion-sensing gesture operation is consistent with the movement trajectory, executing the step of selecting, based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation.
Optionally, the step of adjusting and displaying the display state of the visualized report based on the target input operation includes:
when the target input operation is a split-screen operating gesture, taking the starting position of the split-screen operating gesture as the partition divider line for the split screen;
performing split-screen processing on the visualized report based on the partition divider line, and displaying the split-screen-processed visualized report.
Optionally, after the step of displaying the split-screen-processed visualized report, the method further includes:
if a return operating gesture is detected, displaying the split-screen-processed visualized report in full screen.
Optionally, the step of taking the starting position of the split-screen operating gesture as the partition divider line for the split screen when the target input operation is a split-screen operating gesture includes:
when the target input operation is a split-screen operating gesture, judging whether the currently displayed visualized report can be displayed in separate sections;
when the currently displayed visualized report can be displayed in separate sections, obtaining the starting position of the split-screen operating gesture and the gesture direction of the split-screen operating gesture;
determining the partition divider line for the split screen based on the starting position and gesture direction of the split-screen operating gesture.
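The partition-line determination in the steps above might be sketched as follows. This is a hedged illustration: the patent does not define a coordinate model or a classification rule, so `partition_line`, the (x, y) screen coordinates, and the vertical/horizontal decision are all assumptions.

```python
# Hypothetical sketch: derive a split-screen partition line from the starting
# position and dominant direction of a split-screen gesture. The coordinate
# model and the vertical/horizontal rule are assumptions, not from the patent.
def partition_line(start, direction):
    """Return ('vertical', x) or ('horizontal', y) for the divider line.

    start: (x, y) starting position of the split-screen gesture.
    direction: (dx, dy) dominant motion direction of the gesture.
    """
    x, y = start
    dx, dy = direction
    # A mostly vertical hand motion yields a vertical divider at the starting
    # x, splitting the report into left/right sections; otherwise the divider
    # is horizontal at the starting y, splitting into top/bottom sections.
    if abs(dy) >= abs(dx):
        return ("vertical", x)
    return ("horizontal", y)
```

A downward swipe starting at screen x = 400 would then produce a vertical divider at x = 400.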
Optionally, the step of obtaining the motion-sensing gesture operation of the target user while a visualized report is being displayed includes:
while the visualized report is being displayed, collecting the motion-sensing gesture change trajectory of the target user within a preset time period;
predicting the motion-sensing gesture operation of the target user based on a matrix model to which the motion-sensing gesture change trajectory is mapped.
The present invention also provides a visualized report display apparatus, which includes:
a first obtaining module, configured to, while a visualized report is being displayed, obtain a motion-sensing gesture operation of a target user, and obtain a prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations;
a selecting module, configured to select, based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation;
a second obtaining module, configured to adjust and display the display state of the visualized report based on the target input operation.
Optionally, the visualized report display apparatus further includes:
a first collecting module, configured to collect voice information of the target user, and perform recognition processing on the voice information through a prestored speech recognition model to obtain recognized content;
a determining module, configured to determine, based on the recognized content, the number of rows or columns by which the visualized report is to be moved for the target input operation.
Optionally, the visualized report display apparatus further includes:
a second collecting module, configured to collect the movement trajectory of the target user's eye-gaze focus position, and judge whether the trajectory of the motion-sensing gesture operation is consistent with the movement trajectory;
an executing module, configured to, when the trajectory of the motion-sensing gesture operation is consistent with the movement trajectory, execute the step of selecting, based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation.
Optionally, the second obtaining module includes:
a partitioning unit, configured to, when the target input operation is a split-screen operating gesture, take the starting position of the split-screen operating gesture as the partition divider line for the split screen;
a split-screen unit, configured to perform split-screen processing on the visualized report based on the partition divider line, and display the split-screen-processed visualized report.
Optionally, the second obtaining module further includes:
a full-screen display module, configured to, if a return operating gesture is detected, display the split-screen-processed visualized report in full screen.
Optionally, the partitioning unit includes:
a judging subunit, configured to judge, when the target input operation is a split-screen operating gesture, whether the currently displayed visualized report can be displayed in separate sections;
an obtaining subunit, configured to, when the currently displayed visualized report can be displayed in separate sections, obtain the starting position of the split-screen operating gesture and the gesture direction of the split-screen operating gesture;
a determining subunit, configured to determine the partition divider line for the split screen based on the starting position and gesture direction of the split-screen operating gesture.
Optionally, the first obtaining module further includes:
a collecting unit, configured to, while the visualized report is being displayed, collect the motion-sensing gesture change trajectory of the target user within a preset time period;
a predicting unit, configured to predict the motion-sensing gesture operation of the target user based on a matrix model to which the motion-sensing gesture change trajectory is mapped.
In addition, to achieve the above object, the present invention also provides a visualized report display device, which includes a memory, a processor, a communication bus, and a visualized report display program stored on the memory;
the communication bus is used to implement the communication connection between the processor and the memory;
the processor is used to execute the visualized report display program to implement the following steps:
while a visualized report is being displayed, obtaining a motion-sensing gesture operation of a target user, and obtaining a prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations;
based on the mapping relationship, selecting the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation;
adjusting and displaying the display state of the visualized report based on the target input operation.
Optionally, after the step of selecting, based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation, the method includes:
collecting voice information of the target user, and performing recognition processing on the voice information through a prestored speech recognition model to obtain recognized content;
based on the recognized content, determining the number of rows or columns by which the visualized report is to be moved for the target input operation.
Optionally, before the step of selecting, based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation, the method further includes:
collecting the movement trajectory of the target user's eye-gaze focus position, and judging whether the trajectory of the motion-sensing gesture operation is consistent with the movement trajectory;
when the trajectory of the motion-sensing gesture operation is consistent with the movement trajectory, executing the step of selecting, based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation.
Optionally, the step of adjusting and displaying the display state of the visualized report based on the target input operation includes:
when the target input operation is a split-screen operating gesture, taking the starting position of the split-screen operating gesture as the partition divider line for the split screen;
performing split-screen processing on the visualized report based on the partition divider line, and displaying the split-screen-processed visualized report.
Optionally, after the step of displaying the split-screen-processed visualized report, the method further includes:
if a return operating gesture is detected, displaying the split-screen-processed visualized report in full screen.
Optionally, the step of taking the starting position of the split-screen operating gesture as the partition divider line for the split screen when the target input operation is a split-screen operating gesture includes:
when the target input operation is a split-screen operating gesture, judging whether the currently displayed visualized report can be displayed in separate sections;
when the currently displayed visualized report can be displayed in separate sections, obtaining the starting position of the split-screen operating gesture and the gesture direction of the split-screen operating gesture;
determining the partition divider line for the split screen based on the starting position and gesture direction of the split-screen operating gesture.
Optionally, the step of obtaining the motion-sensing gesture operation of the target user while a visualized report is being displayed includes:
while the visualized report is being displayed, collecting the motion-sensing gesture change trajectory of the target user within a preset time period;
predicting the motion-sensing gesture operation of the target user based on a matrix model to which the motion-sensing gesture change trajectory is mapped.
In addition, to achieve the above object, the present invention also provides a readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to:
while a visualized report is being displayed, obtain a motion-sensing gesture operation of a target user, and obtain a prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations;
based on the mapping relationship, select the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation;
adjust and display the display state of the visualized report based on the target input operation.
In the present invention, while a visualized report is being displayed, a motion-sensing gesture operation of a target user is obtained, together with a prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations; based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation is selected; and the display state of the visualized report is adjusted and displayed based on the target input operation. In this application, because the visualized report can be displayed through motion-sensing gesture operations, and because standard motion-sensing gesture operations are mapped to input-device input operations, the user is able to customize the association between standard motion-sensing gesture operations and the corresponding input-device input operations. This increases interaction with the user and makes the display of the visualized report more intelligent, solving the technical problem in the prior art that manipulating a visualized report only through conventional external devices leads to a single form of interaction with the user, insufficient intelligence, and a poor user experience.
Detailed description of the invention
Fig. 1 is a flow diagram of a first embodiment of the visualized report display method of the present invention;
Fig. 2 is a flow diagram of a second embodiment of the visualized report display method of the present invention;
Fig. 3 is a schematic structural diagram of the device in the hardware running environment involved in the method of the embodiments of the present invention.
The realization of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention provides a visualized report display method. In a first embodiment of the visualized report display method of the present invention, referring to Fig. 1, the visualized report display method includes:
while a visualized report is being displayed, obtaining a motion-sensing gesture operation of a target user, and obtaining a prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations; based on the mapping relationship, selecting the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation; adjusting and displaying the display state of the visualized report based on the target input operation.
The specific steps are as follows:
Step S10: while a visualized report is being displayed, obtain a motion-sensing gesture operation of the target user, and obtain a prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations.
Motion-sensing interaction refers to human-computer interaction carried out through body motion. It should be noted that, in this embodiment, the visualized report display method relies on a 3D motion-sensing visualized report display system, which in turn relies on a 3D motion-sensing interactive camera. This camera includes an infrared transmitter, an infrared camera, an RGB camera, and the like: the infrared transmitter and infrared camera are used to detect 3D images, while the RGB camera is used to obtain color images. The RGB camera can capture 30 frames per second and can also perform skeleton tracking on the images of one or two people moving within the field of view of the device, tracking multiple joints on the human body. In addition, the 3D motion-sensing interaction system further includes an eye-gaze focus information collecting device for collecting the user's eye-gaze focus information.
While a visualized report is being displayed, the motion-sensing gesture operation of the target user is obtained, together with a prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations. It should be noted that, in this embodiment, there is a mapping relationship between standard motion-sensing gesture operations and input-device input operations. As a specific example: when the standard motion-sensing gesture operation is a two-hands-crossing operation, the gesture is defined as the Ctrl+Home key on the keyboard, corresponding to displaying the home page content of the visualized report; when the standard motion-sensing gesture is a finger swipe-up operation, the gesture is defined as the keyboard's Ctrl and + keys, corresponding to zooming in on the visualized report; when the standard motion-sensing gesture is a finger swipe-down operation, the gesture is defined as the keyboard's Ctrl and - keys, corresponding to zooming out on the visualized report; and so on. While the visualized report is being displayed, the motion-sensing gesture operation of the target user is obtained. The process of obtaining it may be: after detecting a change in the user's gesture, the visualized report display system predicts the motion-sensing gesture operation based on the trajectory of the changing gesture, so as to obtain the standard motion-sensing gesture operation matching the target user. The prediction process may be: collecting the motion-sensing gesture change trajectory of the target user within a preset time period; after the motion-sensing gesture change trajectory of the target user is obtained, the prediction of the specific motion-sensing gesture operation can be realized through the matrix model to which the motion-sensing gesture change trajectory is mapped, where the matrix model may specifically be a behavior matrix model. In this embodiment, for example, after the camera collects a first change trajectory of the target user's gesture, if the first change trajectory matches the behavior matrix corresponding to the two-hands-crossing motion-sensing gesture operation, then the motion-sensing gesture operation corresponding to the first change trajectory is a two-hands-crossing operation; after the camera collects a second change trajectory of the target user's gesture, if the second change trajectory matches the behavior matrix corresponding to the swipe-up motion-sensing gesture operation, then the motion-sensing gesture operation corresponding to the second change trajectory is a swipe-up operation. It should be noted that, in this embodiment, the gesture change trajectory refers to the change in the mapped distance and direction of the skeleton or torso corresponding to the gesture within an extremely short preset time, such as 0.1 ms.
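As a rough illustration of the trajectory-matching idea, the sketch below classifies an observed gesture trajectory against per-gesture templates by cosine similarity. The patent does not specify the behavior matrix model, so the template representation, the example template values, and the name `classify_gesture` are all assumptions.

```python
import math

# Hedged sketch: each standard gesture is represented here by a template
# trajectory (a flat list of x, y displacements over time); an observed
# trajectory is classified as the template with the highest cosine similarity.
GESTURE_TEMPLATES = {
    "hands_cross": [0.0, 0.0, -1.0, 1.0, -2.0, 2.0],
    "swipe_up":    [0.0, 0.0, 0.0, 1.0, 0.0, 2.0],
    "swipe_down":  [0.0, 0.0, 0.0, -1.0, 0.0, -2.0],
}

def _cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-12)

def classify_gesture(trajectory):
    """Return the standard gesture whose template best matches the trajectory."""
    return max(GESTURE_TEMPLATES, key=lambda g: _cosine(trajectory, GESTURE_TEMPLATES[g]))
```

A noisy upward trajectory such as `[0, 0, 0, 0.9, 0, 2.1]` would then still be matched to the swipe-up template.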
After the motion-sensing gesture operation is obtained by prediction, the process of obtaining the prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations is triggered. In addition, it should be noted that, in this embodiment, the mapping relationship between motion-sensing gesture operations and input-device input operations can be customized by the user; that is, before the step of obtaining the prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations, the method further includes:
If an operation to change the mapping relationship between a motion-sensing gesture operation and an input-device input operation is detected, the newly generated mapping relationship is obtained based on the change.
To give a specific example: when the motion-sensing gesture operation is a swipe-up operation, and the previous mapping relationship mapped the swipe-up operation to the keyboard's Ctrl and + keys, the user can, to suit personal habits, change the mapping so that the swipe-up operation corresponds to the keyboard's Ctrl and - keys. It should be noted that after the user changes the mapping relationship, a confirmation prompt is generated, the content of which may be, for example, "Confirm modification of the mapping relationship".
The newly generated mapping relationship is saved.
After the change is completed, the newly generated mapping relationship is saved, so that when the motion-sensing gesture of the target user is detected again, the input operation can be determined based on the newly generated mapping relationship.
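A minimal sketch of the user-customizable mapping store described above, using the default bindings from this embodiment. The class name `GestureMapping` and its methods are illustrative, not from the patent; a real implementation would also show the confirmation prompt and persist the mapping.

```python
# Assumed default bindings, taken from the examples in this embodiment.
DEFAULT_MAPPING = {
    "hands_cross": "Ctrl+Home",   # show report home page
    "swipe_up":    "Ctrl++",      # zoom in on the report
    "swipe_down":  "Ctrl+-",      # zoom out of the report
}

class GestureMapping:
    """Hypothetical store for gesture-to-input-operation mapping relations."""

    def __init__(self):
        self._mapping = dict(DEFAULT_MAPPING)

    def lookup(self, gesture):
        """Select the target input operation for a matched standard gesture."""
        return self._mapping.get(gesture)

    def change(self, gesture, new_keys):
        """Change a binding; a real UI would first ask the user to confirm."""
        if gesture not in self._mapping:
            raise KeyError(f"unknown standard gesture: {gesture}")
        self._mapping[gesture] = new_keys  # the new mapping would then be saved
```

For example, a user could rebind the swipe-up gesture from `Ctrl++` to `Ctrl+-`, after which subsequent lookups return the new binding.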
Step S20: based on the mapping relationship, select the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation.
Based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation is selected: after the mapping relationship is obtained, the target input operation is determined from the collected motion-sensing gesture operation of the target user. For example, when the standard motion-sensing gesture is a two-hands-crossing operation, the matching target input operation determined based on the mapping relationship is pressing the Ctrl+Home key on the keyboard.
Step S30: adjust and display the display state of the visualized report based on the target input operation.
After the target input operation is confirmed, the display state of the visualized report is adjusted and displayed based on the target input operation. To give a specific example: when the target input operation is the Ctrl+Home key, the home page content of the visualized report is displayed accordingly; when the target input operation is the Ctrl and - keys, the content of the page preceding the current page of the visualized report is displayed accordingly.
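Steps S20 and S30 together amount to a lookup-and-dispatch loop. The sketch below is a hedged illustration: `DemoReport`, `handle_gesture`, and the action table are invented names, and only the example bindings from this embodiment are shown.

```python
class DemoReport:
    """Minimal stand-in for the report view, for illustration only."""
    def __init__(self):
        self.page = "current"
        self.zoom = 1.0
    def show_home_page(self):
        self.page = "home"
    def zoom_by(self, factor):
        self.zoom *= factor

# Assumed key-operation handlers, mirroring the examples in this embodiment.
ACTIONS = {
    "Ctrl+Home": lambda r: r.show_home_page(),   # show report home page
    "Ctrl++":    lambda r: r.zoom_by(1.25),      # zoom in
    "Ctrl+-":    lambda r: r.zoom_by(0.8),       # zoom out
}

def handle_gesture(report, gesture, mapping):
    """Step S20: map the matched gesture to its target input operation;
    step S30: apply that operation to the report's display state."""
    key_op = mapping.get(gesture)
    action = ACTIONS.get(key_op)
    if action is not None:
        action(report)
    return key_op
```

With the default mapping, a two-hands-crossing gesture would resolve to `Ctrl+Home` and switch the report to its home page.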
In the present invention, while a visualized report is being displayed, a motion-sensing gesture operation of a target user is obtained, together with a prestored mapping relationship between standard motion-sensing gesture operations and input-device input operations; based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation is selected; and the display state of the visualized report is adjusted and displayed based on the target input operation. In this application, because the visualized report can be displayed through motion-sensing gesture operations, and because standard motion-sensing gesture operations are mapped to input-device input operations, the user is able to customize the association between standard motion-sensing gesture operations and the corresponding input-device input operations. This increases interaction with the user and makes the display of the visualized report more intelligent, solving the technical problem in the prior art that manipulating a visualized report only through conventional external devices leads to a single form of interaction with the user, insufficient intelligence, and a poor user experience.
Further, the present invention provides another embodiment of the visualized report display method. In this embodiment, after the step of selecting, based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation, the method includes:
Step A1: collect the voice information of the target user, and perform recognition processing on the voice information through a prestored speech recognition model to obtain recognized content.
In this embodiment, after the target input operation is determined, the voice information of the target user is collected, and the target input operation of the target user is determined precisely through the collected voice information. Specifically, in the process of collecting the voice information of the target user, recognition processing is performed on the voice information through a prestored speech recognition model to obtain recognized content, where the prestored speech recognition model may be obtained through repeated training.
Step A2: based on the recognized content, determine the number of rows or columns by which the visualized report is to be moved for the target input operation.
After the speech recognition content is obtained, the number of rows or columns by which the visualized report is to be moved for the target input operation is determined based on the recognized content. For example, the voice information may be content such as "move left 5 rows" or "move down 10 columns", so that after the speech recognition content is obtained, it can be determined precisely that the target input operation of the target user is to move the table 5 rows to the left, or to move the table 10 columns down.
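The row/column extraction from recognized speech could be sketched as below. The English phrase patterns are assumptions (the patent only gives the "move left 5 rows" style of example, originally in Chinese), and `parse_move_command` is an illustrative name.

```python
import re

# Hedged sketch: turn recognized speech such as "move left 5 rows" into a
# (direction, count, unit) command for adjusting the report position.
_PATTERN = re.compile(
    r"move\s+(left|right|up|down)\s+(\d+)\s+(rows?|columns?)", re.IGNORECASE
)

def parse_move_command(recognized_text):
    """Return (direction, count, unit) parsed from recognized speech, or None."""
    m = _PATTERN.search(recognized_text)
    if not m:
        return None
    direction, count, unit = m.groups()
    unit = "row" if unit.lower().startswith("row") else "column"
    return direction.lower(), int(count), unit
```

Returning `None` for unrecognized phrases lets the system fall back to the gesture-only interpretation rather than guessing a movement.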
In this embodiment, the voice information of the target user is collected; recognition processing is performed on the voice information through a prestored speech recognition model to obtain recognized content; and, based on the recognized content, the number of rows or columns by which the visualized report is to be moved for the target input operation is determined. In this embodiment, while the visualized report is being displayed, the number of rows or columns by which the visualized report is moved can be determined precisely through voice information combined with the user's motion-sensing gesture operation, which improves the user experience.
Further, the present invention provides another embodiment of the visualized report display method. In this embodiment, before the step of selecting, based on the mapping relationship, the target input operation corresponding to the standard motion-sensing gesture matching the motion-sensing gesture operation, the method further includes:
Step B1: collect the movement trajectory of the target user's eye-gaze focus position, and judge whether the trajectory of the motion-sensing gesture operation is consistent with the movement trajectory.
When the visual report is displayed, the somatosensory gesture operation trajectory of the target user is obtained, and the motion trajectory of the user's gaze focus position is obtained, wherein after the motion trajectory of the user's iris is determined through biometric recognition technology, the motion trajectory of the user's gaze focus position can be determined. After the motion trajectory of the target user's gaze focus position is obtained, whether the trajectory of the somatosensory gesture operation is consistent with that motion trajectory is judged. The detailed judging process may be as follows: the target user's gaze focus position and the position corresponding to the somatosensory gesture operation are extracted synchronously, so as to obtain a time-based target relation curve between the gaze focus position and the gesture position. The target relation curve specifies the change direction, change trajectory, and so on of the target user's gaze focus position, as well as the change direction, change trajectory, and so on of the somatosensory gesture operation. Based on the target relation curve, a matching degree between the change of the gaze focus position and the change of the corresponding gesture position is obtained; this matching degree includes a direction sub-matching degree and a trajectory sub-matching degree, and whether the trajectory of the somatosensory gesture operation is consistent with the motion trajectory can be determined based on these two sub-matching degrees. Specifically, when the angle between the change direction of the gaze focus position and the change direction of the corresponding gesture position is smaller than a preset angle, the direction sub-matching degrees of the two match; when the similarity between the change trajectory of the gaze focus position and the change trajectory of the corresponding gesture position is greater than a preset value, the trajectory sub-matching degrees of the two match, wherein the similarity is determined according to a pre-stored model.
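As a rough, non-authoritative sketch of the two sub-matching checks described above (the 30° angle threshold, the 0.8 similarity threshold, and the use of mean point-to-point distance as a stand-in for the pre-stored similarity model are all assumptions, not part of the patent text):

```python
import math

def direction_angle(v1, v2):
    """Angle in degrees between two 2-D displacement vectors."""
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    if n1 == 0 or n2 == 0:
        return 180.0  # no movement: treat as non-matching
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    cos = max(-1.0, min(1.0, cos))
    return math.degrees(math.acos(cos))

def trajectory_similarity(gaze_pts, gesture_pts):
    """Crude stand-in for the pre-stored similarity model: mean distance
    between synchronously sampled points, mapped into (0, 1]."""
    dists = [math.dist(a, b) for a, b in zip(gaze_pts, gesture_pts)]
    return 1.0 / (1.0 + sum(dists) / len(dists))

def tracks_consistent(gaze_pts, gesture_pts, max_angle=30.0, min_similarity=0.8):
    """The gesture is accepted only when BOTH the direction sub-matching
    degree and the trajectory sub-matching degree match."""
    g_dir = (gaze_pts[-1][0] - gaze_pts[0][0], gaze_pts[-1][1] - gaze_pts[0][1])
    h_dir = (gesture_pts[-1][0] - gesture_pts[0][0], gesture_pts[-1][1] - gesture_pts[0][1])
    direction_ok = direction_angle(g_dir, h_dir) < max_angle
    trajectory_ok = trajectory_similarity(gaze_pts, gesture_pts) > min_similarity
    return direction_ok and trajectory_ok
```

Under this sketch, a gesture swept along nearly the same path and direction as the gaze passes both checks, while a gesture moving opposite to the gaze fails the direction check, which realizes the misoperation guard of step B2.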
Step B2: when the trajectory of the somatosensory gesture operation is consistent with the motion trajectory, execute the step of choosing, based on the mapping relations, the target input operation corresponding to the standard somatosensory gesture matched by the somatosensory gesture operation.
When the trajectory of the somatosensory gesture operation is consistent with the motion trajectory, the step of choosing, based on the mapping relations, the target input operation corresponding to the matched standard somatosensory gesture is executed; when the trajectory of the somatosensory gesture operation is inconsistent with the motion trajectory, that step is not executed, so as to avoid misoperation.
In this embodiment, the motion trajectory of the target user's gaze focus position is acquired, and whether the trajectory of the somatosensory gesture operation is consistent with that motion trajectory is judged; when the two trajectories are consistent, the step of choosing, based on the mapping relations, the target input operation corresponding to the matched standard somatosensory gesture is executed. By cross-checking against the gaze focus position, this embodiment effectively avoids possible misoperation and improves the user experience.
Further, the present invention provides another embodiment of the visual report display method. In this embodiment, the step of adjusting and displaying the display state of the visual report based on the target input operation includes:
Step S31: when the target input operation is a split-screen operating gesture, take the start position of the split-screen operating gesture as the partition separator line of the split screen;
In this embodiment, when the target input operation is a split-screen operating gesture, the start position of the split-screen operating gesture is taken as the partition separator line of the split screen. As a concrete illustration: if the start position of the split-screen operating gesture is at the fifth row of the currently displayed page of the visual report, the fifth row of the currently displayed page is taken as the partition separator line of the split screen.
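The mapping from the vertical start position of the gesture to a report row, as in the fifth-row illustration above, might be sketched as follows; the pixel-based layout, the row height, and the optional header offset are illustrative assumptions:

```python
def separator_row(start_y, row_height, header_height=0.0):
    """Map the vertical start position (in pixels) of a split-screen
    gesture to the 1-based report row used as the partition separator
    line of the split screen."""
    if start_y < header_height:
        return 1  # gesture starts above the table body: split at the first row
    return int((start_y - header_height) // row_height) + 1
```

For example, with 30-pixel rows and no header, a gesture starting at y = 135 lands in the fifth row, which then serves as the partition separator line.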
Specifically, step S31 includes:
Step C1: when the target input operation is a split-screen operating gesture, judge whether the currently displayed visual report can be displayed in separate parts;
When the target input operation is a split-screen operating gesture, whether the currently displayed visual report can be displayed in separate parts is judged; for example, when the current visual report is a multi-row, multi-column table, it can be displayed in separate parts.
Step C2: when the currently displayed visual report can be displayed in separate parts, obtain the start position of the split-screen operating gesture, and obtain the gesture direction of the split-screen operating gesture;
Step C3: determine the partition separator line of the split screen based on the start position and the gesture direction of the split-screen operating gesture.
When the currently displayed visual report can be displayed in separate parts, the start position and the gesture direction of the split-screen operating gesture are obtained, wherein the start position of the split-screen operating gesture determines the start position of the partition separator line, and the gesture direction of the split-screen operating gesture determines the separating direction of the partition separator line, so that the visual report can be divided horizontally, vertically, and so on.
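Steps C1 through C3 might be sketched as follows; the multi-row/multi-column separability rule comes from the text above, while the dominant-axis rule for choosing the split orientation is an assumption:

```python
def can_split(n_rows, n_cols):
    """Step C1 check: treat the report as separable only when it is a
    multi-row, multi-column table."""
    return n_rows > 1 and n_cols > 1

def separator_from_gesture(start, end):
    """Steps C2/C3: derive the partition separator line from the gesture's
    start position and direction. A mostly-horizontal swipe yields a
    horizontal separator (upper/lower split) anchored at the start y;
    a mostly-vertical swipe yields a vertical separator (left/right
    split) anchored at the start x."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if abs(dx) >= abs(dy):
        return ('horizontal', start[1])  # separator runs along y = start_y
    return ('vertical', start[0])        # separator runs along x = start_x
```

A single-column list, for instance, would fail the `can_split` check and no separator would be produced.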
Step S32: perform split-screen processing on the visual report based on the partition separator line, and display the split-screen-processed visual report.
Split-screen processing is performed on the visual report based on the partition separator line, and the split-screen-processed visual report is displayed; for example, at the partition row, the upper screen may display a first visual report and the lower screen a second visual report.
Wherein, after the step of displaying the split-screen-processed visual report, the method further includes:
Step S33: if a return operating gesture is detected, display the split-screen-processed visual report in full screen.
If a return operating gesture is detected, the split-screen-processed visual report is displayed in full screen, wherein the return gesture may be a gesture of fingers sliding toward each other.
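The fingers-sliding-toward-each-other return gesture might be detected as follows; the pixel threshold and the two-track input format are illustrative assumptions:

```python
import math

def is_return_gesture(track_a, track_b, min_approach=40.0):
    """Detect the return gesture described above: two finger tracks
    sliding toward each other, i.e. the gap between the fingers shrinks
    by at least `min_approach` pixels over the gesture."""
    start_gap = math.dist(track_a[0], track_b[0])
    end_gap = math.dist(track_a[-1], track_b[-1])
    return start_gap - end_gap >= min_approach
```

When this predicate fires, the device would restore the split-screen-processed visual report to full-screen display.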
In this embodiment, when the target input operation is a split-screen operating gesture, the start position of the split-screen operating gesture is taken as the partition separator line of the split screen; split-screen processing is performed on the visual report based on the partition separator line, and the split-screen-processed visual report is displayed. In this way, accurate split-screen processing is possible and the user experience is improved.
Referring to Fig. 3, Fig. 3 is a schematic structural diagram of the device in the hardware running environment involved in the embodiments of the present invention.
The visual report display device of the embodiments of the present invention may be a PC, or may be a terminal device such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a portable computer.
As shown in Fig. 3, the visual report display device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to realize connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory, or may be a stable non-volatile memory, such as a disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Optionally, the visual report display device may also include a target user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The target user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the target user interface may also include a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
Those skilled in the art will understand that the visual report display device structure shown in Fig. 3 does not constitute a limitation on the visual report display device, which may include more or fewer components than illustrated, or combine certain components, or have a different component arrangement.
As shown in Fig. 3, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, and a visual report display program. The operating system is a program that manages and controls the hardware and software resources of the visual report display device and supports the running of the visual report display program and other software and/or programs. The network communication module is used to realize communication between the components inside the memory 1005, and between them and the other hardware and software in the visual report display device.
In the visual report display device shown in Fig. 3, the processor 1001 is used to execute the visual report display program stored in the memory 1005 to realize the steps of the visual report display method described in any of the above embodiments.
The specific embodiments of the visual report display device of the present invention are essentially the same as the above embodiments of the visual report display method, and are not described in detail here.
The present invention also provides a visual report display apparatus, the visual report display apparatus including:
a first obtaining module, used to obtain, when the visual report is displayed, the somatosensory gesture operation of the target user, and to obtain the pre-stored mapping relations between standard somatosensory gesture operations and input device input operations;
a choosing module, used to choose, based on the mapping relations, the target input operation corresponding to the standard somatosensory gesture matched by the somatosensory gesture operation;
a second obtaining module, used to adjust and display the display state of the visual report based on the target input operation.
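The cooperation of the three modules can be sketched as a minimal pipeline; the class and method names, the gesture names, and the concrete display-state actions below are illustrative assumptions, not the claimed implementation:

```python
class VisualReportDisplay:
    """Illustrative pipeline for the three modules described above."""

    def __init__(self, mapping):
        # mapping: standard gesture name -> input device input operation
        self.mapping = mapping

    def first_obtain(self, raw_gesture):
        """First obtaining module: capture the user's gesture and return
        it together with the pre-stored mapping relations."""
        return raw_gesture, self.mapping

    def choose(self, raw_gesture, mapping):
        """Choosing module: match the gesture against the standard
        gestures and pick the corresponding target input operation."""
        return mapping.get(raw_gesture)  # None if no standard gesture matches

    def second_obtain(self, target_input, report_state):
        """Second obtaining module: adjust the report's display state
        according to the target input operation."""
        if target_input == "split_screen":
            report_state["mode"] = "split"
        elif target_input == "full_screen":
            report_state["mode"] = "full"
        return report_state

    def handle(self, raw_gesture, report_state):
        gesture, mapping = self.first_obtain(raw_gesture)
        target_input = self.choose(gesture, mapping)
        if target_input is None:
            return report_state  # no matched standard gesture: do nothing
        return self.second_obtain(target_input, report_state)
```

For instance, with a mapping of a two-finger swipe to the split-screen operation, handling that gesture switches the report's display mode to split screen, while an unmapped gesture leaves the state unchanged.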
The specific embodiments of the visual report display apparatus of the present invention are essentially the same as the above embodiments of the visual report display method, and are not described in detail here.
The present invention provides a readable storage medium storing one or more programs, the one or more programs being further executable by one or more processors to realize the steps of the visual report display method described in any of the above embodiments.
The specific embodiments of the readable storage medium of the present invention are essentially the same as the above embodiments of the visual report display method, and are not described in detail here.
The above is only a preferred embodiment of the present invention and does not limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and accompanying drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included in the patent protection scope of the present invention.

Claims (10)

1. A visual report display method, characterized in that the visual report display method comprises:
when a visual report is displayed, obtaining a somatosensory gesture operation of a target user, and obtaining pre-stored mapping relations between standard somatosensory gesture operations and input device input operations;
based on the mapping relations, choosing a target input operation corresponding to a standard somatosensory gesture matched by the somatosensory gesture operation;
adjusting and displaying the display state of the visual report based on the target input operation.
2. The visual report display method according to claim 1, characterized in that, after the step of choosing, based on the mapping relations, the target input operation corresponding to the standard somatosensory gesture matched by the somatosensory gesture operation, the method comprises:
collecting voice information of the target user, and performing recognition processing on the voice information through a pre-stored speech recognition model to obtain recognized content;
based on the recognized content, determining the number of rows or columns by which the visual report corresponding to the target input operation is moved.
3. The visual report display method according to claim 1, characterized in that, before the step of choosing, based on the mapping relations, the target input operation corresponding to the standard somatosensory gesture matched by the somatosensory gesture operation, the method further comprises:
collecting a motion trajectory of the user's gaze focus position, and judging whether the trajectory of the somatosensory gesture operation is consistent with the motion trajectory;
when the trajectory of the somatosensory gesture operation is consistent with the motion trajectory, executing the step of choosing, based on the mapping relations, the target input operation corresponding to the standard somatosensory gesture matched by the somatosensory gesture operation.
4. The visual report display method according to claim 1, characterized in that the step of adjusting and displaying the display state of the visual report based on the target input operation comprises:
when the target input operation is a split-screen operating gesture, taking the start position of the split-screen operating gesture as the partition separator line of the split screen;
performing split-screen processing on the visual report based on the partition separator line, and displaying the split-screen-processed visual report.
5. The visual report display method according to claim 4, characterized in that, after the step of displaying the split-screen-processed visual report, the method further comprises:
if a return operating gesture is detected, displaying the split-screen-processed visual report in full screen.
6. The visual report display method according to claim 4, characterized in that the step of taking, when the target input operation is a split-screen operating gesture, the start position of the split-screen operating gesture as the partition separator line of the split screen comprises:
when the target input operation is a split-screen operating gesture, judging whether the currently displayed visual report can be displayed in separate parts;
when the currently displayed visual report can be displayed in separate parts, obtaining the start position of the split-screen operating gesture and obtaining the gesture direction of the split-screen operating gesture;
determining the partition separator line of the split screen based on the start position and the gesture direction of the split-screen operating gesture.
7. The visual report display method according to any one of claims 1 to 6, characterized in that the step of obtaining the somatosensory gesture operation of the target user when the visual report is displayed comprises:
when the visual report is displayed, collecting a somatosensory gesture change trajectory of the target user within a preset time period;
predicting the somatosensory gesture operation of the target user based on a matrix model to which the somatosensory gesture change trajectory is mapped.
8. A visual report display apparatus, characterized in that the visual report display apparatus comprises:
a first obtaining module, used to obtain, when the visual report is displayed, the somatosensory gesture operation of the target user, and to obtain pre-stored mapping relations between standard somatosensory gesture operations and input device input operations;
a choosing module, used to choose, based on the mapping relations, the target input operation corresponding to the standard somatosensory gesture matched by the somatosensory gesture operation;
a second obtaining module, used to adjust and display the display state of the visual report based on the target input operation.
9. A visual report display device, characterized in that the visual report display device comprises: a memory, a processor, a communication bus, and a visual report display program stored on the memory,
the communication bus being used to realize a communication connection between the processor and the memory;
the processor being used to execute the visual report display program to realize the steps of the visual report display method according to any one of claims 1 to 7.
10. A readable storage medium, characterized in that a visual report display program is stored on the readable storage medium, and when the visual report display program is executed by a processor, the steps of the visual report display method according to any one of claims 1 to 7 are realized.
CN201810487442.1A 2018-05-18 2018-05-18 Visual report display method, apparatus, device, and readable storage medium Withdrawn CN109725712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810487442.1A CN109725712A (en) 2018-05-18 2018-05-18 Visual report display method, apparatus, device, and readable storage medium


Publications (1)

Publication Number Publication Date
CN109725712A true CN109725712A (en) 2019-05-07

Family

ID=66293801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810487442.1A Withdrawn CN109725712A (en) Visual report display method, apparatus, device, and readable storage medium

Country Status (1)

Country Link
CN (1) CN109725712A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553135A (en) * 2021-07-29 2021-10-26 深圳康佳电子科技有限公司 Split screen display method based on gesture recognition, display terminal and storage medium


Similar Documents

Publication Publication Date Title
US20170192500A1 (en) Method and electronic device for controlling terminal according to eye action
CN104267902B (en) A kind of application program interaction control method, device and terminal
US20170139556A1 (en) Apparatuses, systems, and methods for vehicle interfaces
US10466798B2 (en) System and method for inputting gestures in 3D scene
JP2019535055A (en) Perform gesture-based operations
CN105760102B (en) Terminal interaction control method and device and application program interaction control method
CN108762497A (en) Body feeling interaction method, apparatus, equipment and readable storage medium storing program for executing
CN103926997A (en) Method for determining emotional information based on user input and terminal
CN113568506A (en) Dynamic user interaction for display control and customized gesture interpretation
CN109034063A (en) Plurality of human faces tracking, device and the electronic equipment of face special efficacy
CN108495166B (en) Bullet screen play control method, terminal and bullet screen play control system
CN104238946A (en) Touch control method, device and terminal
CN107479691A (en) A kind of exchange method and its intelligent glasses and storage device
CN108958503A (en) input method and device
CN106104692B (en) The sequence of Highlights video segmentation
CN108052254B (en) Information processing method and electronic equipment
Sluÿters et al. Consistent, continuous, and customizable mid-air gesture interaction for browsing multimedia objects on large displays
CN111103982A (en) Data processing method, device and system based on somatosensory interaction
CN108401173A (en) Interactive terminal, method and the computer readable storage medium of mobile live streaming
CN104238746B (en) Information processing method and electronic equipment
CN109725712A (en) Visual Report Forms methods of exhibiting, device, equipment and readable storage medium storing program for executing
CN112114653A (en) Terminal device control method, device, equipment and storage medium
CN107544740B (en) Application processing method and device, storage medium and electronic equipment
US11301124B2 (en) User interface modification using preview panel
CN108989553A (en) The method, apparatus and electronic equipment of scene manipulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190507